Intel C/C++ Compiler Beats GCC

jfonseca writes: "An article from Open Magazine claims that the new Intel C/C++ compiler's geometric mean performance on multiple kernels compiled with it reached a 47% improvement over GCC. They also predict the end of the ubiquitous GNU compiler due to its low performance. Many other compiler/platform combinations are also compared. A bit pretentious, yet an interesting read."
  • GCC will live (Score:5, Insightful)

    by JanneM ( 7445 ) on Saturday January 26, 2002 @10:02AM (#2906100) Homepage
    GCC is the only compiler you can count on being present on every Linux (or BSD) system. Thus most code released is going to continue being compilable by gcc, and it's going to remain the baseline for source distribution.

    /Janne
    • Ye of little faith, I fully expect Intel to release its compiler free with source and have it bundled with every Linux distro. I'm so confident, I'll hold my breath until this happens. I'll reply to this thread when it's done.

      /me holds breath...blush...gasp!...ack!...hrmmmm
    • Re:GCC will live (Score:2, Insightful)

      by ackthpt ( 218170 )
      You can also pretty much trust GCC to work with AMD and Cyrix CPUs, not that I'm suggesting anyone might get ideas...
    • Re:GCC will live (Score:2, Insightful)

      by Anonymous Coward
      Oh gee - I'm gonna run out to get the Intel compiler for my PPC platform - uh, hmmm, doesn't seem to be one. How 'bout my Sparc? Hmmm - well, then there's that Alpha... nope - well darn it, Intel's compiler is kinda useless when it only runs on Intel in a world of multiple platforms... Fortunately, GCC just about compiles on everything with a CPU - so what if it's a little slower - that's just another donut and cup of nice coffee for me and an excuse to kick back and relax between hectic coding sessions...
      • by yerricde ( 125198 ) on Saturday January 26, 2002 @02:48PM (#2907139) Homepage Journal

        Fortunately, GCC just about compiles on everything with a CPU

        "Unfortunately," last time I checked, GCC doesn't generate code for 8086 or 80286 processors, only i386 and up, so you can't build an OS that's backwards-compatible with legacy 16-bit apps [freedos.org] with GCC.

    • by Keith Gabryelski ( 65602 ) on Saturday January 26, 2002 @12:17PM (#2906487) Homepage
      Let me get this straight.

      Intel is happy that their compiler can beat another compiler?

      I'd hope so... They designed the damn chips, had a head start, and have cash money to buy a few smart compiler dudes... you'd think they'd have enough pride to work on a compiler until it was the best it could be.

      It is interesting to see Intel pick on GCC. They are in the CHIP BUSINESS... A compiler (any compiler) helps them.

      You'd think THEY would be the ones to release a compiler into open source so they could get the rest of the world looking at how to do even more optimizations for their chips.

      GCC has been out there for well over a decade. Open to anyone to improve ... or just stare at.

      Intel could show us all how to make a better compiler. Open up their source code... but someone might improve on their techniques and that would make them sad. So, instead they berate a compiler that has done them only a service.

      Just my thoughts. Yours may vary.
    • by Anonymous Brave Guy ( 457657 ) on Saturday January 26, 2002 @01:40PM (#2906850)
      GCC is the only compiler you can count on being present on every Linux (or BSD) system. Thus most code released is going to continue being compilable by gcc, and it's going to remain the baseline for source distribution.

      Is that a good thing? Internet Explorer dominates the web browser market for much the same reasons, whether or not there are better alternatives available. Now we have a proliferation of web sites that only work with IE instead of standard HTML, and all the other well-documented problems.

      It would be an advantage for the Linux world if it was easier to port code from other platforms. Most of that code isn't written with GCC, it's written with VC++, C++ Builder, CodeWarrior, etc. If you're going to do this, standards compliance and ease of portability are very important.

      I don't know how good GCC is these days; it used to have quite a good reputation for standard compliance and quality of generated code, but that was a couple of years back. If it hasn't kept up -- I said "if", because I don't think this article demonstrates that either way -- and the Linux community religiously stick with it based on philosophical arguments rather than technical merit, surely they'll just be shooting themselves in the foot?

  • Not open source (Score:4, Insightful)

    by Nate B. ( 2907 ) on Saturday January 26, 2002 @10:03AM (#2906104) Homepage Journal
    The article states that the compiler itself is not open source. So how is this going to cause it to be chosen over GCC when it (probably) won't be distributed to the same degree as GCC?

    Q: Why do people use MS-Office?
    A: Because it's there.

    Q: Why will people use GCC?
    A: Because it's there!

    Same concept, really. Most Free Software will continue to be built with GCC until Intel releases this compiler under the GPL, performance notwithstanding.
    • Re:Not open source (Score:2, Interesting)

      by Sunda666 ( 146299 )
      Compilers like this will be used mainly by performance freaks, like AutoDESK (does it still exist?), Adobe and folks that make CPU-intensive apps and have the cash to pay for an improved compiler to get an edge over the competition (see? photoshop xxx rotates a picture 11 degrees faster than paintshop yyy...).

      Being a user of a special compiler myself (SDS's cc68000), I think that this kind of compiler has a niche market, and the good old GCC is still the most widely used, and probably will be for ages.

      Although, if this new intel compiler does ELF, I can see MandrakeSoft or RedHat building their RPMs (glibc too - yay!) with it and claiming they are faster than the competition ;-).
    • by fireboy1919 ( 257783 ) <rustyp@NoSpam.freeshell.org> on Saturday January 26, 2002 @10:28AM (#2906179) Homepage Journal
      ...at a conference I went to on computer vision. You see, Intel also has an optimized computer vision library. They began their talk for the day with the statement "Intel is in the business of selling chips. However, everyone already has enough processor power to do word processing and that sort of thing. We need people to make more complicated applications so that these faster chips we come out with are worthwhile."
      I imagine that similar logic applies to their compiler: they give it away for free (the binary version, so they can control it), but build in the hooks that make it work faster with their newer chips than with the competition's, while at the same time encouraging people to write more CPU-intensive programs because they have the power to do so.
      Ultimately, they succeed at their real goal: to sell more chips. By the way, AFAIK, Intel still gives away its compiler in binary form, though only for Windows. Of course, the last time I checked was a year and a half ago...
    • by Karma Star ( 549944 ) on Saturday January 26, 2002 @12:19PM (#2906493) Journal
      We use Kai (or KCC) at work, and it is truly a remarkable product. It's a two-stage compiler - it generates C code native to the platform you're working on, then calls the compiler/linker for that platform to compile it. The idea here is that the native compilers for any given platform will be able to optimize for that platform. So Kai just optimizes what it can and dumps it out into C code, then calls the native compiler/linker with optimizations on to recompile that code into the executable. The Kai C++ compiler was ported to Windows, Solaris, Linux, IRIX, etc., so there was the added benefit that any code written with Kai in mind would compile easily across other machines.

      Kai and GCC are very similar in concept, except that Kai was a bit of a "higher end" compiler. Think of GCC as the Toyota of compilers, and KCC as the Lexus. You may pay the extra bucks just for the optimization/cross-platform abilities that Kai has - but most normal folk don't need it.

      Unfortunately, Kai got purchased by Intel, and (from what I see on their site [kai.com]) they seem to be dropping the other platforms to support only Intel. The Intel compiler is really the Kai compiler, but only for Intel. In fact, Kai (Kuck and Associates Inc.) is now part of Intel. Personally, I think this sucks, since Kai really is a superior product compared to any other C++ compiler out there, if you're willing to pay the extra $$$...
  • by Rushuru ( 135939 ) on Saturday January 26, 2002 @10:03AM (#2906106)
    As long as Intel's compiler is not GPLed and does not support as many architectures as gcc does, I don't think gcc will vanish any time soon.

    Plus, every time benchmarks showed that a closed source product was faster/better than the free software counterpart, the open source community worked hard to improve its champion.

    gcc is dead, long live gcc
    • by nusuth ( 520833 ) <oooo_0000us.yahoo@com> on Saturday January 26, 2002 @10:19AM (#2906147) Homepage
      Gcc's speed sucks because of its fundamental design focus on supporting as many languages as possible and being available on as many platforms as possible, instead of optimizing for a particular language on a particular platform. Although gcc's code generator could use a lot of improvement, its speed can't be taken to the levels possible with tailored compilers. If an open source alternative that can beat Intel's compiler comes along, it won't be based on gcc.
      • As someone who spent about twenty years of his life as a professional compiler writer, no, that's not the problem. My company marketed compilers for Pascal, Modula-2, C, C++ on the PDP-11, Vax, M68K, Sparc, N3200, MIPS and x86 (not all languages on all platforms). All based on a single technology.

        We routinely beat the system vendor's offerings on benchmarks and (more importantly) real programs.

        And we went broke a decade ago, oh well. Compilers became a commodity and we didn't figure out the consequences in time...
    • by Ace Rimmer ( 179561 ) on Saturday January 26, 2002 @10:27AM (#2906175)
      Exactly, there is something missing in the article:
      • Intel's compiler does not support anything but x86 (gcc is far more general). This is an advantage for Intel's compiler, since they don't have to worry about improvements that would break the result on another CPU arch.
      • They ran synthetic benchmarks, which are often misleading (if I took gunzip as part of my test, slight modifications to just 3 instructions within the main loop would certainly give me very different results; a sketch follows below).
      • What gcc version did they measure? GCC 3.x (which they obviously haven't used) has at least a 10% performance boost over the old 2.95 on average (it may differ a lot for specialized tasks).

      This sounds like a somewhat biased comparison - even though I think that Intel's compiler is indeed better at x86 optimization - most gcc developers would confirm this...
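      To make the gunzip point concrete, here is a hypothetical toy kernel (names and code invented for illustration, not taken from the article): two variants of the same byte-summing loop, where the second merely splits the accumulator so the CPU can overlap the additions. On a benchmark built from such loops, a three-line change like this can move the score far more than any real application would ever notice.

          // variant A: one accumulator, the adds form a serial dependency chain
          unsigned sum_bytes(const unsigned char *buf, int n)
          {
              unsigned s = 0;
              for (int i = 0; i < n; ++i)
                  s += buf[i];
              return s;
          }

          // variant B: two accumulators, so the two adds can execute in parallel
          unsigned sum_bytes2(const unsigned char *buf, int n)
          {
              unsigned s0 = 0, s1 = 0;
              for (int i = 0; i + 1 < n; i += 2) {
                  s0 += buf[i];
                  s1 += buf[i + 1];
              }
              if (n & 1)
                  s0 += buf[n - 1];  // pick up the odd trailing byte
              return s0 + s1;
          }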
  • Is it surprising? (Score:5, Interesting)

    by Chazmati ( 214538 ) on Saturday January 26, 2002 @10:05AM (#2906108)
    I guess I'm not that surprised that the corporation that designed the CPU would produce a more optimized compiler.

    The interesting thing is that the Intel compiler's code ran at 'virtually identical' speeds on an Athlon.
    • by Shiny Metal S. ( 544229 ) on Saturday January 26, 2002 @10:27AM (#2906177) Homepage
      I am, however, surprised by one thing: Why doesn't Intel try to improve GCC itself? They sell hardware, after all, and it would really benefit them if they could say "Our 1GHz CPU is 40% faster than AMD's 1GHz CPU using the standard GCC compiler." Intel should want every compiler on Earth to use their optimizations (so should AMD and others), not only their own compiler.
      • by DGolden ( 17848 ) on Saturday January 26, 2002 @10:53AM (#2906244) Homepage Journal
        Because they could inadvertently improve gcc's output on non-Intel architectures, perhaps? GCC compiles to an intermediate form called "RTL" first, and thus some of Intel's higher-level optimisations could end up improving gcc PPC code, say. And that would be a bad thing for Intel, in the same way that you don't see Microsoft coders adding completion ports to Linux to improve Linux server I/O.

        (although, a lot of Intel's optimizing is probably due to their knowledge of the arcane, baroque, and just plain stupid x86 architecture, and thus would not be applicable to saner CPU archs like... virtually anything else currently available.)
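        For the curious, the RTL in question can be dumped and read directly. A minimal sketch, with the caveat that the exact switches vary by GCC release (the 2.95/3.x era used single-letter -d flags; later versions spell it -fdump-rtl-*):

            /* add.c - compile with something like "gcc -O2 -dr -c add.c",
               which on GCC of this vintage leaves the RTL dump in add.c.rtl;
               newer GCCs use -fdump-rtl-expand instead. */
            int add(int a, int b)
            {
                return a + b;
            }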
        • While you're right that some of the optimizations are higher-level, a lot of the stuff GCC doesn't currently optimize is instruction pairing, data prefetching, etc...

          That is a lower-level backend issue and won't seriously improve the other backends at all.

          The OP has a point though: by upgrading GCC instead of making their own compiler, Intel would give more people access to a compiler that generates better-tuned code for their processor.

          For instance, say I have a PIII [I don't, but let's say I do]. I write code and build it with GCC. I go out and buy a PIV [roughly the same clock rate] and notice that my code is not significantly faster. I get pissed off...

          However, if I buy the PIV and tell GCC to use PIV-specific optimizations, my code turns out faster and I am happy.

          Tom
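          A sketch of that scenario as compiler invocations; treat the exact -march spellings as assumptions, since the pentium4 target only exists in sufficiently new GCC releases:

              /* hot.c - the same source, tuned per CPU:
                   gcc -O2 -march=i686     hot.c   (generic, runs on any recent x86)
                   gcc -O2 -march=pentium4 hot.c   (P4 scheduling, SSE/SSE2 allowed)
              */
              float dot(const float *a, const float *b, int n)
              {
                  float s = 0.0f;
                  for (int i = 0; i < n; ++i)
                      s += a[i] * b[i];   /* a candidate for SSE with the right target */
                  return s;
              }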
    • Re:Is it surprising? (Score:3, Interesting)

      by eldrich ( 157931 )
      Funnily enough, when we tested the Intel Fortran compiler against the Portland Group Fortran compiler (Fortran here means numerically intensive double precision code), the Portland compiler produced the faster code. Here we bought Portland and use it on the Beowulf frontend to compile the actual cluster executables, and use g77/gcc on all the development workstations. g77/gcc is not particularly fast but it is available and very, very useful. We evaluated the Intel compilers but found more problems in going from Intel to Portland than from g77 to Portland. GCC will never die; it is far too useful.
  • But GCC's free... (Score:2, Interesting)

    by PoiBoy ( 525770 )
    While the article did provide evidence that Intel's compiler produces faster code, it neglected to mention one small difference between gcc and Intel cc: gcc is free, while a single-user license from Intel is $499.

    While software firms and organizations developing mission-critical programs may decide to switch to icc, the fact that gcc is free will help it to remain popular among hackers and other budget-constrained users. Moreover, most of the source code programs one downloads for Linux are designed to be compiled with gcc.

  • by entrox ( 266621 ) <slashdot AT entrox DOT org> on Saturday January 26, 2002 @10:06AM (#2906115) Homepage
    I was always under the impression that a vendor-supplied compiler would almost always out-perform a generic cross-compiler that is available on many more platforms. GCC is all fine and dandy, but it shines in aspects other than pure optimization and fast code (it may be faster on some architectures than the vendor-supplied compiler, but that's not my point). The x86 code it produced was always sub-optimal. Because of that, projects like pgcc [goof.com] exist(ed).
  • It seems that they've been testing the performance of some code blocks they've called 'kernels', for some obscure reason.

    That makes the test useless - if they had compiled some Linux kernel with, say, GCC, MSVC and their own compiler, that's where the real results would come from. Needless to say, they couldn't do that.

    And their (surely optimized) "kernels" run faster when compiled by their own compiler. Bah! No surprise.

    Conclusion: this is an unfair comparison, and the results of the test say nothing.
  • by wolruf ( 30926 ) on Saturday January 26, 2002 @10:08AM (#2906120)
    I really hope Mozilla will soon compile with Intel's compiler so we can see how it compares with GCC, as we cannot yet compare OS/compiler combinations (Win32 builds use Visual C++, Unix builds use GCC most of the time):
    evaluate Intel's C Compiler [mozilla.org]
  • wtf? (Score:5, Interesting)

    by glwtta ( 532858 ) on Saturday January 26, 2002 @10:09AM (#2906123) Homepage
    will stay the hammer that drives a stake through the fibrillating heart of the aging technology behind the GNU C compiler

    Could this be more full of itself? Somehow I have trouble accepting sweeping generalizations about the fate of compiler technology from someone who obviously dropped out of a creative writing program at some third-rate school.

  • by Jacek Poplawski ( 223457 ) on Saturday January 26, 2002 @10:11AM (#2906126)
    Sometimes I analyze the assembler code produced by gcc-2.95.3 and I am disappointed. Gcc can do stupid things like placing an instruction inside a loop instead of outside it. Probably gcc3 is fixed, but I heard it still produces slow code (not faster than gcc-2.95.3). So which compiler should I use today?
    I know gcc3 is better because it supports more platforms, but what about speed improvements? To have a fast inner loop in a Linux application I must code that loop in assembler. That is a problem for someone who's creating a computer game.
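    For illustration, this is the kind of loop-invariant code in question (a made-up example, not from any real program): whether the division below is computed once outside the loop or redundantly on every iteration is exactly the sort of thing that differs between compiler versions and optimization levels.

        void scale(float *v, int n, float a, float b)
        {
            for (int i = 0; i < n; ++i)
                v[i] *= a / b;   /* a/b never changes inside the loop; a good
                                    compiler hoists it out and computes it once */
        }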
    • by Shiny Metal S. ( 544229 ) on Saturday January 26, 2002 @10:40AM (#2906213) Homepage
      Sometimes I analyze the assembler code produced by gcc-2.95.3 and I am disappointed. Gcc can do stupid things like placing an instruction inside a loop instead of outside it.
      Instead of being disappointed, you should talk about it on the GCC mailing lists [gnu.org] or even submit a patch. This is how GCC [gnu.org] evolves. If you are skilled enough (and I suppose you are, if you can read and understand optimized assembly), and if you suffer because of low performance, then you should act instead of just being disappointed. Remember that GCC [gnu.org] is free software [gnu.org]; you can improve it.
    • Another reason 3.x is better is that it supports C++ better. Template handling is much more efficient, and using-declarations can be used to allow Parent::do_this(int) and Child::do_this(char) to coexist in Child, instead of Child::do_this(char) making Parent::do_this(int) inaccessible.
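      A minimal sketch of that overload-hiding fix, using the Parent/Child/do_this names from above (bodies stubbed out to keep it self-contained):

          struct Parent {
              void do_this(int) {}
          };

          struct Child : Parent {
              using Parent::do_this;   // without this line, do_this(char)
              void do_this(char) {}    // would hide the inherited do_this(int)
          };

          void demo(Child &c)
          {
              c.do_this(42);    // resolves to Parent::do_this(int)
              c.do_this('x');   // resolves to Child::do_this(char)
          }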
  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Saturday January 26, 2002 @10:12AM (#2906128) Homepage
    • It's not free software.

    • The results do not involve

      geometric mean performance on multiple kernels compiled through it reached 47% improvement over GCC.


      The testing didn't involve compiling kernels at all.

      The 47% performance improvements were on a numerically intense benchmark program (a sketch of how such a geometric-mean figure is computed follows below).

    • This helps users of PPC, Alpha, and StrongARM exactly how?



    The preferences of the article's authors are pretty clear:

    "Nonetheless, the magnitude of the performance differential in numerically intense applications is such that only the most dramatic sort of improvement in the long-awaited version 3 of the GNU C/C++ compiler will stay the hammer that drives a stake through the fibrillating heart of the aging technology behind the GNU C compiler. May it rest in peace."


    These are not the words of objective observers, and such comments strike me as being quite irresponsible.
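    For readers wondering what a "geometric mean improvement" even means here: the per-benchmark speedup ratios are multiplied together and the n-th root is taken, as in this sketch (the ratios are invented to land on 1.47 for illustration; they are not the article's data):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double ratio[] = { 1.10, 1.30, 1.60, 2.04 };   /* icc/gcc speedups */
            int n = sizeof ratio / sizeof ratio[0];
            double product = 1.0;
            for (int i = 0; i < n; ++i)
                product *= ratio[i];
            /* prints about 1.47, which gets reported as "47% improvement" */
            printf("geometric mean: %.2f\n", pow(product, 1.0 / n));
            return 0;
        }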

    • by anonymous loser ( 58627 ) on Saturday January 26, 2002 @11:33AM (#2906357)
      • You should say "open source," because saying it isn't free is ambiguous even in your mind, and downright wrong in an accountant's mind. This isn't the show-stopper you seem to imply it is. People have been using and will continue to use closed-source compilers for many, many years. Take a look at the popularity of VC++, Watcom, Borland, etc. in spite of the free (as in open source and $$$) availability of GCC on Windows.
      • No, they didn't compile kernels. They compiled (and tested) ON multiple kernels. Don't you feel silly now, contesting so loudly a point you misinterpreted? I happen to do a lot of engineering design and analysis, and work with people who would be quite interested in saving nearly 50% of their computation time when performing analysis, especially when some analyses take as much as 3 days of computation time. This translates directly to a very large and real cost savings for a company.
      • Well, perhaps it will allow the GCC compiler folks a glimpse into some of the optimizations Intel managed (by studying the output produced; a sketch of that follows below), which will in turn allow the GCC writers to rethink GCC's optimization strategies. Those improvements would hopefully benefit more platforms than just Intel.


      Next time the zealot in you decides to come raging out, take a deep breath and count to 10. Think about how this news might be good for the open-source community before you begin bashing wantonly.
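      The output-studying mentioned above, sketched; this assumes icc accepts the usual -S switch:

          /* kernel.c - emit and compare each compiler's assembly:
               gcc -O2 -S kernel.c -o kernel.gcc.s
               icc -O2 -S kernel.c -o kernel.icc.s
             then diff the two listings to see where the optimizers part ways. */
          double sum(const double *a, int n)
          {
              double s = 0.0;
              for (int i = 0; i < n; ++i)
                  s += a[i];
              return s;
          }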

      • * You should say "open source," because saying it isn't free is ambiguous even in your mind, and downright wrong in an accountant's mind. This isn't the show-stopper you seem to imply it is. People have been using and will continue to use closed-source compilers for many, many years. Take a look at the popularity of VC++, Watcom, Borland, etc. in spite of the free (as in open source and $$$) availability of GCC on Windows.

        You're comparing apples and oranges when you try to argue that since Windows people use closed-source compilers, then Linux people will also. These are completely different groups of people, and I suspect that plenty of people in the Linux community will start using a closed-source compiler when they pry the gcc source from their cold, dead hands.

        * No, they didn't compile kernels. They compiled (and tested) ON multiple kernels. Don't you feel silly now, contesting so loudly a point you misinterpreted?

        Christopher wasn't the misinterpreter, Slashdot was. Did you read the text he quoted? "the new Intel C/C++ compiler's geometric mean performance on multiple kernels compiled through it reached 47% improvement over GCC." You cleverly omitted the bolded text.

        Next time the zealot in you decides to come raging out, take a deep breath and count to 10. Think about how this news might be good for the open-source community before you begin bashing wantonly.

        Chill. Chris wasn't being a zealot, he was simply offering counterarguments to the ridiculous claim that Intel's closed-source, x86-only, C/C++ only (I bet) compiler spells death for GCC.
    • Compaq released Linux versions of their C/C++ compiler suite for the Alpha chip a few years ago. A few shops adopted it for their internal applications. If I recall correctly, one of the digital visual effects firms compiled their renderer with it and got a 5% to 10% improvement.

      However, GCC is universal. It runs on everything, targets anything and costs nothing. Nothing in terms of both Beer and Speech.
    • You have to understand benchmarking people. When they say kernels they mean benchmarking kernels: small, contained programs that extract key loops or algorithms from larger programs.

      They have a suite of key loops, each important to someone (fft, lloops, matrix mul, 3d geom, etc.), and determined that in general Proton (the internal codename) is much faster than GCC. Of course it is.

      If only it were free. Unfortunately, Intel built that compiler on several other companies' IP and can't release the source.
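      A benchmarking "kernel" in that sense is typically no more than this (an illustrative matrix-multiply kernel, not one of the article's actual tests):

          #include <stdio.h>
          #include <time.h>

          #define N 256
          static double a[N][N], b[N][N], c[N][N];

          int main(void)
          {
              clock_t t0 = clock();
              for (int i = 0; i < N; ++i)          /* the one key loop being timed */
                  for (int j = 0; j < N; ++j)
                      for (int k = 0; k < N; ++k)
                          c[i][j] += a[i][k] * b[k][j];
              printf("%.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);
              return 0;
          }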
  • by eXtro ( 258933 ) on Saturday January 26, 2002 @10:12AM (#2906133) Homepage
    It's not terribly surprising that Intel can make a more efficient compiler than gcc. They've got a performance group whose sole job is to understand the architecture and how best to exploit it. I can't read the article (it's not coming up presently), but it's hard to compete against a free product for most users, especially in the Linux community. For some users the increased performance will be very important, especially people who write applications that crunch numbers. I'm not talking about gimp filters or spreadsheets; sure, they crunch numbers, but most of their CPU time is spent either idle or on user input. Remember, on Linux gcc is both free and everywhere, and it does a good enough job for most people. Most people run systems that just use precompiled packages, and these packages are often targeted at generic i386 processors. If people don't even bother exploiting features that gcc already has, I don't see how you can predict that enough people are going to switch to another compiler to even register on the radar.
  • I couldn't find a date on the article. But it says only a "dramatic improvement" in the "long-awaited" GCC 3 will change things...

    In fact GCC3 was out some time back and it seems typically (to me) to perform around 20% more slowly than Intel's thing (compared to 40% for older gcc, as the article says). It's not so bad, imo.

  • by glwtta ( 532858 )
    At this point, it's also important to note that the Intel compiler on both platforms is a little more pedantic than the default settings for either the GNU or the Microsoft compilers. Marginal error conditions that are dismissed by the other compilers are reported as warning level issues by the Intel compiler.

    I don't get it - why is that important?

    But that's a minor issue; the important part is that I'm convinced - I am throwing my gcc out the window and paying half a grand for Intel C/C++... (somebody needs to come up with a damn "roll eyes" smiley)

    • Hmmm. Has anyone else noticed Watcom C++ considering going open?

      Personally, I think open-source will only have arrived when there's a *choice* of equally good free compilers available, and I can use any of them to compile the Linux kernel and NetBSD userspace together for my Psion. That's where the portability aspect really comes in - it's "theoretically possible" now with GCC, so come on Intel, catch up!

      Oh, and the smiley you seek is i ._ ! _. i

      ;8)
    • "...the Intel compiler...is a little more pedantic than the default settings for either the GNU or the Microsoft compilers."

      I don't get it - why is that important?

      Warnings should never be swept under the carpet, they should always be dealt with. They have a habit of biting later on, particularly when switching between architectures.

      One thing I always hate about linking with many third-party libraries is the way they often require dubious casting which generates warnings. I like nice clean code, and I like my compilers to wear jackboots when dishing out warnings.

      Cheers,
      Ian

  • Does this mean? (Score:2, Insightful)

    by soulhuntre ( 52742 )
    That maybe the Open Source model isn't the be-all and end-all? Remember in the old days, when everyone believed that if you took the money out of programming you somehow magically got software that was faster/better/more innovative and bug free? Not. I think Open Source is doing a good job with what they are working on - but this cult-like mentality has to go. Of course, the most important thing to most people is really the "free" (beer) part. You mean they will give me stuff for free and feed me a philosophy that lets me pirate everything else I want in the name of "freedom"? WOW! SIGN ME UP! But after the glow fades:
    • GCC isn't the world's best compiler
    • Linux not only crashes and has bugs, but some of them are caused by ego clashing and political tension... AND there are zealots who will try and cover them up.
    • Mozilla (the shining jewel of Open Source) is years late and many dollars short of beating IE.
    • Loki is dead, and so goes the myth that Linux is a market that is large and willing to buy. They listened to the Linux zealots and got screwed.
    • Slashdot is squelching topics and moderators are abusing their power - so there goes the myth of the open minds of the Open Source community. The dream is gone and good riddance.
    In the end, I like and support the Open Source world... I think amazing things have happened... But you're much better off when you realize it is simply another dynamic - it is not the best one and it certainly isn't the only one. Fight tyranny and repression... read /. at -1!
    • Re:Does this mean? (Score:4, Informative)

      by glwtta ( 532858 ) on Saturday January 26, 2002 @10:24AM (#2906161) Homepage
      Since when is Mozilla the shining jewel of Open Source? Just curious.

      I'd agree with the gist of what you are saying, but some of your bullet items are just oversimplified, overstated bollocks.

    • Re:Does this mean? (Score:4, Insightful)

      by GypC ( 7592 ) on Saturday January 26, 2002 @10:43AM (#2906221) Homepage Journal

      Awww, did you get your widdle bubble burst?

      • No one ever said anything about taking the money out of programming. I'm sure Red Hat programmers enjoy a decent salary. Much of Free software is hobby work, done for love.
      • Most of the true believers in Free software do not condone piracy at all. That would be your typical Windows user or Slashdot bigmouth you're thinking of.
      • GCC probably is the world's best compiler if you put correctness and platform independence ahead of speed.
      • Any OS crashes, building a stable system is a system administration skill. There are extremely stable combinations available.
      • Mozilla already rocks all over IE, IMNSHO.
      • Most businesses fail in the first couple years, especially in the current recession.
      • Slashdot does not represent any community but its own.

      Your bitterness is unbecoming. Slamming the good work of people in the Linux, Mozilla, and GCC projects because of your sudden realization that all of your juvenile misapprehensions are not 100% correct is a mark of poor character.

    • Re:Does this mean? (Score:5, Insightful)

      by Chanc_Gorkon ( 94133 ) <gorkon&gmail,com> on Saturday January 26, 2002 @11:03AM (#2906276)
      While I will agree with some of the infighting bullcrap,I disagree with your bullets.

      Sure, GCC may not be the best compiler on the face of god's green earth, but to me, if I were running a project with tight money constraints and I had to choose between a $499 compiler and a $0 compiler, and the only advantage of the expensive one was that it's faster, then unless I absolutely had a better reason for paying the $499, I would have to choose gcc. For one, gcc is well known, in common use, and everything works pretty well on it. I am not saying it doesn't have its caveats, but to me, compile speed means diddly. Personally, I would rather take the slow one because it gives me more time to drink my coffee while waiting on a compile! :) Well, no, I would rather have the fast one, but I am a poor married man with a kid and have no money to waste on a compiler when I can get one for free.

      Yeah, Linux crashes... so do Windows, Solaris, FreeBSD, OpenBSD, z/OS, DOS/VSE, BeOS, Windows CE, PocketPC, PalmOS... get what I am getting at? And as far as covering these crashes up, I JUST don't see this happening, at least in the Linux realm. Oh, and anyone trying to contradict what I am saying about other OS's crashing needs to be smacked in the head with a massive Clue stick. There has almost never been and probably never will be an uncrashable OS. There has almost never been a completely bug-free piece of code. Sure, some others in the list are better and crash MUCH less than Windows (Linux fer sure, as well as about 90 percent of the list), but to say that people are trying to cover Linux crashes up is BS!

      Mozilla not delivering? Where the heck have you been? Mozilla, as of late, is tons better than it was. And with the earlier post that Mozilla will also support anti-aliased text - well, besides Konqueror, I see no one else that competes, and surely not that bug-ridden, crash-prone piece of filth called Netscape 6 (or 4.78 for that matter...). I know it's been updated since 6.0 came out, but heck, it was based on a sub-point-1 release of Mozilla, and even Mozilla was better than Netscape 6 when 6 came out! Plus there's that AOL/Time Warner-FILLED bookmark list that installs with it and... well, people should just download Mozilla and fergeddabout Netscape. I am not saying this because I am an Open Source Zealot (because I ain't). Mozilla is doing well, and who gives a rat's rearend if it was late? I personally don't care if crap is late... I care if it works. Case in point: one could say the rewrite of Enlightenment is late, but I just think that raster and mandrake (if he even works on E anymore) are trying to make E the best GUI they can by not only making it a GUI but a shell as well.

      Loki is dead. Long live Linux. Listen, most people are not like us. They use their computers as tools, not gaming machines. Joe Sixpack will ask if it's being made for the PS2, not for the PC. Now that's not to say gaming on Linux isn't important. It is, but just because Loki has died doesn't mean there won't ever be games on Linux, commercial or free. There are lots of great free games for Linux: Armagetron, GLTron, Tux Racer, Rocks and Diamonds, Maelstrom, and the list goes on. Sure, they might not be 3D shooters, but then there's Quake for that. Also, the time is ripe for a new gaming shift. MMORPG and cookie-cutter 3D shooter games can only last so long before something, anything, comes to take their place. Right now is the time for a truly innovative game to come out and steal the show. Oh, and Wolfenstein and Doom 4 won't be it... they'll just be more 3D shooters.

      Slashdot is censoring... well, I doubt it. They aren't censoring. I can post anything I want under each new topic. If you're talking about story submissions, well, when you are on the other end getting all of the submissions, and a vast number are either duplicates, trolls or worse, you start to develop a finely tuned BS detector that can sometimes be faulty. You can usually filter out most BS, but sometimes some falls through and gets posted. Rob, Jeff, Chris and Neal are human, you know.
    • Re:Does this mean? (Score:4, Insightful)

      by LinuxParanoid ( 64467 ) on Saturday January 26, 2002 @11:08AM (#2906292) Homepage Journal
      Look, if you want to condemn the "cult-like" mentality, stop perpetuating it. Specifically by avoiding broad unsubstantiated claims taking on the tone of religious fervor.

      Like "GCC isn't the world's best" (best at what, pray tell? speed? ubiquity? price? portability?)

      Like "everyone believed that if you took the money out of programming somehow you magically got software that was faster/better/more innovative" (like those Open Source guys who said it was OK to make money?)

      Like "give me stuff for free and feed me a philosophy that lets me pirate everything" (conflating piracy with free software, not recognizing the legitimate desires of people to legally have more control of what they get, and legally paying less?)

      Open source is not a panacea. It's a way of licensing technology whose strengths and weaknesses will be more and more recognized over time, but whose pre-eminent virtue of providing greater freedom will offer increasing benefits as software monopolies continue to increase their control and prices so that they can keep their share price going up.

      Open Source also has one other long-term, difficult to refute benefit. The fact that Microsoft can't forever grow the software market and must illegally leverage its way into adjacent communications markets (MSN, VoIP), media markets (Slate, Corbis) and consumer services markets (Expedia) is still mostly being glossed over as premature. But it is not being ignored.

      --LinuxParanoid, who doesn't think these Linux guys are paranoid enough... ;)

    • Re:Does this mean? (Score:5, Informative)

      by wytcld ( 179112 ) on Saturday January 26, 2002 @11:08AM (#2906293) Homepage
      GCC isn't the world's best compiler
      As a sysadmin who often compiles packages, but doesn't write them, all I care is that ./configure;make;make install produces the desired results. Since I'm always multitasking anyhow over several machines, what do I care if a different compiler would make a 5 minute compile 4 minutes, if the end result - as it is with gcc - is a program that runs and runs well.

      Mozilla (the shining jewel of Open Source)
      BS. Konqueror is better, and KDE and Gnome the shining jewels, after Apache of course. (Sendmail? Bind? Proftpd? PHP? - not jewels perhaps, but great workhorses.)

      Loki ... listened to the Linux zealots and got screwed
      So sad, Linux may never be primary platform for gaming. I could care. And my Toyota will never enter the Indy 500.

      Slashdot ... dream is gone and good riddance.
      If you don't like the moderation, set up your own board and invite in only folks you agree with. /. works for me - what gets modded up is generally what I end up agreeing is most worth reading.

    • by markj02 ( 544487 ) on Saturday January 26, 2002 @11:34AM (#2906361)
      You have a fundamental misunderstanding of what open source is all about. Open source is not about producing the "best" software; it is about producing a variety of software that people can pick and choose from and adapt to their own needs. GNU C may not be the compiler I need, but unlike Intel's or Microsoft's compiler, I can hack GNU C and make it fit my needs.

      Now, as for GNU C and benchmarks, GNU C has never produced the fastest code on any platform. Unless you lived under a rock and never did any high performance computing, you'd know that. And if you took the time to look at the GNU C documentation, you'd also know that this is no accident. But to most GNU C users, this fact never mattered. GNU C generates decent code and it has many other attributes that make it the "best" compiler for many applications.

      You see, there is another misunderstanding that you and Bill Gates share: you think that there is a single "best" solution to everything. In real life, there isn't. What is "best" for you isn't necessarily "best" for me, and there may well be no way to reconcile our conflicting needs in the same piece of software.

      I do agree that Slashdot moderation tends to exclude voices like yours and I think that's wrong. Why? So that one can point out how uninformed and confused you actually are.

  • by StarBar ( 549337 ) on Saturday January 26, 2002 @10:21AM (#2906151) Homepage Journal
    I once worked in a company making compilers for embedded systems, and debuggers. At one point we were outperformed by 300-400% by a competitor. Six months later we were twice as fast as they were on that specific benchmark. I wonder how many customers actually used that for anything useful? The hard parts are actually debugging and portability. And for speed, I would say that a profiler can work miracles in finding the hotspots that need optimizing. Just hand-optimize those spots and you are doing fine with your favourite tools.

    Also, with optimization come compiler bugs (i.e. the compiler generates faulty code) that are very hard to find, especially if you don't have the source to your compiler.

    Finally, I think Intel just wants to capture customers as they did with their compilers in the early '90s (i.e. PL/M and Intel C). It's just not in their interest to be portable. With all this in mind, such compilers could be good for a specific project, but I'd be careful about building anything on highly optimizing compilers in general rather than on a sound design.
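    A concrete example of how aggressive optimization surfaces such bugs (strictly speaking a bug in the code, not the compiler, but indistinguishable from a compiler bug when it bites): type punning that violates aliasing rules, which GCC's own -fstrict-aliasing (enabled at -O2 since the 2.95 era) famously exposed.

        #include <stdio.h>

        int store_and_read(int *i, float *f)
        {
            *i = 1;
            *f = 0.0f;    /* if f actually aliases i, this clobbers *i ... */
            return *i;    /* ... but an optimizer assuming no aliasing may
                             still return 1 here */
        }

        int main(void)
        {
            int x;
            /* undefined behavior: works at -O0, may "break" at -O2 */
            printf("%d\n", store_and_read(&x, (float *)&x));
            return 0;
        }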
  • Architecture dedicated compilers may well be faster, but gcc's performance has already reached a 'good enough' stage.

    What's more important is that gcc provides features that are absent from all the other compilers: gcc works on virtually any architecture, offers a platform that is stable in its functions, a unique interface to low-level features (such as building calls dynamically), as well as very good extensions. It demonstrates how free software can offer a standard, and not be afraid of 'innovating.'

    So, intel/dell/sun's compilers may have their place, but they don't play in the same category as gcc. They're useful for dedicated performance apps, or things like games.
  • Well... (Score:2, Informative)

    by Krapangor ( 533950 )
    ...the gcc isn't the most 31337 compiler out there. I've seen gcc slow programs on an 800 MHz Athlon down to the performance of a 400 MHz Pentium III (II?) (scientific apps).
    But isn't the main strength of gcc its cross-compiling abilities? I've never heard of any other compiler supporting so many platforms.
  • If someone needs hyper-optimized code for (Intel) x86 only, sure, this compiler rocks. It's a big world out there, however, and Intel chips comprise only a small percentage of the microprocessors on this planet. The bottom line is that regardless of performance, gcc is still the most portable compiler anywhere, and for a lot of people this is more important. After all, Microsoft takes advantage of the fact that computers get faster and cheaper, and so will gcc. For most situations, you don't need the fastest code because the human interacting with it can't tell the difference. Gcc isn't going away. Visual Studio isn't going away. Chances are, Intel's compiler won't be used by anyone outside of very specialized number-crunching applications, because gcc and VC long ago reached critical mass on their respective platforms.
  • gcc 3 vs gcc 2 (Score:3, Interesting)

    by glwtta ( 532858 ) on Saturday January 26, 2002 @10:42AM (#2906219) Homepage
    I really haven't been keeping up with 3.0 (mostly because it doesn't work yet :) ) so I am hoping someone can inform my lazy ass.

    What are they targeting with this release? What new big (and important) features are in it? And, in view of the article, can we expect speed increases, or is it mostly about new features?

    In any case, I am not stopping using gcc just because some closed, expensive thing is much faster (even if it is ten times faster), and I expect a lot of people here feel the same way. Ok, I might consider if it was ten times faster :)

    Apart from the whole OS "cult" there's also another reason (and I am sure many will disagree with me here): there is such a thing as "fast enough", and for the vast majority of things I use my computer for, that has been more than achieved. Don't get me wrong, I love tweaking, optimizing, overclocking and generally pushing the hardware as far as it will go, by any means handily available (including keeping a voodoo doll of my PC in the freezer), but I've found that I do this more for the process than the end result. Buying and installing a new compiler (which you know nothing about, in terms of how it works) just doesn't seem to be all that much fun. (Besides, I am sure my Athlon would never speak to me again.)

  • by LinuxParanoid ( 64467 ) on Saturday January 26, 2002 @10:43AM (#2906222) Homepage Journal
    It's not exactly new news that Intel's compilers are better than Microsoft's or GCC, as any astute watcher or compiler of SPECbench results can tell you. GCC has never been a performance barn-burner. People who wanted that paid the money, signed the forms, and tweaked their software to run under Intel's compilers.

    No, what's great news is that Intel's compilers are available now on Linux. So an ISV like Red Hat can compile the OS (or specific math libraries) on them for either real-user or benchmarking benefits.

    "Driving a stake through the heart" of GCC is a gross exaggeration, given the ubiquity, freedom, and free beer nature of GCC. "Giving GCC a kick in the pants" might be more accurate. And a good thing, too.

    --LP
    • by rkit ( 538398 ) on Saturday January 26, 2002 @11:43AM (#2906393) Homepage
      "No, what's great news is that Intel's compilers are available now on Linux."

      I totally agree. Unix has always been popular in scientific computing and engineering, but I know of several people switching to WindowsNT because

      a) intel systems are extremely cheap (compared to architectures optimized for number crunching like RS6K)

      b) compilers available for NT produced MUCH faster code, e.g. Digital fortran. (Yes, I know ... but still a lot of excellent scientific computing software is written in fortran77, e.g. LAPACK)

      When it comes to numerical simulation, run times on the order of weeks are not unusual, so a performance penalty of 50 percent is simply unacceptable.

      So this may turn out to be a big win for linux in the scientific computing area.
  • My own experiences (Score:5, Insightful)

    by neonstz ( 79215 ) on Saturday January 26, 2002 @10:46AM (#2906232) Homepage

    A while ago I tested the Intel compiler on some graphics stuff I've been coding (using Visual C++). I got a 20-30% performance increase. The compiler was horribly slow though; MSVC was probably 4 times as fast compiling the entire project.

    I'm using GCC 3.0.x for Gameboy Advance development (ARM7TDMI CPU). It works fine for me, but the vendor compiler generates code that is 30-40% faster (and smaller) (or something like that; I don't have the exact numbers right now). But as many others have pointed out, GCC is free, other compilers are not.

    GCC is excellent for multi-platform development and cross-compiling. Using the same compiler for Windows, Linux, *BSD, Irix, Solaris and Gameboy Advance is a huge advantage.

    Speed (of the generated code) isn't always the issue. At work we always compile and run with full debug information and no optimization (except for tiny, speed-critical parts and very, very thoroughly tested libraries). The code is used in weapon systems (we ship the entire system, including the hardware). Core dumps are very nice if you want to find out why something crashed :)
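    The debug-friendly build policy described above, sketched as invocations (file names invented):

        /* g++ -g -O0 app.cpp -o app    full debug info, no reordering
           ulimit -c unlimited          let the shell write core files
           gdb app core                 post-mortem: the backtrace maps
                                        line-for-line onto the source.
           With -O2, inlining and instruction scheduling make the same
           core dump far harder to read. */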

  • Sure but... (Score:2, Interesting)

    by tomstdenis ( 446163 )
    Sure, Intel's compiler might be super good at optimizing, but are they forgetting that any 12-year-old trying to learn C can pick up GCC for **free**?

    Tom
  • Here we go again (Score:4, Insightful)

    by Anonymous Coward on Saturday January 26, 2002 @10:52AM (#2906242)
    The open source crowd still doesn't understand that people outside "the community" are willing to pay for better software, whether that payment is in money or in an acceptance of lowered freedom (as in speech). Of course Intel's compilers are better than GCC--they've got top-notch professionals working on them full time, and their corporate image is on the line because their name is on the product.

    GCC will still be in very wide use, since it comes with Linux and it does a quite decent job. But anyone who really cares about performance will seek out the better alternatives, like Intel's compilers. This is not news, and it's particularly not bad news--we all want freedom of choice, and the more genuine alternatives everyone has, the better, right?

  • by cluge ( 114877 ) on Saturday January 26, 2002 @10:55AM (#2906248) Homepage
    Compaq's (formerly DEC) C compiler for the Alpha has always been excellent and far ahead of GCC. The problem is that a lot of compilers (we haven't tested Intel's yet) won't compile all the code that you may want or need. In the *NIX environment, GCC seems to provide the highest level of compatibility over a wide variety of platforms (Sun, AIX, BSD, Linux, Tru64, Windows, et al.).

    Until there is only one chip left to support (Intel is fast working on it, with the support of turncoats Compaq, HP and others), GCC will be a viable option. GCC is a great "cross platform" compiler that works for much of the currently written open source code base. You can get that compiler to work for many different OSes and archs.

    In the end, remember apache wasn't the fastest web server, but it was the "most correct" and it was free! It really doesn't matter how well your C compiler works if it won't compile your code or run on your system.
  • Interesting results (Score:3, Interesting)

    by Salamander ( 33735 ) <jeff@@@pl...atyp...us> on Saturday January 26, 2002 @10:58AM (#2906260) Homepage Journal

    I'm surprised that nobody has commented on what might be the most interesting result on these tests - that the same code produced by the same compiler runs 10% faster on Windows XP than on Linux (2.4.10, according to SuSE's description of 7.3). Sure, the "kernels"[1] used by the benchmark might not be as representative of real life as we'd like, but this should still be cause for concern. Kernel developers have flamed each other endlessly over smaller differences on less comprehensive benchmarks between the Arcangeli and van Riel VM systems. Do we have to go through a Mindcraft-like period of denial before anyone starts taking such a result seriously?

    [1]The objections about the "kernels" used in the benchmarks not being the same as the "kernel" with which we're all familiar only demonstrate the ignorance of people who don't know that the scientific programming and benchmark communities have been using the term just as long as the OS community. Their usage may be different, but it's just as valid.

    • 2.4.10? wasn't that one a mess?

      Not trying to come up with excuses for Linux just yet, it's just the first thing to jump out at me.

      All in all, they have very little information about the actual benchmarks in that article. But I expect we'll see more people doing these "head to head" comparisons now that both OSes can use the same compiler. Kyle? Tom? up to you guys :)

      • 2.4.10? wasn't that one a mess?

        IIRC, 2.4.10 was the first 2.4 kernel with the AA VM. Yes, it was a mess, but I don't think advances since then explain a 10% performance difference for this type of benchmark.

        I also don't know for sure that OBL used 2.4.10. They said they used SuSE 7.3, and SuSE's page for 7.3 says it uses 2.4.10, but that doesn't mean the benchmark used that version. Overall, there do seem to be a few things about the benchmark that give legitimate cause for suspicion. For example, I couldn't find the actual benchmark programs, or even a description, and why the hell were they testing on an HP Omnibook (700MHz P3) instead of a more realistic desktop system, and why did they build the code for a WinXP system on a Win2K system? Very odd.

  • by Zo0ok ( 209803 ) on Saturday January 26, 2002 @11:03AM (#2906278) Homepage
    The code tested computed some kind of geometric mean... It is not surprising at all that performance can be improved significantly by optimising for parallel instructions (such as SSE/SSE2). There is no guarantee that any major improvements will be seen in an "ordinary application".

    However, I will find use for this information, and I will try Intel's compiler and compare it to GCC.

    Very minor changes in the code of this kind of high-performance application can result in very big speed-ups, with any compiler. It would be interesting to see some real-world problem (some PDE model or something) based on, for example, BLAS (Basic Linear Algebra Subprograms, www.netlib.org) being computed with gcc/icc to see the "real" difference.
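    The BLAS-style inner loop in question looks like this (the classic "saxpy" routine, shown purely as an illustration of SSE-friendly code):

        /* y = a*x + y over n single-precision elements; four of these
           multiply-adds fit in one SSE register, which is where the big
           compiler-dependent speedups come from */
        void saxpy(float a, const float *x, float *y, int n)
        {
            for (int i = 0; i < n; ++i)
                y[i] += a * x[i];
        }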
  • It is well known that the Pentium 4 has a seriously underperforming FPU [slashdot.org] when dealing with standard floating point operations instead of the P4-specific SSE2 operations.

    It is quite possible that a similar improvement could be achieved by GCC in floating-point intensive code simply by supporting SSE2.
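    A sketch of what such support looks like as compiler flags; note that -mfpmath=sse appeared around GCC 3.1, so availability is version-dependent:

        /* fp.c - route scalar FP through SSE2 instead of the P4's slow x87:
             gcc -O2 -march=pentium4 -msse2 -mfpmath=sse fp.c */
        double norm2(double x, double y)
        {
            return x * x + y * y;   /* becomes mulsd/addsd rather than x87
                                       stack operations with the flags above */
        }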

  • by rseuhs ( 322520 ) on Saturday January 26, 2002 @11:04AM (#2906280)
    I know it's highly controversial to say this, but IMO, GNU is no longer the driving force behind free software. While GNU created great things in the past (emacs, shell-utilities and above all gcc of course) lately development seems to have stagnated a bit.

    It seems to me that GNU and the FSF have become a bunch of bureaucrats and politicians who forgot what free software is all about.

    Today, the real dynamic and successful projects are mostly non-GNU: KDE, Apache, Linux, Wine, etc.

    Today, GNOME is the only GNU-project that can be called a bit dynamic, and I think this is because of a lot of 3rd party involvement via the GNOME-foundation and the fact that RMS is not the final authority in the GNOME-project.

    What breakthroughs have there been in RMS-led projects in the last - say - 5 years? I can't think of any.

    Of course, gcc is still the best open-source compiler we have, and no alternative is in sight (unless Intel open-sources theirs, which is highly unlikely), but I see it as a weak spot in the free software world. How long have we been waiting for a decent C++ compiler? Maybe I'm paranoid, but maybe RMS is not very enthusiastic about C++ support because GNOME would look even worse in comparison to KDE once a good C++ compiler is available?

    I think we need a lot more non-GNU involvement in gcc (a gcc foundation?) to get some fresh blood into this project. And if RMS doesn't allow that, we need a fork.

    But of course, that's just my opinion, so flame me.

    • by Ray Dassen ( 3291 ) on Saturday January 26, 2002 @11:44AM (#2906396) Homepage
      What breakthroughs have there been in RMS-led projects in the last - say - 5 years? I can't think of any.

      So? The GNU project does not have a mission statement that includes "produce major breakthrough every couple of years". The FSF's top level page has a couple of links that are essential when trying to evaluate its success: why we exist [fsf.org] (as relevant as ever), what we provide [fsf.org] and where we are going [fsf.org].

      But of course, that's just my opinion, so flame me.

      I rarely flame people for their opinions. I occasionally flame people who clearly haven't bothered to try to understand what they're talking about and who don't let facts get in the way of their opinions. You seem to fit that category nicely. In particular, your comment "I think we need a lot more non-GNU involvement in gcc (a gcc foundation?) to get some fresh blood into this project. And if RMS doesn't allow that, we need a fork." shows you to have little understanding of gcc's development process. Gcc's development process was broken open in 1999 (by the FSF effectively admitting the failure of its cathedral-style development model of gcc 2.8.x and embracing the bazaar-style development model of the EGCS fork) and has an effective foundation (in the form of the GCC steering committee [gnu.org]), as anyone who has read the GCC FAQ [gnu.org] or is familiar with gcc's history knows.

  • you diss GCC you get slashdotted - just the way it goes ;)

  • by garoush ( 111257 ) on Saturday January 26, 2002 @11:09AM (#2906295) Homepage
    How can GCC die when Intel can't come close to the impressive list of Supported Platforms [gnu.org] of GCC?
  • by lkaos ( 187507 ) <anthony@nospAm.codemonkey.ws> on Saturday January 26, 2002 @11:09AM (#2906297) Homepage Journal
    In essence, the Intel compiler attempts to maximally exploit the 128-bit Streaming Single-Instruction-Multiple-Data (SIMD) extensions on packed integers and floating point numbers, which enable fine-grained code parallelization, found in the Pentium III and Pentium 4 CPUs.

    So the compiler produces code that is only optimized for Pentium III and Pentium 4 CPUs. So it is not a production-quality compiler, because it can only produce code for specific processors of an architecture family.

    I really don't see what the big deal is. If I wrote a program in assembly to take advantage of these extensions on a Pentium III does that mean I can get a story /.'d since assembly is now going to overtake C?
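    For reference, "packed" here means operating on four floats at once. With the SSE intrinsics header (shipped by icc, and by GCC in the 3.x series, though exact availability varies), that looks like:

        #include <xmmintrin.h>

        /* add four pairs of floats in a single SSE instruction */
        void add4(float *dst, const float *a, const float *b)
        {
            __m128 va = _mm_loadu_ps(a);             /* load 4 unaligned floats */
            __m128 vb = _mm_loadu_ps(b);
            _mm_storeu_ps(dst, _mm_add_ps(va, vb));  /* one packed add */
        }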
    • This is a good thing.

      Think about it. The technology is there, so why not use it? It's the same thing I tell my programming teacher when she gets pissed about my using strrev(char *string); instead of writing my own string reverse function in C++.

      Hell, a lot of shitty compilers aren't even optimized for MMX yet, and that set of instructions DOES exist in every processor out there now.

      I'd assume that if you're using the Intel compiler on an AMD chip, it's not going to try to optimize for P3 and P4 instruction sets. So why not use an AMD compiler for compiling to 3dNow! and the like?

      Intel isn't obligated to produce a compiler that is faster for *every* architecture in existence. They're only obligated to make a straight compiler that works with it all... if it happens to like its own processors better than others, that's called the left hand working well with the right. It would be pretty hard for Intel to make their compiler work as well with 3DNow! as it does with SSE and SSE2.
  • The Intel compiler and its library have to handle ANSI C++ at least as well as GCC 3, or it's not much of an option for me. Is this not an issue for anyone else? Another factor is compilation speed: you wouldn't want to dump gcc for development builds if icc is significantly slower, as I hear it is.
  • I'm always looking for more data on the relative performance of systems. For almost all of us, the quality of compiler-generated code is an inseparable part of system performance. If you have a processor that looks really fast but no compiler can produce good code for it, you'll have better performance if you use a "slower" machine that compilers do support well.

    The best performance measure is running your code on a variety of systems. Because most people can't do that, it may make sense to look for standard benchmarks that look like your code, and then make analogies based on the similarities of those loads to what you want to do. It's critical to pick the right benchmarks to have a good analogy; if you're interested in 3d performance, it doesn't make sense to make performance comparisons based on the number of rc5 keys per second.

    Unfortunately, the Open Magazine article doesn't give any information on what exactly their tests are doing. So it's not possible for you to figure out which, if any, of their tests will be analogous to your code. :-(

    As I've mentioned before [slashdot.org], I'm mostly interested in integer performance. From what I've read about the Intel C compiler, its strength is floating point. If I did a lot of FP work, I'd be sending Intel a credit card number about now, and I imagine many FP people will.

    But for integer work, I think it's not so clear. Andreas Jaeger has a nice page benchmarking versions of GCC [www.suse.de]. On Athlon processors, SPEC CPU2000 [spec.org] CINT2000 [spec.org] base looks like it's around 10% faster when built with the Intel C compiler than with GCC 3.0.1. I think I can live with that.

    It's a lot easier to modify gcc than icc too, and yes, I really do hack on gcc from time to time.

  • Are we surprised that Intel's compiler is better than GCC? We'd better not be -- they own the CPU, after all.

    However, *I* am surprised that Intel's compiler *is* in fact better than Microsoft's C++ compiler (which it is, if you've used both). After all, MS has had a whole army of coders working on their compiler for 15+ years. MS should know all too well by now how to write an optimizing compiler -- and a bug-free one, with support for the latest standard, and on and on.

    Now *THIS* should be the news of the day, not Intel vs. GCC.
  • Everyone has been quoting parts of this sentence but it deserves to be quoted in full.

    "Nonetheless, the magnitude of the performance differential in numerically intense applications is such that only the most dramatic sort of improvement in the long-awaited version 3 of the GNU C/C++ compiler will stay the hammer that drives a stake through the fibrillating heart of the aging technology behind the GNU C compiler."

    If I read that correctly, it means they did all the tests with gcc 2. I think gcc 3 hasn't been optimized as much yet, so it would probably fare worse, but it would still be interesting to see how they compare.

  • Warning (Score:3, Interesting)

    by Lars T. ( 470328 ) <Lars.TraegerNO@SPAMgooglemail.com> on Saturday January 26, 2002 @11:23AM (#2906330) Journal
    Heise Newsticker reports (in German) [heise.de] that the Intel C++ compiler's -ipo switch (interprocedural optimization) can seriously mess up the compiled program. The example given (with an image) is Povray, under both Windows and Linux, whose renders can come out visibly tinted.

    What good is a fast-running kernel when it has more bugs than something from Microsoft?

  • by VersedM ( 323660 ) on Saturday January 26, 2002 @11:28AM (#2906343)
    It seems like it would be a nice move for AMD to support GCC optimizations for Athlon processors. The idea would be similar to IBM supporting Linux as a way of chipping away at market dominance by Microsoft.

    AMD should supply GPL'd contributions to GCC that optimize code for its Athlon processors. This would give them a relatively cheap way of putting out a competing compiler to Intel's proprietary version since it would leverage all the work that has already been done by the GCC group. It could also make them the preferred chip for open source OS's by ensuring that Athlons run GCC code faster than any other processor. This would be strategically very valuable at a time that they are about to push their new 64 bit instructions while Linux is simultaneously becoming viable/validated as an enterprise platform. Since GCC is not limited to Linux, these performance enhancements would also translate into gains for non-open source development projects as well.

    All in all, this seems like it could be a great way for AMD to give developers a way to produce AMD-optimized code while at the same time encouraging the use of their new 64-bit instructions in the booming open-source server/workstation market. (A sketch of what per-CPU tuning looks like in GCC today follows below.)
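
    For instance, per-CPU tuning in GCC is already just a compile flag away -- AMD contributions would improve what those flags actually generate. A sketch (build lines from memory for the gcc 3.x era; double-check the exact flag spellings against your GCC version):

    /* saxpy.c -- one portable source, tuned per CPU at build time:
     *   gcc -O2 -march=i686            saxpy.c   (generic PPro and up)
     *   gcc -O2 -march=athlon -m3dnow  saxpy.c   (Athlon with 3DNow!)
     * Only the generated code changes; the C source stays the same. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }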
  • ...why is Intel's C/C++ even an also-ran?

    OK, it's interesting as an experiment in Intel-specific compilers. As a baseline when testing out any new x86-compatible processors, it's probably a critical tool. But otherwise?

    Seriously: Why even bother?

    As for proclaiming GCC dead...please. Speed benchmarks and compilers are notorious PR pieces. I can't think of a better example of pure sensationalism. Disagree? Prove me wrong.

  • I read once that most code optimizations couldn't be implemented in GCC because they were patented by Intel or IBM or others -- does anyone have info about this?
  • by markj02 ( 544487 ) on Saturday January 26, 2002 @11:46AM (#2906403)
    GNU C has never generated the best code for any platform--it purposely traded ultimate performance for retargetability. Sun's compilers easily beat GNU C on SPARC, and even on the 68k there were proprietary compilers that generated better code. Anybody who has done any high performance computing with GNU C should know that, or they should perhaps start working in a different field. What GNU C does offer is decent performance, consistency across platforms, multiple integrated language front-ends, and some very useful extensions and features (a couple of concrete examples below). Those advantages usually far outweigh a moderate performance gain.

    I do a lot of high performance computing with GNU C. It doesn't matter to me how fast the Pentium works with some oddball proprietary compiler--the performance I get with GNU C is the performance an Intel-based machine has for my purposes. If that's less than optimal, that just makes Intel's platform less attractive. If Intel wants to do something about that, they should invest in improving the GNU C backend.
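
    For the curious, here's a small sketch of the sort of extensions I mean (GNU-specific, not ANSI C):

    #include <stdio.h>

    /* __builtin_expect lets you tell the optimizer which branch is hot */
    #define likely(x) __builtin_expect(!!(x), 1)

    /* typeof plus statement expressions give a max() macro that is
       type-generic and evaluates each argument exactly once */
    #define max(a, b) ({ typeof(a) _a = (a); typeof(b) _b = (b); \
                         _a > _b ? _a : _b; })

    int main(void)
    {
        int x = 3;
        if (likely(x > 0))
            printf("max(%d, 7) = %d\n", x, max(x, 7));
        return 0;
    }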

  • by alexhmit01 ( 104757 ) on Saturday January 26, 2002 @12:57PM (#2906642)
    Wow, everyone I saw posting apparently believes that their home computer is the be-all and end-all of computing. You're an idiot.

    Look, we only use a handful of Linux machines, so we aren't likely to use this. However, if I was rolling out 1000 workstations in my enterprise, and we were tweaking/tuning the OS before rolling it out, recompiling with this would work.

    Assuming Red Hat makes compiling under the Intel compiler a requirement for inclusion in their distribution, they're in a great situation.

    Why not compile everything with an optimized compiler? You still have the freely redistributable GCC for compiling open source code, but for stuff that is being downloaded in binary format, wouldn't you want it to run faster?

    Does it compile quicker? Who cares. When you're doing software development, you want something that compiles quickly; when you're rolling out a production environment, free runtime speed is what matters.

    Look, your precious GCC is terrific: it is a flexible, cross-platform compiler. It's always been weak on performance, though. The GCC team has always made it clear that the biggest problem ISN'T processor-specific tweaks, it's general compiler improvements that are patented.

    GCC is a baseline, things should compile with it. Things should also compile to the POSIX standard. That doesn't mean you don't add tweaks on the platforms that you support and set it up so that ./configure figures out which to use.

    Give me a break. I realize that many of you just use Linux to configure and tweak Linux to the point that you can post on Slashdot about how you can do anything with Linux. However, those of us that have included it as one of our tools to solve problems can use ANY tools that are made available to us.

    If I can get a 47% performance improvement by recompiling some of my applications, terrific. Replacing the server may be cheap in terms of hardware (a few grand for a new server every 6-12 months isn't bad; it's one of the few reasons to use x86 servers), but it takes time. Building out new hardware is easily 2-3 man-weeks before testing even starts (expensive -- take your salaries and double them to estimate the cost to the company); recompiling on an existing test machine costs you only the testing time.

    Alex
  • by GnrcMan ( 53534 ) on Saturday January 26, 2002 @01:05PM (#2906684) Homepage
    One of the biggest hurdles in getting GCC's optimization up to snuff with closed-source compilers is patents. Optimization is a patent minefield: Compaq has patents on specific optimization techniques used in their Alpha compiler, Intel has them for their compiler, and Microsoft has them as well. Kinda skews the playing field.
  • by acidblood ( 247709 ) <decio&decpp,net> on Saturday January 26, 2002 @01:42PM (#2906858) Homepage
    here [intel.com]. Only for non-commercial use. At least the cost factor will no longer be a problem for some users.
  • by jelle ( 14827 ) on Saturday January 26, 2002 @02:20PM (#2907010) Homepage
    So I downloaded the Linux_cpu.zip

    It contains a shared and a static library, and two binaries. Full of symbols, so I strip them:

    61124 bytes in libcxa.a
    49356 bytes in libcxa.so.1
    90380 bytes in oblcpu_gcc
    131736 bytes in oblcpu_icc

    $ ldd oblcpu_*
    oblcpu_gcc:
    libm.so.6 => /lib/libm.so.6 (0x4002b000)
    libc.so.6 => /lib/libc.so.6 (0x4004d000)
    /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
    oblcpu_icc:
    libm.so.6 => /lib/libm.so.6 (0x4002b000)
    libcxa.so.1 => not found
    libc.so.6 => /lib/libc.so.6 (0x4004d000)
    /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

    Ok, so the icc version needs the shared library to be loaded as well.

    $ size libcxa.so.1 oblcpu_*
     text   data      bss      dec     hex  filename
    22839   3008      124    25971    6573  libcxa.so.1
    70563  15860  1923912  2010335  1eacdf  oblcpu_gcc
    93858  24236  1923768  2041862  1f2806  oblcpu_icc

    Codesize for gcc: 70563 bytes
    Codesize for icc: 93858 + 22839 = 116697

    Hmm, that is a 65% increase in code size! Not to mention the increase in data size (can anybody say 'lookup table' or 'buffering'?)

    Hmm... I wonder if they tried gcc optimizations such as '-funroll-all-loops'. Too bad they didn't provide the source so we could verify the results.

    I got a free evaluation CD from Intel with the February issue of "Linux Magazine", so I'll be doing my own comparisons thank you.
  • by Chang ( 2714 ) on Saturday January 26, 2002 @02:39PM (#2907085)
    The gain is obviously highly dependent on the application.

    I tested the Intel compiler against GCC using Robert Hyatt's excellent Crafty chess engine, and the speedup was only 7% (Athlon 1.2GHz).

    On a PIII-500MHz the speedup was only 2.5%.

    Of course, for other applications results will vary, but for me the Intel compiler isn't worth the money or the effort.

    Hats off to the GCC team for building one of the greatest tools of all time. You can't beat GCC for sheer usefulness and ubiquity.
  • by sasami ( 158671 ) on Saturday January 26, 2002 @03:32PM (#2907321)
    The article doesn't bother to mention what compiler flags were used to optimize the benchmarks.

    The Intel compiler does not generate precise floating-point code by default!

    From the compiler documentation:
    Option: -mp
    Description: Favors conformance to the ANSI C and IEEE 754 standards for floating-point arithmetic.
    Default: OFF
    Looks like we can't even have IEEE compliance, we can only favor it. More gory details can be found in the manual [intel.com] (warning, big PDF...), but the "optimizations" that shocked me most were:

    - Division may be replaced with multiplication by the reciprocal
    - The long double type is identical to normal double
    - "Truncating" from float to integer is actually round-to-nearest!

    These are all defaults. Trading precision for speed can be a lifesaver sometimes, but not in numerical analysis!
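
    To see why these defaults matter, here's a tiny C sketch (mine, with made-up operands) of the first and third items:

    #include <stdio.h>

    int main(void)
    {
        /* x/y vs. x*(1/y): the reciprocal is rounded once and the
           multiply rounds again, so the substituted form can differ
           from a correctly rounded IEEE 754 divide in the last bits. */
        double x = 100.0, y = 49.0;
        printf("%.17g\n%.17g\n", x / y, x * (1.0 / y));

        /* ANSI C says (int)f truncates toward zero, so (int)2.75
           must be 2; a compiler that quietly rounds to nearest
           would hand back 3 here. */
        float f = 2.75f;
        printf("(int)%g = %d\n", f, (int)f);
        return 0;
    }

    If the two printed quotients differ, that difference is exactly the error the reciprocal substitution would silently introduce into code that wrote a plain x / y.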

    --
    I like canned peaches.
  • by RallyDriver ( 49641 ) on Sunday January 27, 2002 @03:00AM (#2908855) Homepage
    Everyone seems to have their heads so immersed in "computers == x86" they can't see the obvious:

    A point I'm surprised no one has made yet: GCC is a great compiler, and its optimiser kicks ass on sensible (read: orthogonal, etc.) CPU architectures (Sparc, PA-RISC) and even semi-sensible ones (Motorola 68k).

    HP compiles the HP-UX kernel for PA-RISC with gcc, and not their own compiler, because it produces the tightest code there is for their platform.

    The 80386 is definitely non-sensible; an ungodly mess nothing short of Byzantine -- 16 different registers, no two of which the instruction set treats alike; 80-bit data formats; 8- and 16-bit legacy modes. It has the unique distinction of being even uglier than the VAX. Intel would have scrapped the whole steaming turd many moons ago, instead of reinventing the 1970s and microcode, were it not for the Wintel monopoly fuelling the fire for faster 80x86 compatibles.

    This has chicken-and-egged its way into the open source world -- the ultimate reason I'm running Linux on a P3 and not a Sparc, PA-RISC, 88100, MIPS, RS6k or whatever is because of Microsoft; yes, really -- the Wintel (or DOStel) hegemony made x86 the best bang-for-buck architecture through economies of mass production, even though it ***sucks***, which is why Linus Torvalds had one as an impecunious student in 1991.

    Now, I'm trapped in the Wintel sheep model on a smaller scale -- I have a P3 for the same reason that most people have Windows; I'd have a Sparc- or Merced-based Linux box in a heartbeat, but all the Linux software I like to use comes ready-rolled for x86, and no, I don't enjoy typing "make" 15 times just to install instant messaging.

    It's hardly surprising that Intel's code optimiser does better on their architecture (including 3rd-party implementations thereof). It's very goofy to try to optimise for x86. I think you'll find the Intel/GCC gap to be a lot smaller on Merced (IA-64), which is a more sensible setup.
