GNU is Not Unix

GCC 2.95 Released

sparky writes "The GNU Compiler Collection (GCC) 2.95 is available here [Ed: Please use a mirror]. This is the first release of GCC since the EGCS steering committee took over, and a major step on the way to GCC 3.0." Interesting to see we have a new ANC (acronym name change).
This discussion has been archived. No new comments can be posted.

  • If I understand the situation correctly, pgcc is basically a set of patches against egcs/gcc. A lot of the code from pgcc should make its way into gcc eventually, although I believe that some parts can't be included, such as those written by Intel. That's because the copyright for changes to GCC must be assigned to the FSF, and Intel seems unwilling to do so.
  • by Anonymous Coward
    "Oh, your 2.2 kernel crashed. Did you compile it with egcs?"

    That is a common question on linux-kernel. Yes, the code was cleaned up in this respect. But nevertheless, no one guarantees you that an egcs-compiled kernel will work. I still use gcc 2.7.2.3 on production boxes for kernel compiles.

    The difference between egcs and gcc 2.95 is that Mark's aliasing framework is turned on by default. There have been some heavy debates about this on the kernel mailing list and apparently the communication between the egcs team and Linus was not very successful :-/
  • I stand corrected then; thank you for the information.
  • Would there be any issues with Stampede Linux (or RedHat, etc) building the binaries in their distro with Code Fusion?

    'Course, they couldn't distribute Code Fusion itself, but if it is fully compatible with gcc-2.95, that wouldn't seem to be much of a problem. Just that binaries the user compiles themselves won't be as fast.

    -Tom
  • That's pretty harsh.
    On the other hand, it IS true that there is a learning curve to STL, and once you get to the top of that curve, it becomes quite a bit easier to use STL than to make your own versions of everything, because it is really likely that the STL list class, for instance, is more flexible, more robust, and more efficient than the list that you would write in 30 minutes or an hour. While it probably won't be as intuitive for you at first, a little perseverance is all it takes. For a good STL starting point, check out SGI's documentation [sgi.com]. It's pretty good.
    To keep this at least somewhat relevant, I think that despite all of the language-theory arguments against C++, STL and the extremely wide support of C++ practically everywhere make it really useful to know. Is it the best language possible? No. Is it the best language right now? Depends on who you are and what you want to do.

    <ASIDE>
    Someone in this thread mentioned hash maps losing data- sounds very much like a comparator problem. Whoever had that problem should make SURE that the hash map has a working comparator. In my experience, 95% of hash map problems turn out to be a broken comparator.
    To see if that's the case, you can check to see what's in your hash map with a simple snip of code. Assuming you've typedef'ed your hash map's type as "hash_type":

    hash_type::iterator i = hash.begin();
    while (i != hash.end())
    {
        cout << "Key: " << i->first << endl;
        cout << "Value: " << i->second << endl;
        ++i;
    }

    </ASIDE>
  • I agree that the utility of both Swing and STL are hindered by their sometimes awkward syntax. Those two pieces of technology in particular really make me tend to think that the "language of the future" will use first-class functions. After all, Swing's anonymous interface-implementors are just make-believe first-class functions, and the same is true of the function-objects that you have to use in STL. It seems that both of those areas highlight the fact that if you want to have a useful collection of generalized data, you sometimes need to be able to plug in an arbitrary algorithm just as much as you need to be able to plug in an arbitrary datum. From there, first-class functions are a short step. They'd go a long way towards cleaning up the mess that STL (and Swing too, to a lesser extent) can sometimes be.

    On the other hand- STL is STILL easier than the alternative, writing your own hash table for every new data type you need to keep in a hash and every possible key you could store that data with. Digging through levels of templates is bad, but you probably don't have to if you want to debug an STL class: in all of my experience with STL, I have never found a serious bug that didn't ultimately turn out to be my own fault. For example, I have certainly had my fair share of data-losing hash maps, and I've learned that you should always look extremely closely at your comparator before you start scouring the STL code and cursing about the library programmers and how they somehow released a hash table that randomly loses data (though I freely admit to jumping directly to step 2 myself- it makes it all the more humiliating later when somebody points out that I'm using '=' instead of '==' or something =]).
  • -mcpu=k6 and -march=k6 are new in gcc 2.95
  • by Anonymous Coward on Saturday July 31, 1999 @02:46AM (#1773129)
    There have been some heavy debates about this on the kernel mailing list and apparently the communication between the egcs team and Linus was not very successful :-/

    Linus is a great guy and all that. But he has some blind spots. Among them are Quality and Engineering issues. He just does not have the background or experience to make good judgements in these areas. He lacks a humility which would tell him that he is not an expert in all areas of computer science. What we are starting to see is that his lack of expertise in some areas has hindered Linux's ability to scale well across architectural and performance boundaries. In other words, the kernel is becoming more and more complex and Linus is unfamiliar with the technologies which could address these issues. And worst of all, he appears to be unwilling to learn.

  • by Jonas Öberg ( 19456 ) <jonas@gnu.org> on Saturday July 31, 1999 @02:52AM (#1773130) Homepage
    I'd recommend browsing Slashdot with Lynx and using Emacs or some other
    such editor to edit text inputs. It's all very sweet because using an
    external editor makes it so much easier to run ispell.
  • by Anonymous Coward
    For most projects, switching to C++ would probably be less of a benefit than writing the *C* code correctly. :-/ Programming in an object-oriented language is sometimes useful; many benefits (including interfaces) of object-orientation, however, can be had in a non-OO language with little or no trouble. Only inheritance is hard to emulate.

    The other thing about C++ is the STL. Don't even make me talk about the STL; I just tried to use it in a program and I'm now trying to remove all references to it. It's baroque, buggy, and poorly defined, for starters. (Among other things, my hash tables keep 'forgetting' about entries. I insert an entry with key "foo" into the table. Shortly afterward, I try to extract the entry from the table. Surprise! It's not there! Argh... Oh, and the default hash function knows about char *s, but the default hash EQUALITY routine just compares addresses!!)

    Daniel
  • by ajs ( 35943 ) <ajsNO@SPAMajs.com> on Saturday July 31, 1999 @03:55AM (#1773136) Homepage Journal
    It's great to see the full span of history, here. The previous article was Ritchie releasing code for the first C compilers and this one is GCC 2.95, which is the culmination of at least 15 years of work on the part of Stallman, the FSF, Cygnus and probably tens of thousands of volunteers. For those who are looking to advocate Open Source within their companies, or just see for yourself what this "new" paradigm can produce, please read the GCC source! It's some of the best code I've ever seen (though I admit it has its... blemishes). The way it achieves platform neutrality while producing highly platform-optimized code is genius, and you can really feel its roots (the intermediate language is a compromise between Stallman's love for LISP and a desire to keep compilation fast and efficient). This is all without going into the fact that there are now front-ends for every major programming language with the exception of the purely interpreted ones (Perl, Python, TCL, etc).

    Currently, as far as I am aware, this is the finest compiler on the face of the Earth, and I dare someone to prove it wrong. There are compilers that produce faster code for a specific platform, and there are compilers for languages that GCC has never heard of, but GCC wins the all-around best technology (IMHO) for producing native instructions from your pet language. It's the one tool which I feel justifies all of Stallman's quirkiness, and gives me faith in the future of Open Source.
  • Shouldn't it be trivial (with ispell's "-a" mode) to write a web-based interface to integrate with things like Slash?
  • Are you saying that you think Microsoft has the same level of respect and admiration among their customers that they had in, say, 1993? They've been in a decline since about 1995--I maintain that they peaked at about the release of Win95, and have been on a downward trajectory ever since. Oh, it was slow at first, manifested in small ways, but it's accelerating every day. No, it hasn't been reflected in their stock prices or earnings yet, but that's only a matter of time. You can't keep customers forever when everyone thinks you are scum.
  • by Scurrilous Knave ( 66691 ) on Saturday July 31, 1999 @03:58PM (#1773142) Homepage
    Ada95 supports OO programming, but unlike, say, Smalltalk, it doesn't restrict you to that model. You are correct that an Ada kernel would be easier to maintain, and yes it would be more portable--at least in my experience, Ada is more portable not only between platforms, but between compilers, than C, and even moreso than C++. The "not as efficient" is a myth. One can write inefficient code in any language. Real efficiency comes from choosing your algorithms wisely--a warrior thinks of tactics, a good warrior of strategy, but a great warrior thinks of logistics. And besides all that, since an Ada compiler has more information than, say, a C compiler, about what the programmer actually wants, it can make better decisions and actually generate more efficient code, especially for today's pipelined multiple-functional-unit CPUs. Not that GNU Ada necessarily does better than GNU C, but in practice the efficiency question is a red herring.

    I applaud the fact that you didn't just blindly flame what you do not know. I would suggest that you learn a bit about it--who knows, you might like it!

  • If anyone has gotten it to build on Linuxppc could you e-mail me and let me know if you had any problems, and how to get around them.
  • C++ is actually a very nice language if you avoid the standard pit-traps (which are many).

    Windows NT is a very nice OS too, if you avoid the standard pit-traps (which are many).

  • Anyone know how backward compatible gcc 3.0 is gonna be with, say, libc5 and 2.0.x? I have systems that haven't been upgraded to glibc2/2.2.x/egcs etc yet... or is 2.7.2.3 still going to be the recommended version for non-cutting edge machines? I'm sure there's a lot more boxes out there than just mine that have gcc 2.7 for 2.0.x and similar compatibility, after all.

    David
  • Is there any indication from any Important People(tm) when this bug in the kernel will be patched?
  • How is pgcc "better" than gcc when it won't even compile PPC code?
  • If you're up this late on Friday night reading slashdot, you needn't use a mirror. I just downloaded the whole thing at about 300k/s straight from ftp.gnu.org. :)
  • Hit the egcs mailing list archives off of the
    egcs homepage. There is a long, long collection
    of threads about this problem, dealing especially
    with the Linux kernel. Linus Himself[tm] gets
    heavily involved.

    Parts of it turn into a flamefest, but there
    /is/ a good deal of information down in there as
    well.

    In any case, it's not a showstopping bug; you
    just have to use a flag.
  • + This is probably because of the lack of
    + precompiled headers. Much of the source
    + includes the headers to libs and system stuff.

    Most probably. Without optimization, by far the
    bulk of a compiler's time is spent parsing.

    Without meaning to sound condescending, I must
    at this point mention things like the redundant
    include guards described in the Lakos text. I
    found that the little extra typing really can
    speed up the compilation stage significantly.


    + 4 Meg executable swells to a mindboggling
    + 130MB of god-knows-what nonsense when the
    + -g flag is on.

    Speaking only for myself, not the EGCS people:
    why do you care? It's the debugging version,
    not the production version.

    I realize that's harsh, but for the most part
    my binary sizes don't get much attention from
    me until after I turn off -g, enable a string
    of space-saving optimizations, and then strip
    the binary.


    + the current situation sucks for those of us
    + with beefy apps.

    My last major project was beefy enough, and
    egcs pulled through with flying colors for me.
    If you have suggestions, I'm sure the EGCS
    maintainers would welcome your patch...

    Luck++;
    Phil
  • I used to be anti-STL until I was forced into
    using it. Now I believe it is totally optimal for
    what we are doing (manipulating many large data
    sets generated from a numerical engine).

    One reason I was against it initially is that it
    requires a very standards compliant C++ to work
    properly. Honestly I've been using the M$
    compiler which is pretty compliant, but is
    missing partial specialization which would make
    iterator_traits work correctly.

    Also I have found that use of STL's generic
    programming paradigm, if bought into, permeates
    your code beyond the use of collections. For
    instance I find myself doing this all the time:

    class foo {
        template <typename It>
        void bar(It begin, It end);
    };

    Unfortunately MS forces you to define bar inline
    in this situation, but it works...

    Generally STL == good

    Negatives:
    compiles very slowly
    Bloats object code.

    Anyway I'm interested in getting my hands on the
    latest GCC to start attempting to compile some
    of the STL code I have been working on.

  • by Anonymous Coward
    C++ is not OO, it's multi-paradigm.
    It means that it supports the combined use of classic low-level structured programming (C), object-oriented programming (Simula), and template programming / stronger typing (Ada).

    SmallTalk is 100% pure Object. Java is around 75%.

    Now, my opinion: C++ is overcomplicated, mostly because it is too low level and high level at the same time.
  • C++ is useless trash. People who know better avoid it like they do the plague.

    Not fair or true. There are plenty of people who "know better" and don't avoid the language. C++ is actually a very nice language if you avoid the standard pit-traps (which are many).

    For example, I recommend avoiding multiple inheritance (the only thing that I think Java got right), references, templates, run-time type checking, public data members, unplanned parenthood (e.g. what most people call reuse, and are shocked to find slows projects down), and most of all operator overloading. If you avoid these constructs you will soon find that any C effort could be sped up several-fold by the addition of polymorphic function signatures, single inheritance, namespace separation, data hiding and the (slightly) improved memory management.

    Now, all I need is something that gives me the speed and low-level handling of C++ plus the high-level abstractions of Perl. For now, I will have to resign myself to calling C++ from Perl using XS.

  • Compile kernel 2.0 only with gcc 2.7.2; compile kernel 2.2 with -fno-strict-aliasing enabled as a CFLAG in arch/i386/Makefile, and you'll need to edit the configure script for glibc2.1.1 to get that working (and, of course, you need to set CFLAGS=-fno-strict-aliasing with this as well)
  • Ever notice that when compiling nbench with a bootstrapped egcs and -march=i686 you get an outrageous result for neural net? Is there some way to get the Pentium II optimization without breaking nbench?
  • AFAIK a linux kernel written in Ada would never get up to the complexity level of Minix, because all the coders would get fed up.

  • I think it depends on the project. In some places Java speeds up the process of coding because so much is abstracted. And smalltalk can speed it up even more (although I'd use Java or C++ before Smalltalk), but don't waste your time with "Hello world!" and trivial stuff like that in Smalltalk...
  • -----quote-----
    C++ is useless trash. People who know better avoid it like they do the plague.
    -----end quote-----

    Um, no. If your statement is true, then a myriad of proggies and libs that we know and love (e.g. Linux, Gimp, Apache, Samba, Sendmail, Gtk+, Qt, etc. and then everything built on top of them) might not exist. At least they wouldn't exist in the form that we know them now.
  • > Argh... Oh, and the default hash function knows about char *s, but the default hash EQUALITY routine just compares addresses!!)

    From an OO perspective, that is 'equality'. Two
    objects that happen to contain the same
    information, are still two different objects,
    and thus not equal.

    Johan Veenstra
  • Modula 3 is um, somewhat simpler than Ada, and doesn't have the burden of the huge design process.

    Effectively, M3 was a "one group" implementation, a Modula family language produced by DEC.

    It appears to be about as portable as GCC, and the SPIN OS is written in it.

    I could imagine worse ideas than trying to build a kernel in Modula 3...

  • I write and maintain test software for a large
    product line. Our environment has been defined
    as ANSI C. With the number of developers on two
    continents working in the code that we have,
    standards are critical. Recently we have been
    running into difficulties. To solve them we have
    been implementing some OOP concepts in regular
    old ANSI C. Monday I will be submitting even more
    changes in that direction to my team for review.

    For some tasks OOP is the only way to go. We will
    eventually be migrating to C++. Having the path to
    OOP that C++ provides will make my job much easier.

    Saying that C++ is trash is silly, and demonstrates
    a very myopic view of programming. As with all
    languages, there are "features" that should be
    avoided. But data hiding and single inheritance
    are incredibly valuable. Especially in instrument
    control.
  • The reason that I care that the executable grows to 130MB is that I HAVE TO WAIT FOR IT TO LINK!

    I guess I am spoiled by 30 second turnaround times on other platforms to link my app, but 8-10 minutes to make a small change in one file and launch the debugger is way beyond unacceptable.

    --BQ
  • He's not saying better for EVERY processor, just specifically for Pentium and better (?) Intel CPUs and other similar CPUs (I'm guessing on the latter account)...
  • No, I meant what I said. The only way to do generic types for a container in C/C++ without templates (afaik) is by casting a pointer to what you want to store to a void * when you put it into the container, then casting back to a (whatever) * when you take it out. However, C++ won't (and can't) ever check, not even at run-time, to see whether the void * that pop() returned really is an int *, like your cast asserted, or in fact a SuperComplicatedObject *, like it was when you inserted it. That means that you can manufacture huge headaches for yourself, not the least of which is the fact that C++ will happily figure out for you what your SuperComplicatedObject would have been if it were an int, so you'll just see the problem as bizarre run-time behavior that you can't figure out.

    If you use STL, you'll try to compile your program, and it will say:

    gcc: error in myFabulousProgram.cc line 85: You can't assign an int * the value of a SuperComplicatedObject * without an explicit cast, you bonehead!

    Which is a whole lot better.

    Templates are good. Void pointers are evil. Think of them as Obi-Wan Kenobi and Darth Vader, respectively.
  • When compiling 2.3.12 (the latest dev series kernel), I went to add the -fno-strict-aliasing flag but found that the makefile already tested whether the compiler supports it and was adding it itself -- so no edit needed.
  • >Also it allows you to dynamically (ie at runtime) interject a class in your hierarchy

    How is this of value? Deep inheritance
    hierarchies are evil enough. I can't imagine
    wanting to deepen them at run time. Sounds like
    a maintenance nightmare to me.
  • You're quite right that it's easier to write inefficient code in C++. There can be methods invoked where the coder doesn't expect, yielding correct behavior and yet crippling performance.
    Ironically, for the same reason it's easier to write more efficient code as well, since complex data structures can be used with a minimum of effort. So it comes down to the coder's skill -- you'll tend to see greater extremes of *great* and *horrible* code in C++.

    The biggest gains are from using better *algorithms*. E.g. going from O(n^2) to O(n) is going to matter a lot more than a constant 5-10% gain.

    Most of the time it's only about 20% of the code in a project that needs to be fast. Profiling can help you identify that 20% and get it up to speed.
  • Best place to try is ftp.redhat.com/rawhide [redhat.com], or ftp.redhat.com/contrib [redhat.com], and if you can't find it there, try www.freshmeat.net [freshmeat.net]. If you don't know how to make rpm spec files, there are tools available that could help you make rpms:

    installwatch

    gnome rpm work station

    I could list more....just go to www.freshmeat.net and search for rpm.

  • You can't get RPM's for this yet, it's just been released! You'll be able to get them when someone makes them, and you'll surely be able to get them at the Rufus RPM Repository [w3.org]. In the mean time, you can try compiling it yourself from the released source. Even better, you can make your own RPM of it, go to RPM.org to learn how (Maximum RPM is a great book).

    ----
  • I'd agree with all that with the exception that the templates in the STL are incredibly useful and ought not be avoided.

    Templates themselves are wildly useful, it is just that you really have to know what you are doing to get anything positive out of them. I'd say, don't use templates unless you understand exactly how the STL works, and then only use them if you can't do the same thing easily without them.

    Operator overloading should only be used in certain cookie-cutter ways. Creating '<<' and '>>' operators for I/O in a new class. Creating an assignment operator for a new class. That sort of thing.

    IMHO, most of the "problems" of C++ come from people who overuse inheritance. Huge inheritance hierarchies are nearly always a mistake. Much better are a bunch of smaller, completely independent classes. (You know, "modularity", that thing they told you was good in school.)

    The other mistake is going in expecting something like Smalltalk and then getting pissed because it is not. Part of the whole point of C++ is that you don't have to make every single line of code object-oriented. I know that is heresy for many OO types, but in the real world it is damn useful to have a language that is OO when you want it, but only when you want it. That is why C++ is so much more popular than languages that enforce OO.

    Personally, I think anyone coding in C should switch to C++ even if they aren't going to go OO. C++ has some extensions over C that allow you to improve structured programs as well. Just remember that typing "g++" does not mean you have to start the project with "class Object".

  • I wonder how many other programs will show this same problem. I would be very surprised if the Linux kernel is the only thing which has aliasing problems.
  • I once did a real world project in a OO fashion using only C as a test and while it was certainly possible, it was certainly much more work. The program worked well, was (IMHO) fairly easy to maintain, blah, blah, blah, but I could have easily written a similarly good C++ program in about 1/3 the time, and it would have been even more easy to maintain.
  • Will it be necessary to rebuild glibc2.1.1 after upgrading to gcc2.95, or should the glibc I built with egcs1.1 continue to work?
  • by Anonymous Coward
    Being a C++ programmer myself, I partially agree with the C++ critique you mention. The article has many good points. Unfortunately you forgot to mention that the main point of criticism in this article is C++' close relationship to C.

    While being ugly from an OO point of view, the backwards compatibility with C has huge advantages for real world programming since many libraries are still written in C. And it helps the open-minded of the crowd of C programmers to upgrade slowly to more modern techniques.

    Strange alliances here in this thread. On the left side you have the pure OO-people who think C++ is too C-ish, on the right side you have the C hackers who think C++ is too much of a high-level language. In the middle there are we: the poor pragmatic C++ programmers who actually write most of the exciting software these days ;-)
  • by Axe ( 11122 )
    ..then buy yourself a compiler you like. Or write one. ;)
  • What the hell are you talking about? Hashes aren't even part of the STL. What lib are you using?

    We use STL all over in our software, on various UNIX platforms. Maps and vectors all the way (where in earlier times we had to fiddle with malloc/realloc and free, now everything goes easy and there are no more forgotten free()s thus no memory leaks), never had any problems.

    Most UNIX C++ vendors offer good STL implementations. The only one, alas we had to code around bugs is GNU's libstdc++. I know that they've worked a lot on it since the release of EGCS, so I hope that GCC 2.95 brings improvements in libstdc++ so that it becomes useful too.
  • I've started using python for a lot of my projects (home & work). For example, a recent project I did at work used python for everything except the hard core data structures. These data structures were managed by a bit of non-OO C++ code. I'd have used python, but when you want to keep 100 million objects in ram you notice the overhead :)
    There are times when writing full-OO C++ makes things a lot more difficult. The C++ portion of this project was only 2500 lines. Cool things about non-OO C++:
    + declare your vars immediately before use.
    + string, queue, and friends
  • Hey, just ran a benchmark on the binaries in different areas comparing pgcc to gcc 2.95 using the hwinfo2html program (http://rob.current.nu) and compiler flags "-O9 -mpentium -ffast-math". I found that pgcc has a slight edge in performance at this stage, but not by a noticeable amount. Have a look here [highway1.com.au]. Most interesting are the nbench results, as the others shouldn't be affected. Lucas
  • In short, CHILL is a language that you DON'T want to use. It's there for two reasons. One is as a simple example front end, and the other is that some poor sods actually use CHILL, and paid Cygnus to maintain the CHILL frontend.
    John
  • 4 Meg executable swells to a mindboggling 130MB of god-knows-what nonsense when the -g flag is on.
    Speaking only for myself, not the EGCS people: why do you care? It's the debugging version, not the production version. I realize that's harsh, but for the most part my binary sizes don't get much attention from me until after I turn off -g, enable a string of space-saving optimizations, and then strip the binary.

    Ok, what exactly does that do to the compile/test/debug cycle? I know that a small program that can be held in your head can be debugged there, but larger programs really need to be built and tested -- and that's where the ~8+ minutes per compile becomes penal.

    p.s. Has anybody thought about a gcc based build environment that can be distributed across PVM's? (One aspect of this is the obligatory /. 'beowulf == k0o1' comment, but the other relates to those Multi StrongArm PCI boards, which could make an awesome compiling machine, given that compilers don't generally use much FP)


    John
  • Perhaps Linus does hold back the development a bit, but that's also a good thing. Having everyone add in their own favorite performance tweak, or little fix, will eventually bloat the codebase. If Linus makes a serious mistake, all the other big names will corner him and let him know, but if his only mistake is being a little timid with the stability of the OS...

    Besides, he's not going to veto something if it proves to work consistently in test releases. At worst it'll just take a year or so for something to be proven to the point where it gets folded in.
  • You don't say what platform/language you've tried gcc on but the linker can have a big effect.

    On Solaris, my egcs C++ builds are vastly smaller and quicker if I use the GNU linker. It throws away unused virtual function calls etc. If GNU ld is available on your platform, I recommend you give it a go. Of course it is already present on Linux.

    Alex.
  • I'm doing 2nd year computer science and from what I've seen of Ada (which is a lot) it makes bloated binaries, runs relatively slowly, and tries to cut you off entirely from low-level operations.
    Granted it's a lot more robust, but the costs are too high.
    Maybe for certain apps it would be okay, but as far as a kernel is concerned you need something with real speed and portability, such as C/C++.
  • The other thing about C++ is the STL. Don't even make me talk about the STL, I just tried to use it in a program and I'm now trying to remove all references to it. [...] my hash tables [...]

    Hash tables are not part of the standard library as defined in the C++ standard. Thus it isn't too surprising that they might cause you problems. You can't blame the language for features that aren't part of it.

    If you took two char *'s and compared them, it would compare the addresses. Don't expect the default behaviour of C++ to do anything different. If you wish to compare based on the contents of the strings, you should be able to provide an alternative comparator in the template.

    (It's hard to include much code in Slashdot due to character munging. If you want specific help, demunge my e-mail address and e-mail me.)

    For example, if you had a std::set of char *s, it would just compare the pointer values. But if you do
    std::set<char*, MyComparator> mySet;
    then it will sort using the () operator of MyComparator, which you can tie to strcmp or something like that. You might also want to consider using a string type instead of char*'s, for internationalization reasons.

    One of my co-workers started using STL two days ago, and came to me for a little starting help. Two hours later he came back and was gushing about how cool it was to encode powerful, efficient algorithms in just a few lines of code. Perhaps you just need another me handy? Ask for help on comp.lang.c++.moderated.

    You can have my C++ compiler when you peel it from my cold, dead fingers.
  • Gtk-- is the C++ wrapper for gtk+.

    /mill
  • Indeed we should use "Hello world!" to compare languages and toolkits. It is the ultimate test and since we all are working on making the next great "Hello world!" program it is the one and only test case.

    print "Hello world!";

    Perl rules.

    /mill
  • AFAIK a linux kernel written in Ada would never get up to the complexity level of Minix, because all the coders would get fed up.
    "AFAIK"? That tells me that you know very little. Some of the most complex programs in the world are written in Ada. It scales far better than C or C++.

    Many in the hacker/free-software/open-source communities disparage Ada because:

    • They were forced to use Ada83 in an undergraduate programming class.
    • Their friends and role models disparage it.

    I can understand why a hack programmer wouldn't like Ada (which is what we now call the modern OO language formerly known as Ada95), but most software engineers and disciplined programmers absolutely love it. Loving it and actually being able to use it in a project are often two different things, though, for political reasons. But on a purely technical basis, Ada rules for complex programs.

    On the chance that anyone here might like to learn more, or maybe try GNU Ada on their Linux box: see the Home of the Brave Ada Programmers [adahome.com], the starting point for All Things Ada on the web.

    And I have to agree with the first poster--an Ada kernel would kick some serious butt. But I'm not convinced that it will never happen. That's what they said about the rise of Linux, and the decline and fall of Microsoft.

  • Anyone want to comment on the speed of g++ compiled code compared to other C++ compilers? I'm writing a performance-critical C++ application and wouldn't mind getting 10-20% speedup for free. But if g++ is within 1-2% of the fastest out there, it's not worth messing with for now.

    KAI C++ makes grand performance claims, and Comeau is another compiler built on the same EDG front end (but much cheaper). I'm mainly interested in the Linux platform for now.

    Notes:
    1. Language trolls can buzz off, I'm aware of the performance issues of using C++ vs. C.
    2. I know KCC has a time-limited demo, and I've downloaded it, but it looks like it's much stricter about the C++ it accepts; it might take a while to get my code to compile with it, which is why I'd like to get some feedback before deciding whether or not to mess with it.
  • From an OO perspective, that is 'equality'. Two objects that happen to contain the same information, are still two different objects, and thus not equal.

    This is not correct.

    Equality for char *s is comparison of addresses, because they are pointers. Compare two pointers and it's the addresses that determine equality. Strings stored in char *s are not truly objects. If you want to compare string contents, you should use a string object of some sort or a comparator function. If you want to use char *s with an STL algorithm, you'll need to provide a comparator function object that does the comparison technique you desire. For example, I have two sets, one of which has as its keys iterators into the other set. By providing a special comparing function object, the iterator set is sorted on a different property of the key than the other set.
  • C++ has its pros and cons like every other language. Likewise, it is better suited to some applications than others. C++ code can be syntactically cleaner, and more maintainable, or it can be MFC.

    Its biggest drawback is that it can obfuscate inefficiencies in code. What looks like a simple assign (or memcpy) can end up serializing and parsing at a high cost.

    It may not even be bad coding when the objects are implemented that way (perhaps the 'assigns' are never done in performance-critical areas and the marshalling is needed for interoperability). The problem starts when somebody else has to maintain the code. They end up losing the advantages of OO code because they either have inefficient code, or have to study the objects and their relationships just as deeply as they would in C.

    Personally, I find that by the time I dig through C++ code (for the fabled ease of code reuse) to check for those problems, it would have been just as easy to cut and paste C code. If the objects are available in library form only, it just means that you either end up with wrappers looking like Chinese boxes (so that doing anything involves too many pointer dereferences internally), or you end up with a dozen similar objects that all inherit from the same base and do the same thing in slightly different ways.

    C++ does have a few nice features. Most of those seem to be in the process of being back-ported into the C standard.

    Some things call for C++, others call for C, Perl, or Lisp. Some few things even call for assembly.

    While it is true you can write OO-like code in C, why would you want to?

    In cases where OO is the right approach, but I want to make the cost of various operations quite explicit. The Linux kernel is written that way.

  • The main webpage & what's new on www.gnu.org should be updated too.
  • Hey, on most Linux distributions I use, it's just % /usr/bin/hello

  • by Anonymous Coward
    > There have been some heavy debates about this on the kernel mailing list and apparently the
    > communication between the egcs team and Linus was not very successful :-/

    I don't understand that...

    What do you expect?
    Should the egcs team hack the compiler to work around "misbehaviours" in the linux sources?!?

    It's Linux that has the problem, *not* egcs. Nearly all other pieces of software (incl. other kernels) compile and work just fine with egcs.
  • Personally, I began to really like C++ after STL appeared. IMO it is much more interesting feature than OO stuff. Well, for the thing I do...
  • I'd agree with all that with the exception that the templates in the STL are incredibly useful and ought not be avoided.

    Bingo. I found that most people trashing C++ had not updated their knowledge since the late '80s or early '90s. ISO C++ is a different beast.

    For most data analysis code, C becomes ugly and incredibly error-prone.

    Just why should I use char** for a vector of strings? With all the related malloc() nightmares?
    Why write my own containers, or reuse somebody's library with an obfuscated interface? Hunt down stupid pointers. Bleh...
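    A minimal sketch of the alternative being argued for (make_words is a hypothetical helper name): a vector of strings replaces the char** plus the hand-rolled malloc()/realloc()/free() bookkeeping.

```cpp
#include <string>
#include <vector>

// A "vector of strings" the C++ way: each std::string manages its own
// storage, and the vector grows on demand, so there is no manual
// allocation to get wrong.
inline std::vector<std::string> make_words() {
    std::vector<std::string> words;
    words.push_back("alpha");
    words.push_back("beta");
    words.push_back("gamma");   // no realloc() by hand; push_back grows it
    return words;
}
```

    The whole thing is freed automatically when the vector goes out of scope, which is exactly the malloc() nightmare the poster is complaining about.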

  • by Axe ( 11122 )
    I bet you have no idea how to use STL.
    I know, it can be hard to start for many slow-minded people, but if you persevere and actually learn how it works (including some internal details about its built-in memory management) you will understand that it is the best set of containers to use among almost all languages (taking into account performance, flexibility, and ease of use).
  • It's easier to shoot yourself in the foot trying to do what C++ does in C. Protected class members make it much easier to keep your code as OO as possible.

  • + Ok, what exactly does that do to the
    + compile/test/debug cycle. [...] that's where
    + the ~8min's+ per compile becomes penal.

    Do you want the 8 minutes, or the 130 MB?

    I was defending the binary size, not the compile times. :-) Yeah, the lack of preprocessed headers can really hurt once they stop changing, but like I said, I (and others) have been surprised at the difference some extra coding makes. Things like redundant include guards really do honestly work on a large project, and testing-in-isolation helps you to run that
    debug cycle much quicker, with tiny little testcases.
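    A minimal sketch of the redundant include guard trick mentioned above (widget.h and WIDGET_H are hypothetical names, and both "files" are shown in one listing for illustration): in addition to the usual internal guard inside the header, each client guards the #include itself, so the preprocessor never even reopens the file once the macro is defined.

```cpp
// --- what would live in widget.h: the usual internal guard ---
#ifndef WIDGET_H
#define WIDGET_H
struct Widget { int id; };
#endif

// --- what each client file does: an external, "redundant" guard
// around the #include itself, so the preprocessor skips the whole
// file-open on every include after the first ---
#ifndef WIDGET_H
#include "widget.h"   /* skipped here, since WIDGET_H is already set */
#endif
```

    On a project with thousands of includes of the same headers, avoiding those redundant file opens is where the compile-time win comes from.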

    It also helps to set TMPDIR to a ramdisk.


    + Has anybody thought about a gcc based build
    + environment that can be distributed

    I'm not an expert (yet, dammit :-), but I have to wonder whether distributing a compiler would be worth the overhead. Distributing the entire build, though, where each machine works on a different source file, is very cool, and is just the next logical step in "separate compilation." Dunno about splitting the individual compilations of files... I wonder if/how Plan 9 does this.

  • I have found STL to be one of the most useful features of C++.

    I agree completely.

    I've always had a certain amount of ambivalence toward C++. On one hand, it addresses some of the shortcomings of C; but on the other hand, it has gotten so complex that it rivals Ada in this respect. Probably its best feature is that it lets one pick and choose the features and paradigm that one wants to use, unlike some languages (which I will not name here, lest I be flamed to a crisp by some language X zealot) which force one into a certain paradigm.

    But I really like generic programming in general, and STL in particular. It's unfortunate that so few people have experimented with this type of programming.

    TedC

  • Or when you treat your customers as scum.

  • Hmm... well, if I was already off-topic enough to mention the interjection thing, I suppose it can't hurt to be off-topic enough to explain it... =)

    I haven't ever used that feature myself, and it is certainly good and probably advisable to write programs that don't use it. I think it's one of those things that you want to avoid if you can, but that can really be a lifesaver every once in a while. A situation I can imagine- say somebody wrote a Car class, and subclassed it to Toyota, Honda, Ford, Chevy, et cetera, and then models for each company, from which you are deriving your instances. Now you realize that you have a need to override a method in the Car class for cars built after 1985- so you'd really like your class hierarchy to have Car on top, with all of the companies' models as subclasses of Car, and then NewCar as another class, also with all of the companies underneath. There are two problems, though- first, you're duplicating code for a potentially huge hierarchy, which is never good. Second, you may not have access to the source of the classes you're using, or permission to change them, in which case you're SOL with Java or C++. You could make a NewCar subclass of every model, but then you're duplicating your method a huge number of times, and it's a big pain too. The best solution is just to be able to take an object and figure out dynamically whether it's a Car or a NewCar- at some point after all the fields are set, just say "if you were built after 1985, interject NewCar in between Car and your company."

    I should again mention that I'm not a hard-core object-oriented Schemer, so I don't really know if that's the point of it or not, but it does seem that it would be useful occasionally.

    (Incidentally, I think JavaScript allows for the same sort of thing. Just kinda funny. I'd hate to think that JavaScript might be the choice of a new generation...)
  • The faster the code it produces, the better it is.

    I know this won't go over real big on a forum dominated by Linux users, but VC++ generates some really fast code. It's not just simple CPU-cycle-trimming stuff either; they (MS programmers) seem to be doing a pretty good job of recognizing patterns in code and replacing them with faster algorithms when it's "safe".

    TedC

    "There ain't no such thing as the fastest code"
    Michael Abrash

  • Actually, the STL isn't broken at all because of that inconsistency. What's broken is the C-ism of using char *'s as strings. The hash table hashes char *'s based on all the data from the address until the next '\0' not because the programmers were morons who didn't know what they were doing, but because in C people use pointers to characters as an expedient hack for a string data type, which C is sadly lacking. C++ with the STL fixes that with the string class, but if you insist on giving a char * a meaning that it doesn't have naturally, then you don't have the right to get upset when the programmers of your tools make concessions so that you can use it with a minimum of hassle.

    If it bothers you that you're really doing a pointer comparison when you say (charPtrA == charPtrB), the solution is to use STL strings instead, which hash just fine with no inconsistency of hashing vs. equality (that is, *stlStringPtrA == *stlStringPtrB if they have the same string value, and hashValue(*stlStringPtrA) == hashValue(*stlStringPtrB) as well). In fact, you should use STL strings for everything anyway if you're programming in C++, because they're far superior to char * pseudostrings, whose disadvantages are myriad (such as segfaults if you don't do things to them that would seem illogical to do to strings, linear time length-finding, non-growability, and so on), and whose only advantages are that they're easy to implement and have very small memory overhead. But you don't have to implement strings yourself, and I don't know about you, but my computer has enough memory that it can spare a few bytes if it makes my life easier.
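    A minimal sketch of the distinction being made (the two helper functions are hypothetical, for illustration only): two separate char arrays holding the same text compare unequal as pointers, while std::string compares the contents.

```cpp
#include <string>

// The comparison the poster complains about: operator== on char*
// compares addresses, nothing more.
inline bool ptr_equal(const char* a, const char* b) {
    return a == b;                       // compares the pointers
}

// What std::string gives you for free: operator== compares the
// character contents.
inline bool str_equal(const std::string& a, const std::string& b) {
    return a == b;                       // compares the text
}
```

    The char* version is not "broken"; it is simply doing pointer arithmetic on a type that was never a string in the first place.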
  • Try Magic Exec and parallel make for MOSIX from: http://www.mosix.cs.huji.ac.il/txt_contrib.html http://www.cs.huji.ac.il/mosix/
  • Am I the only one who thinks gatekeeper.dec.com is slow at mirroring prep.ai.mit.edu these days? /me is about to change fave GNU mirror :(
  • Pardon my idiocy, but I have a few questions since I am so thoroughly confused.
    Is EGCS going to replace GCC? Is it going to be/already is GCC? Is it going to live side by side with GCC?

    My confusion stems from the fact that I seem to remember something about a name change for EGCS being posted on Slashdot not too long ago, and being fairly confused then. Now I can't remember what the name of my favorite compiler will be, and what neat new tricks it will have nestled inside.
  • While I love C++ (when I'm not hacking together some assembler - what a paradigm shift), as with all powerful things, the language can be horribly abused: derivation in place of aggregation, and so forth.

    C++ adds to C the sort of things that serious programmers need to manage large projects. In many cases these things are overkill for small ones and just obfuscate things. Of course, it has a strong OO background (but that just reflects that sometimes an object-programming paradigm is what is desired).

    To truly appreciate what C++ can do for you, read "Design Patterns" by Gamma, Helm, et al.
  • What decline of Microsoft? Do you have a time machine I could borrow?
  • by Axe ( 11122 )
    ..buy GCC from Cygnus? I have heard that the commercial version that they sell (GNUPro and
    CodeFusion) already includes the new IA32 backend. According to the ad for CodeFusion on the Cygnus site, it has an 80% speed increase over "net egcs" for Pentium II on specific benchmarks (it's on par with Intel's Proton(?) compiler on that, and 30% faster than VC 6.0).
    Problem is, CodeFusion is not shipping yet... Should be a matter of days(?). Don't know if the latest GNUPro has the same optimizations.
  • by Axe ( 11122 )
    Here [cygnus.com]

    Citation:
    (1) Code Fusion produces code that is 85 percent faster than the current Net GNU release, 20 percent faster than Microsoft Visual C++ 6.0, and equivalent to Intel's Proton compiler. These results are based on the Integer Index performance in the BYTE benchmark.

  • Oh lord, another language war.

    C++ and other OO languages have been proven to be more efficient for programmers on projects larger than 100k lines.

    It takes less time for a programmer to start working with a large codebase written in C++ than in C, plain and simple.

    For example, I was able to fix a bug in Mozilla roughly 20 minutes after I first took a look at the source. When you break your namespace for functions and variables down into pieces the way C++ does, it is easier to digest.

    While it is true you can write OO-like code in C, why would you want to? C++ is just an extension of C; all the functionality of C is included in C++ at a minimal cost.

    --
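  • A minimal sketch of what "breaking the namespace into pieces" means in practice (the namespace and function names here are hypothetical): in C, every function shares one flat global namespace, while C++ lets two functions with the same name coexist in separate scopes.

```cpp
// Two flush() functions with the same name, no prefix-mangling
// (cache_flush, net_flush, ...) required: each lives in its own
// namespace and is addressed with the scope operator.
namespace cache {
    inline int flush() { return 0; }   // hypothetical cache-side flush
}
namespace net {
    inline int flush() { return 1; }   // hypothetical network-side flush
}
```

    A reader can tell at a glance which subsystem cache::flush() belongs to, which is part of why a large C++ codebase can be quicker to digest.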
  • So, has anyone had any luck getting this to compile with
    last1120c or prestruct-c?
  • What is the performance of binaries compiled with gcc 2.95 compared to that of binaries compiled by pgcc, and will the source changes that pgcc has made to egcs eventually be returned to the main source tree?

    Cheers
    Lucas
  • Nope, the new_ia32_branch is not folded into 2.95. If you cvs co it, its version number is 2.96 right now, but I don't know when the merge will occur.
  • When the alias issue came up, Linus told the compiler people to use a set of rules to turn aliasing off, instead of immediately correcting the bug in Linux (it is a bug, because it violates ANSI C aliasing rules, and he was shown at least two ways to fix it.)

    That's not good engineering, to write buggy code and expect other people to fix the tools to work around your buggy code.
  • To compile Linux 2.2x with gcc 2.95, you need to include the flag -fno-strict-aliasing

    This flag is needed if you have code that does things like try to read an array of longs as an array of shorts, without using unions.
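    A minimal sketch of that kind of type-punning and one well-defined alternative (the helper names are hypothetical, and this is C++-flavored shorthand for what the kernel's C code does with plain casts):

```cpp
#include <cstring>

// Reading a long's storage through a short*: under the ISO aliasing
// rules the optimizer may assume a short lvalue never aliases a long,
// so starting with gcc 2.95 this pattern needs -fno-strict-aliasing
// to be compiled safely.
inline short first_half_punned(long* p) {
    return *reinterpret_cast<short*>(p);   // the aliasing violation
}

// A legal alternative: copy the bytes out instead of punning the type.
inline short first_half_copied(long v) {
    short s;
    std::memcpy(&s, &v, sizeof(s));        // plain byte copy, no aliasing
    return s;
}
```

    Unions are the other commonly suggested fix; the kernel chose the compiler flag instead of rewriting every such access.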

  • by Anonymous Coward on Saturday July 31, 1999 @12:23AM (#1773287)
    Since nobody here has really said it, I want to say thank you to all the developers who spend their time making gcc better.

    I use gcc/g++ almost every day and I am very pleased with it.
  • by Jonas Öberg ( 19456 ) <jonas@gnu.org> on Saturday July 31, 1999 @12:31AM (#1773288) Homepage
    Now then, before anyone comes bursting in saying that we haven't updated the web pages: look again. I just updated the web pages on www.gnu.org and I hope that the steering committee will update their web pages on gcc.gnu.org soon too. The idea is that eventually the web pages from gcc.gnu.org will magically appear at www.gnu.org, possibly with some small stylistic changes, but we haven't gotten around to this yet.

    As a blatant plug I'd also like to say that the GNU Webmasters need more help. Do write me if you want to help out.

  • Will pgcc be merged with gcc, or what is the status of that?
  • by Kento ( 36001 ) <kent.overstreet@gmail.com> on Saturday July 31, 1999 @12:41AM (#1773292)
    I was just reading the linux kernel mailing list (which I *just* resubscribed to) and apparently, the kernel won't compile correctly with gcc 2.95. Here's an excerpt:

    The linux kernel violates certain aliasing rules specified in the ANSI/ISO standard. Starting with GCC 2.95, the gcc optimizer by default relies on these rules to produce more efficient code and thus will produce malfunctioning kernels. To work around this problem, the flag -fno-strict-aliasing must be added to the CFLAGS variable in the main kernel Makefile.

    Disclaimer: I haven't downloaded the new compiler and so I haven't tried it yet, but keep this in mind when you upgrade gcc.
  • This is all great of course (new releases always are), but would it have been too hard to post a little note with upgrade consequences with this announcement?

    Will it break my system? What are the requirements? Will the kernels compile with it? How about other large projects such as X, Apache, Gnome, KDE?

    The compiler is one of the core programs if you don't use binaries, so the overall stability of your Linux box greatly depends on a good combination of compiler, libraries, includes, etc.

    Is there a HOW-TO available on this? (not just saying: "old goes with old and new with new" or providing upgrade instructions for a specific compiler but more a general document describing how some parts of the system work together and how to make optimal use of that)

  • by JoeBuck ( 7947 ) on Saturday July 31, 1999 @12:17PM (#1773301) Homepage

    As a member of the gcc/egcs steering committee, I would be very happy if, in fact, GCC were "the finest compiler on the face of the earth". It isn't. It is the most widely ported compiler, and it's decent enough, but many other compilers beat it for code quality.

    gcc 2.95 still isn't that great on Intel-compatible processors. The good news is that we finally have a new IA32 back end, produced by Cygnus under contract, that has been donated; we'll finally have enough accuracy to do instruction selection and scheduling properly. It will be in gcc 2.96 (or whatever we call it). Until then, pgcc will be better (I believe that the new back end beats pgcc at least most of the time).

  • The need for pgcc should be eliminated by the new IA32 back end (which isn't in 2.95, but will be in the next major release).

    I don't believe that there is any copyright assignment issue for the pgcc patches. The real issue was that Intel didn't do things "right". Their way of modifying the compiler was to ignore the distinction between the front end (portable) and the back end (processor specific), so pgcc is an Intel-only compiler as a result. This means that the Intel patches would need to be redesigned. Some of them already have been and are already in.

  • We tried to ask you, but since you posted as Anonymous Coward we couldn't find you. Since we lacked your enormous wisdom, we, in our lesser wisdom, called it GCC.

    We considered the name "gcs". We decided to keep "gcc" because that's the name most people know; had we chosen any name but "gcc" some clueless folks would have kept running gcc-2.8.1 forever. Besides, we'd break every makefile in the world if we changed the name users type.

    Don't get hung up on capitals versus lower case. Remember, when you invoke the command "gcc", it will happily build C, C++, Objective-C, Fortran, Java, or Chill for you. So gcc is the GNU Compiler Collection.

  • by sterwill ( 972 ) on Saturday July 31, 1999 @01:16PM (#1773314) Homepage
    I understand how easy it is to criticize Linus from behind the security blanket you call anonymity, and I can't disagree with all your points, but I'm not sure you understand Linus's position (perhaps you understand it better than I). The kernel tree is very large, and I wouldn't want the job of coordinating all the development that goes on within. Just reading the linux-kernel mailing list is a job in itself.

    I admire the job Linus is doing and I can't imagine a single person doing it any better. I've always found Linus a humble guy in the end. Show him something's really better your way, and in a way he can understand (considering the other work he does), and often he'll adopt it. He's made an occasional policy or kernel decision that might have upset me, but coordinating public development can be accurately described as herding cats. It's an endless series of tradeoffs because everyone wants to go his own way; sometimes patches are rejected, sometimes the ideas they implement become the center of rabid e-mail debate, and sometimes they're quietly applied.

    The sheer number of developers, and the different directions and goals of each, pull Linus in different directions. I've used Linux on a variety of architectures, and I wish Linus would focus more on keeping 'stable' releases of the kernel clean across all architectures--these are the quality issues of which you speak. 2.2.10, for example, won't compile on an Alpha.

    I can't say I see Linus holding back the performance of the kernel for reasons of stubbornness. Most of the performance improvement suggestions he receives present a risk in other areas (stability, security, portability). He's making trade-offs, and I personally admire most of the decisions he's making.

    Scalability and performance issues are most often encountered when trying to make the Intel architecture do something it was never intended to do: be scalable. I think it's unfortunate that so many people continue to squeeze this twenty-year-old idol of cruft into such places when cleaner and better engineered alternatives exist.

    I think he's doing a very good job.
  • by Anonymous Coward
    That's nothing new. The same statement was also true for egcs 1.1.x. Compiling the Linux kernel using egcs was always dangerous since it relies on some non-standard aspects (bugs) of gcc 2.7.2.
  • by Kento ( 36001 ) <kent.overstreet@gmail.com> on Saturday July 31, 1999 @01:38AM (#1773318)
    The 2.0 kernels used illegal inline assembly constructs which just happened to work with GCC 2.7.2. You could always compile the 2.2 kernels with whatever compiler you wanted. From what I can tell, this is different, and it affects all kernels, including the 2.2 ones.
  • Hmmm.... You'd think slash would be integrated with a spel checker by now.
    -russ
