GCC's Response To Red Hat

The GCC Steering Committee has issued a statement on the use of snapshots in distributions. This statement is clearly in response to Red Hat's use of gcc-2.96 in its Red Hat 7 release. They didn't like it very much, and there are compatibility problems. Worth a read. Credit for this news goes to Linux Weekly News.
  • I agree, 2.7.2 had so many problems, mostly when using the STL. I have some (legal) code on which even egcs 1.1.2 chokes in some places. However, I haven't been able to find a fault (so far) in 2.95.2, except for the fact that the standard library isn't complete yet (like the missing stringstream class). Aside from that, I would qualify it as one of the most ANSI C++ compliant compilers around (at least, it's better than HP's ANSI C++ compiler!).

  • Sorry, my e-mail is mathgod79@hotmail.com. Please e-mail me if you can help.
  • by Anonymous Coward
    Having earned a bad reputation with my local LUG for starting distribution flamewars on their listserv, I've tried to keep a slightly lower profile.. But I laughed so hard when I heard that Redhat was shipping *TWO* gcc versions with their distributions, one for kernels (2.91) and one for compiling everything else.

    I also got LARTed, and was told quite succinctly to "Be Quiet" by the almighty moderator of the list.

    "Redhat is God, Redhat is right, Do not laugh at the Redhat".

    I *intensely* dislike Redhat, Redhat users, and more importantly, those who don't recognise the need to criticise the things they love.

    I use Debian. Debian has shortcomings, and I am quick to criticise if I find something wrong. This is something that a lot of Linux users really have to learn to do. I've seen so many people brought in by the propaganda: "Linux doesn't crash", "Linux is more stable", "Linux is more up to date".

    Linux Crashes.
    Linux is horribly unstable.
    Linux is REALLY horribly unstable if you decide to stay on the bleeding edge.

    Redhat has made some pretty big mistakes. They've taken entirely the wrong tack on releasing a distribution. (But Redhat is so good!)

    No they're not.

    I've seen so many editorials and comments from both sides of the fence on RPM, on Redhat's corporatisation and varied other facets of the organisation.

    Redhat will end up giving linux a bad name.

    They are adopting a commercial distribution structure.

    "We need a new version number!"

    (Why not Redhat95!)

    I believe debian (and varied *bsd's - I'm not calling debian the be all and end all) has a much cleaner and neater release structure.

    We have "Tried and True" "Getting there" and "Horribly Unstable". If you *want* to have all the latest crap, you can run *horribly unstable*. And we're more than happy to advertise that it WILL quite probably screw up your life, get your dog pregnant, and make your fridge say "ZOOL!".

    Redhat, on the other hand, will take all the most horribly unstable packages, wrap 'em up, make sure they install properly, and put 'em on a CD.

    This is exactly what happened with gcc 2.96 (We can't just put gcc 2.91 in 7.0! We'll be behind!)

    Steve.
    sjthorne@ozemail.com.au
  • Of course I can defend it - technically, it was the right decision and I still think we should have done it. We needed the features (like actually working, and support for multiple platforms), and we are probably the only company with the in-house expertise to stabilize it. The C++ compatibility issues are far less important - C++ has never been compatible anyway.

    What we did wrong was not communicating our intentions better to the steering committee.

    Releasing a "blessed" gcc (if not looking at our compat-compilers which are egcs 1.1.2) is of course not going to happen - they wouldn't be compatible, and the reasons we had for choosing to do what we did are still valid. We did the right thing, and in so doing improved the current state of gcc a lot. Be happy.

  • egcs still doesn't support the export keyword when applied to templates.



    What compiler does?

  • by tytso ( 63275 ) on Friday October 06, 2000 @06:39PM (#724645) Homepage
    As far as compatibility goes, glibc is the great challenge and C++ not that important: when binary compatibility is a high-priority goal, C++ has never been an option when developing for Linux.

    You're right that glibc compatibility is more important, and I am hoping and praying that Ulrich Drepper doesn't see fit to make any changes that break compatibility between glibc 2.1.94 and glibc 2.2 (since RedHat released a pre-release beta snapshot of glibc as well in 7.0). I've asked on various mailing lists, and I worry that Ulrich has conspicuously not pledged not to make any compatibility changes. Shiver

    Still, there are quite a few applications that are written in C++, and like it or not, ABI compatibility is going to get more and more important as more and more people outside the traditional Linux user base reach out and start using Linux. This is a good thing, but it means we have to be more careful in what we release, going forward.

    And I'm sorry if people think I'm picking on Red Hat. I still use Red Hat on most of my machines, and I've been using Red Hat since its 2.0 release. I'm a stockholder of Red Hat, and I have many friends who work there. But to the extent that Red Hat is a market leader in the U.S., it means that its responsibilities are greater, and it needs to be held to a higher standard --- just as Microsoft has to be held to a higher standard because it is also in a dominant market position.

  • Has this burnt anybody yet?

    Yeah, Objective C support is somewhat broken. GNUstep won't compile (Internal Compiler Error). Still not sure why yet.

  • You just can't link to shared libraries that you don't control.
  • Not "Every" distro..
    Debian, Slackware and RedHat however have.
    (The Debian CD I tried to install required so much information from me that it seemed I'd need 5 years' experience with Debian before I could ever get this stupid CD to install.)

    It depends on how cocky the distro people get.. Mandrake screwed up on day one.. it took Slackware and RedHat years to make the same kinds of mistakes.
    Debian seems to go overboard trying to PREVENT similar mistakes and ends up bombarding the user with questions he can't answer. (the inverse extreme)

    But at one time RedHat and Slackware had a clean history...

    It happens to them all.. the key is to GET ON THEM ABOUT IT.
    Slackware wouldn't upgrade to glibc and they lost users over it.. They were not forgiven. They fixed the problem.

    If you forgive them, they won't do anything...
    So beat RedHat up over this.. Back off ONLY when they fix it...
    If someone else pulls a stunt.. beat them up over it as well...
  • Anyway, with what was shipped in the public beta two months ago I can't understand why the release surprised anybody. Going back from KDE2 because it was just too buggy is one thing (it's included as a preview, though), but changing one of the major subsystems like glibc or the compiler after that was obviously not going to happen.

    I asked a friend of mine (in kernel development) who worked at Red Hat, because I was concerned about the use of the prerelease glibc, and I was told, oh, don't worry; it's just a beta, if there are too many problems, or if the final release hasn't been released in time, it'll get backed out. So I held my peace. In retrospect, I probably should have pushed this issue harder with other Red Hat contacts at the time.

    So yes, it wasn't a complete surprise, but I was still disappointed by how many prerelease snapshots of major packages (XFree86, glibc, gcc, etc.) were in the 7.0 release.

  • If you change the "cc" symlink to point to "kgcc" instead of "gcc", the kernel builds fine on RH 7.

    Having spent hours trying to find bugs in my code that turned out to be gcc 2.95.2 bugs, I'm glad *something* is being done about it. The 2.96 snapshot thus far is of much better quality than 2.95.2, if a lot pickier about ANSI C++ violations (which is fine too).
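    For anyone hitting the same wall, the workaround above can be spelled out as commands. This is a sketch; the paths assume a stock Red Hat 7.0 layout, where the egcs 1.1.2 compiler ships as kgcc.

    ```shell
    # Option 1: repoint the "cc" symlink so anything invoking cc gets kgcc.
    ls -l /usr/bin/cc                 # normally a symlink to gcc
    ln -sf /usr/bin/kgcc /usr/bin/cc

    # Option 2 (less invasive): override the compiler for the kernel build only,
    # leaving gcc 2.96 as the default for everything else.
    make bzImage CC=kgcc
    ```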
  • It seems to me that since anyone can call GPL'd code their own, so long as they include a copy of the GPL with it, why is the GCC committee bitching about it? If Red Hat wants to shoot themselves in the foot and do this, then who gives a crap? Isn't the GPL about free speech? Like I said, it's totally within Red Hat's rights to do that. Fine. Then I will stick with 6.2 or switch to Debian (contemplating it, but haven't made the jump...yet), or Corel, or heck, even FreeBSD.

    Personally, since I have been running a 2.4 test kernel for a while, I see no reason for holding it back (ie everything I use seems to work ok), except for those little things that end up mattering. Oh sure, I don't need something like Very Large File System support or something weird like that now, but if it doesn't work and I need it later, I'm screwed, and I would blame the developers. That's why I really don't care about 2.4 being late. I don't want another Windows, and if this prevents it, so be it. I just want an OS that works.

    Personally, I don't think I have yet seen a case where a binary for one distro installed 100 percent correctly on another distro. There are SOO many things that need to be fixed other than the kernel, it's not funny. Also, compilers are critical, since we do have to compile software sometime! I don't see WHY Red Hat did this, except that they couldn't program better I18N support into their own stuff. It's not going to hurt Linux, just Red Hat. And Red Hat is just a distro and NOT the ONLY distro of Linux.

    :)

  • by JoeBuck ( 7947 ) on Friday October 06, 2000 @04:10PM (#724658) Homepage

    Richard Henderson ignores the issue of binary compatibility with other distributions, and, I believe, overstates the problems with 2.95.2. The Alpha back end isn't great, but ia32, which most folks use, was decent, and it was the best C++ front end we ever had. And the kernel developers did a lot of work so that at least the Linux development kernels build OK with 2.95.2 -- but "2.96" can't build Linux (gcc problems building the kernel are often kernel, not gcc, bugs, though sometimes gcc is at fault).

    Also, Richard is wrong when he says that their "2.96" is compatible with the forthcoming 3.0 at the source level. It isn't; it still uses libstdc++-v2 (the v3 library is not complete): streams aren't templates, and the standard library is not in the std namespace. It is compatible with 2.95.2 at the source level, not 3.0.
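    A minimal sketch of the source-level gap described above, assuming any ISO-conforming g++ is available:

    ```shell
    # Pre-standard code (libstdc++-v2 era) was written as:
    #   #include <iostream.h>    // global cout, no std namespace
    #   cout << "hello" << endl;
    # The ISO standard library (libstdc++-v3) puts everything in namespace std:
    cat > hello.cc <<'EOF'
    #include <iostream>
    int main() { std::cout << "hello" << std::endl; return 0; }
    EOF
    g++ hello.cc -o hello
    ./hello                       # prints: hello
    ```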

    Even so, I could have accepted his arguments much more readily had they been made before the release and not after, and if they had polled customers and software developers about the issue rather than just deciding internally.

    Now, I'm grateful for all the hard work the Red Hat/Cygnus folks have put in. But when different (GNU/)Linux distributions can't run each others' binaries, you have exactly the same situation the Linux company chiefs say they won't allow to happen [infoworld.com]: effective forking of Linux.

  • For the uninitiated: NO COMPILER SUPPORTS THE EXPORT KEYWORD AT THIS MOMENT.

    Sorry.
  • I understand why they did this; GCC 2.95.2 probably doesn't have full 2.4 compatibility (I think; I'm no expert), but they couldn't wait to release a different version. After all, this is a commercial company that has to keep making releases, even if everyone I know still uses RedHat 6.1 (at least those who use RedHat).

    Still, they must look pretty silly after this and the 2.4 kernel delay [slashdot.org] that will probably make 2.4 come out after 7.1 is released. So much for all that preparation to make RH7 2.4 compliant :-)

  • by JoeBuck ( 7947 ) on Friday October 06, 2000 @04:14PM (#724663) Homepage

    Where did you get the idea that 2.95.2 is "the buggiest release of GCC since early days"? Have you ever used it? Did you know that it is the production compiler on Debian 2.2 and they are reasonably happy with it? That it had some of the most thorough testing of any GCC release ever?

    I've been an active user of g++ since 1990. For C++, 2.95.2 is the highest quality release ever put out. The problems with 2.95.2 are platform-specific, the Alpha port wasn't great. Don't spread false information.

  • by edhall ( 10025 ) <slashdot@weirdnoise.com> on Friday October 06, 2000 @06:58PM (#724666) Homepage

    I agree that the C++ ABI issue is a horrible mess, but you really can't blame AT&T for that. If the C++ ABI had been set back in the early days, it would have had to be revised several times over the years as templates, namespaces, and so forth were added to the language. For this to have been any better than the present situation, a much more extensible object file format would have had to be created than the one we use today.

    The real issue is that C and C++ still use a flat namespace for symbols in object files, just like in the 1950s when assembler languages were all that existed. The minor additions made to ELF for static constructors and the like were the absolute minimum to get C++ to work--leaving no option but kludge-of-the-day name mangling to flatten C++'s multitude of hierarchies. This is so inelegant that no one wanted to define it into a standard -- surely a better way will arise? But it hasn't.

    CFront's name mangling could have been taken as the de facto standard and extended as C++ was extended. That didn't happen, and given that AT&T didn't "own" C++ the way that Sun owns Java, it probably wouldn't have happened even if they pushed it. But why perpetuate what is an ugly kludge, anyway?

    It's probably too late to fix the real problem. Rejiggering the entire toolchain for a new object file format just isn't going to happen. (Some people are still smarting after the change to ELF.) So we'll have to live for the foreseeable future with fragile and incompatible attempts to fit N-dimensional structures into flat lists...
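    The flattening described above can be seen directly with nm and c++filt. A small sketch, assuming a modern g++ and binutils; the mangled form shown is the Itanium C++ ABI scheme used by current compilers, while g++ 2.x used a different, incompatible scheme for the same function:

    ```shell
    # The hierarchical name util::add(int, int) must be squashed into one
    # flat linker symbol, because the object file has no notion of namespaces.
    cat > demo.cc <<'EOF'
    namespace util {
        int add(int a, int b) { return a + b; }
    }
    EOF
    g++ -c demo.cc -o demo.o
    nm demo.o | grep add             # flat symbol: _ZN4util3addEii
    echo _ZN4util3addEii | c++filt   # recovers util::add(int, int)
    ```

    c++filt can undo the flattening for display, but the linker itself only ever compares the flat strings, which is exactly why two mangling schemes cannot interoperate.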

    -Ed
  • by leereyno ( 32197 ) on Saturday October 07, 2000 @01:02AM (#724669) Homepage Journal
    Has anyone ever stopped to wonder whether Redhat did this to CREATE incompatibilities? Redhat is the market-share leader. If someone distributes a binary, it is most likely going to be for Redhat. What's an easy way to "compete" with other distributions? Make sure that binaries for your Linux won't run on other distributions. This forces companies that want to distribute binary-only packages, or packages that are not easy to compile, to ship multiple binaries, thereby confusing the scene. If other distributions follow Redhat and use this compiler, well, then you've got a situation where Redhat is the leader and everyone else is trying to be compatible with it. You've got that already to some extent, but this would make it even worse, as distribution developers would have to work to stay RH compatible.

    I don't know if this is what redhat is doing, but it certainly is interesting.

    Personally I think they've pulled a DOS 4.0.

    Lee Reynolds
  • by hugg ( 22953 ) on Saturday October 07, 2000 @01:32AM (#724672)
    Yes, RedHat is evil (in this circumstance). Releasing snapshots of absolutely critical components of an OS is evil. Whatever their technical/marketing reasons for the inclusion, it was irresponsible. Free software should enjoy the same respect for version control as commercial products do. To do otherwise weakens the notion of free software products as being stable, manageable entities, and will lead people to choose products that no one ever got fired for choosing (VC++ 6.0, for example).

    It's a matter of whether you believe "the end justifies the means". Sure, Linux is all about practicality, but I would also like a little discipline in my commercial distro. (That's why I run FreeBSD.)

  • by mindstrm ( 20013 ) on Friday October 06, 2000 @07:11PM (#724675)
    What gets me is this: why the mad upgrade cycle? I mean, I understand why redhat has to release a new 'version' of RH now and then... to keep sales up, but...
    That is SOO microsoft. We don't NEED an upgrade every year.

    My problem, though, isn't with redhat releasing them.. it's with people who decide to 'upgrade' their systems. Feh!

    Why do you upgrade something if it's not broke?

    Rule #1 of systems management: if it's not broke, don't fix it!
  • Did you file bug reports? Are you sure that your code that 2.7 accepted was good code? (gcc 2.7 and earlier accepted all kinds of crap that is not C++).

  • I saw this message posted to Red Hat's bugzilla earlier today. The first thing that went through my mind was that I would NOT cause public panic by sending this to the SlashDot admins until I had first made a tactful inquiry to Red Hat regarding the situation. The second thing was that someone else would anyway.

    I'm a programmer. I'm not on the gcc lists, and I don't stay abreast of the current version or issues, so the message from the steering committee scared me. I felt let down by Red Hat. However, having read your post (particularly where it regards the FreeBSD developers), I feel somewhat relieved.

    If gcc 3.0 is released during the life cycle of Red Hat 7.x, will Red Hat be able to include compatibility libraries for C++ and otherwise upgrade to gcc 3.0 final?

    I'm still hoping that Red Hat responds to developers in a reassuring manner soon. : )
  • I maintain Hercules [conmicro.cx], an IBM mainframe emulator for Linux. I distribute both source code and prebuilt RPMs. Many of my users are Linux newbies, although not computer illiterate. It's hard enough making sure they use a recent enough GCC (at least at an EGCS-built level), and now Red Hat has to go and do this?!


    What, exactly, does the gcc team mean by "binaries won't be compatible"? Will an RPM built on a RH 6.2 system run on an RH 7 one, and vice versa? How about other distributions?


    I'm as happy as the next guy to see progress being made on gcc - I've spent more time than anyone should on chasing down optimizer bugs - but if it breaks things in a major way, this is a Bad Idea.
    --

  • I understand RedHat's dilemma; they don't want to ship older versions of GCC. After all, isn't every new version supposed to have an update?

    What would have been the cost for RedHat to wait, say, for the 2.4 kernel, KDE 2.0 and a stable GCC? After all, they have a stable distro out, and people seem to like it. I think the problem is that the money counters and stock watchers are getting nervous ("RedHat sales aren't what they used to be *panic* SELL SELL SELL!"). Thus they produce a crappy distro, it says "NEW 7.0!" on it, and people will buy it (fools).

    The problem with being a publicly held company in today's "new economy" is that stock market investors seem to be incredibly short-sighted. Investors only care about making a buck from a move in stock prices. When CEOs and CIOs only worry about stock performance and don't look out for the long-term growth of an organization, then the company is in deep doo doo.

    RedHat should have waited; a more stable solution would have been a better image enhancer and better for the company long term (they wouldn't have to issue so many bug fixes). Instead they chose to carry on the great tradition that every RedHat .0 distro is sub-par.

  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Friday October 06, 2000 @04:23PM (#724682)
    From what I heard, RedHat thought that the gcc 2.96 snapshot they got would be library compatible with gcc 3.0. They didn't want to ship 2.95.2 because it is a dead-end branch that isn't compatible with either 3.0 or egcs. They didn't want to ship egcs because stuff like kde2 won't compile with it. They stated that they *really* wanted to avoid breaking the C++ ABI twice (once going to 2.95.2 and once going to 3.0).

    So, they were caught between a rock and a hard place... They didn't have any "good" compiler to ship, so they did the best they could and shipped an older egcs release as "kgcc". They could have postponed their release (like Linus just did for 2.4), but who knows how long before 3.0 is released? Also, they have a lot of customers without requirements for a specific version of the compiler... Why make those guys wait for the GCC guys to get 3.0 finished? As it stands, 2.96 compiles most C and C++ code correctly (or as well as any other version of GCC), and kgcc compiles the kernel. It Seems To Work(tm) for most people, if they have correct C++ code and use kgcc for the kernel.

    That said, there have been a *lot* of problems reported with RH7, though many come from not RTFM ("I can't compile my own kernels!"). They probably should have delayed shipment for more QA, but not necessarily waited for a new compiler.
  • by kevin lyda ( 4803 ) on Sunday October 08, 2000 @06:07AM (#724684) Homepage
    yes.

    i use redhat because it lets me get my work done. as a developer i don't have oodles of time to tweak configs and build every package from scratch. i have done it in the past - for both linux and sunos. it was fun. i used to find rattles fun too.

    at the moment i'm more interested in getting my personal and work projects developed and running. that involves c, perl, php3, perl modules and i'm hoping someday to find a use for the Inline::C modules... ah...

    anyway, quite the funny/snarky comment, but in reality many people find redhat a great base for doing real development. therefore they need a good c compiler.
  • Like Joe, I have used GCC since the early days, and 2.95.2 is by far the best and most stable release.

    What Red Hat should have done was to release their own version under a different label, as they (the Cygnus part of the company) have earlier released their own GCC versions under their "GNUPro" label.

  • is actually a common practice. Sun for a long time shipped a separate cc that they supported only for building the kernel (this was before dynamically loaded kernel modules). I suspect most of the closed-source OS companies also use older versions of the compiler.

    What you have to realise is that kernel code is very low level, and often relies on features and missing optimizations that would never affect application code.

    While one should of course fix the kernel (or provide hooks in the compiler) to make them work together, holding back the release of one for the sake of the other is rarely a good idea.
  • 2.95.2 is armed and dangerous with optimizations on. I've seen it completely remove reverse-counting for loops with substantial bodies (ie, for (i=7; i>0; i--) { guts_of_application(); } disappeared entirely from the generated code). And this is on x86, supposedly the most stable target.
  • by Per Abrahamsen ( 1397 ) on Saturday October 07, 2000 @02:09AM (#724695) Homepage
    "I may disagree with what you have to say, but I shall defend, to the death, your right to say it." - often attributed to Voltaire

    The FSF exists to protect the right of Red Hat to do exactly what they have done. But that doesn't mean they (or the Steering Committee, which manages GCC on behalf of the FSF) have to agree with it. There is an important difference between saying "I forbid you to do that" and saying "You are allowed to do that, but I believe you should not do it anyway, and here is why..."

  • You can't have a stable ABI when the language itself is still evolving. GCC 3.0 will be the first GCC version implementing the final C++ standard (including the standard library), and it will be the first GCC version to promise a stable ABI.

  • Show me where in the all-holy GPL Red Hat is prohibited or discouraged from releasing their distro with a development grade piece of code.
    Of course they can. This whole discussion is whether they should.
    If those people happen to be the people at Red Hat, and they decide to put an unfinished product in their distribution, then it's their business... literally. If people don't like it, then they sure as Hell won't use it.

    The trouble is that while this may not make RedHat's life any more difficult, it certainly makes the job of anybody trying to ship "Linux binaries" (well, for C++ only, but the point still remains) considerably more difficult, and could conceivably encourage "Red Hat only" products to be shipped, which is the kind of stunt we get annoyed with closed-source companies for pulling. I'm not saying that this was their goal (in fact, I'm sure it wasn't), only that if they were trying to pull such a trick, shipping a compiler generating non-standard binaries is one way to go about it.

    Secondly, and more importantly, people are still going to complain to the gcc mailing lists about bugs in the gcc shipped with RedHat, when it's not a release that the gcc developers were prepared to stand behind, and the gcc developers will probably go nuts generating the same replies to the same problems that weren't in the stable release, aren't in the current development tree, but were in the particular snapshot that RedHat decided to use. While they might want to say "rack off and complain to RedHat" they almost certainly won't because they care about users and the good name of their product (even if it's not really theirs).

    In essence, RedHat has to realize (and they probably do) that their actions affect the whole community, and their continued good name depends on them acting responsibly. Look, there may well have been compelling reasons for shipping the non-standard compiler, I'm not really qualified to comment. However, it's not an action they should have taken lightly, and it seems like they could have handled relations with the gcc developers better.

  • As one of those "rare people" who have the technical knowledge to understand a broad range of computer/information-related ideas/tasks/problems/solutions, and who also has some good managerial/political skills, I'd like to say that I agree with you in principle.

    However, it's not as easy as you make it seem. I've learned quite a few things, and one of them is that most people participate in active stereotyping to some degree or another. In my experience, the more of a "fringe" group you are in (translated: the more specialized your profession or way of life), the more you stereotype other people. I've suffered from this myself, and at least for me it stemmed from the fact that most people quickly and effortlessly put me into a stereotype, often with confidence-shattering results.

    What I'm trying to say is that a lot of managerial-types I've dealt with need things explained to them not only in terms they understand, but also from someone who they think is a member of their own stereotype.

    On the other hand, most techies I've dealt with also need things explained to them in terms they understand(and, well, let's be honest - everyone needs that for the most part), but they also need to hear it from someone they feel is a member of their particular stereotype.

    This is a very difficult thing to accomplish. In one particular job, I did fairly well. I'm a pretty good actor, and had two different personalities/vocabularies/mannerism-sets to use with the two different groups of people. I thought it would be a great idea if everyone got together to hammer out some issues, and everything fell apart.

    The managerial types saw that I related well with the techies, and they immediately got their hackles up. (After talking to a few of them, they said they had felt betrayed - I had put them on. In reality, that's exactly what I did, because it was what I needed to do to get the job done.) The techies saw the way I talked with the managerial staff and felt that I was really sort of a "spy" - someone who was actually management, trying to horn in on them. Fact is, both groups had taken my suggestions well, and we had implemented what we could. Compromises were quickly and easily reached, but as soon as everyone saw that I wasn't part of their stereotype, they didn't trust me. I really don't think there's much that can be done about it. You NEED to be able to talk to people in a way they'll understand. And if you honestly sympathise with them, you shouldn't hide it. In this particular case, I was pretty much screwed. Luckily the project was finished and I was able to move on, but I'll never put myself in that situation again.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Friday October 06, 2000 @04:29PM (#724703)
    It works fine, with a few caveats (e.g., std::stringstream). You can always use STLPort or libstdc++-v3 anyway.

    Also, that C code is probably broken code that gcc 2.7.x just happened to accept.
  • by tytso ( 63275 ) on Friday October 06, 2000 @04:36PM (#724706) Homepage

    The problem with the strategy which RedHat has embarked upon is that RedHat has always committed to keeping full binary compatibility during a particular major release series. So there was full binary compatibility between 5.0, 5.1, and 5.2, and between 6.0, 6.1, 6.2.

    By prematurely going to a pre-release "GCC 2.96" which will not be compatible with the eventual GCC 3.0, it will force Red Hat to continue to maintain the same random development snapshot through the entire 7.x series --- which, if past history is any guide, might be a year or two, even if GCC 3.0 by some miracle ships in a few months.

    Worse yet, it puts the other distributions in one heck of an interesting dilemma. Do they follow RedHat and use the same non-GCC supported compiler? Or do they use GCC 2.95, and then be incompatible with binaries compiled for RedHat 7.0 that happen to use C++ and shared libraries?

    As we have seen in the past, the Red Hat marketroids have tried very hard to persuade ISVs to make binaries available for Red Hat, and they've tried very hard to get the rest of the industry to believe that Red Hat === Linux. I can't necessarily fault them for that; they have a fiduciary responsibility to their shareholders, and such Microsoft-like tactics are the best way to build the RedHat brand. After all, at the recent open-source conference, Michael Tiemann boasted, "The Linux distribution game is over. Red Hat has won that game. Red Hat is the market leader in virtually every respect." (I suppose this ignores SuSE in Europe, or TurboLinux in Asia, but whatever.)

    So ultimately, having them choose something with a non-standard ABI that's not going to be supported by the Open Source project, even if it is only for C++, is quite troubling.

  • C++ name mangling will change between gcc 2.96 and gcc 3.0. So, any program which links against a C++ library will get unresolved symbols if it can't find a version of said library that was compiled with the old compiler.

    RH always includes compatibility libraries for these situations, though they aren't always installed by default (if you are lazy: rpm -Uvh /redhat/i386/RedHat/RPMS/*compat* usually works). Thus an rpm built on RH 6.2 should work on RH 7, but not the other way around (since RH 6.2 won't have the 7.0 libraries).

    If you always build on 6.2, or you don't use C++, or you don't link against C++ .so's, you will almost certainly be fine.

    If your code doesn't conform to the C++ standard, people may not be able to compile it on RH 7, since that compiler is more strict than older versions. Theoretically no valid C++ programs that compiled under egcs or gcc 2.95 should break, but there may be more bugs in this snapshot.
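    A sketch of why a mangling change shows up as "unresolved symbols" at link time (the function name here is made up for illustration, not one of Red Hat's actual symbols): the application's object file references the library function only by its mangled name, so a library built by a compiler with a different mangling scheme never defines the symbol the caller wants.

    ```shell
    # The caller's object records an *undefined* reference to the mangled name;
    # the dynamic linker must find exactly that string in some library.
    cat > caller.cc <<'EOF'
    int helper(int);                  // imagined function in some C++ .so
    int main() { return helper(1); }
    EOF
    g++ -c caller.cc -o caller.o
    nm caller.o | grep helper         # U _Z6helperi  (U = undefined)
    ```

    If the library is rebuilt with a compiler that mangles helper(int) differently, the string _Z6helperi is simply absent, and the link fails even though the source-level declaration never changed.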
  • Kai C++'s sales brochures say the export keyword is supported. Don't know for a fact, though, since I've never used Kai C++. Seriously considering buying a copy, though, if it supports templates any better than egcs does.

    Let's not even get into the STL implementation in egcs. When I can't use at() on an STL vector, I get deeply annoyed. Admittedly, it's a trivial thing to fix, but there are things like that all over in egcs--things which ought to be fixed, things which are trivial to fix, but which, for reasons unknown to me, aren't fixed.
  • come to the PetrOS Forum [petros-project.com] and find out :)

    I have rigid deadlines that have to be met, so it does look like Q4 2000-Q1 2001 will be the release date - hope I don't burst my boiler doing so :)

    The first target (a fully functional kernel capable of providing base Win32 kernel services) has been reached without blowing out by more than 3 months. The next target is a GUI which could effectively run a thin client or bare-bones Windows apps - this is about 30% done and I hope to achieve it by the end of the year. The graphics framework is running, and all I need to do is build the widgets and glue code to interface to applications and we're pretty much there.

  • Oh bull. You don't have to pay $9.95/mo for updates. You go download them for free just like you always have.

    You're probably confusing it (likely an understatement) with the $19.95 service of getting update CDs shipped to your door monthly.
    --

  • by joss ( 1346 )
    The most popular language (in terms of number of programmers) is VB

    The most popular language (in terms of active lines of code still in use) is COBOL

    The most popular language (in terms of useful lines of code still in use) is C

    The most popular language (in terms of useful lines of code being written) is probably C++

    The most popular language (in terms of brand new projects) is Java
  • Yep! All the time.

    rpm -ivh something.src.rpm

    hack hack hack the specfile

    rpm -ba something.spec

    Whee! Just like tarballs, but with sprinkles!
  • by Kostya ( 1146 ) on Friday October 06, 2000 @08:55PM (#724728) Homepage Journal

    Stop and think. Why would a distribution ship a compiler that is off the development branch? Why in God's name would they ever do anything so incredibly horrible as that?

    Simple: features. Many of you may be C programmers, so you may be scratching your head right now, saying "Huh? What features?" But if you work with C++ (and you will notice it mentioned specifically in the post from the GCC team), you know what features RedHat was trying to get by going with the supposed "2.96".

    Now, before someone goes off all half-cocked and starts bitching about C++ and C being better, go read up on Inti [redhat.com], specifically on the why's. RedHat will make its money either directly or indirectly from its distribution--the more people who buy Linux, the more ways for them to make money. More people will use Linux if there is software. More software will "appear" if companies use Linux as a development platform.

    From what I have read about Inti and its reasons for being started, companies are passing on Linux because they cannot get good enough C++ support and tools in Linux. That's right--C++ support is hurting Linux.

    Yes, yes--C++ sucks, yadda, yadda, yadda. Stop and think. The reason so many of you turn your noses up at companies and work is because they pretty much require you to use C++ these days. I'm not saying it is fair. I'm not saying it is necessarily a "good thing". I'm just raising the issue--most development shops, if given a choice between C and C++, use C++. Perhaps this is all Microsoft's fault. I have no idea.

    But I happen to use C++. And let me tell you, writing C++ with gcc/g++ is a royal pain in the ass. The ANSI standard has been out, and many vendors are still getting their stuff together. But gcc is one of the lagging ones (IMO--I may be wrong, please list any worse offenders). You buy the C++ reference by Stroustrup and try to follow the examples (try using sstream with anything besides RH 7.0, and you will see what I mean)--endless amounts of frustration.

    RedHat is just sooooo evil. They included a compiler that had better C++ support. Before you demonize them, stop and try to see it from their perspective. I, for one, appreciate a better STL implementation--the one in 6.2 was just awful. 6.2 had a C++ library with a build date of February 2000. Now that may *sound* new, but it isn't. It isn't even close to the current standard in some areas.

    All that being said, I'm not sure that the "negatives" that come with 2.96 were worth it. I would have rather seen RedHat include a good STL port (I hear there are a few good projects out there). Still, the C++ support found in current Linux distributions sucks. Just join any Linux C++ project mailing list and see how many times gcc bugs come up :-(

    And if you think this is flamebait, you best check your moderator rules.

  • I also work for a large blue computer company. When I joined one of the Linux teams, I was told I wasn't allowed to use Slackware on customer machines. This was due to 'no vendor/support strategy'. Same with *BSD. However, I'll continue to use Slackware everywhere else, since I haven't seen a bad Slackware release yet.

    For those of you who think I'm bashing Redhat, take a seat. I'm also an RHCE, and I've used RedHat for years as well.
  • Bleeding Edge Technology held as number one suspect. GCC Steering Committee denies all responsibility! More to follow!!

    Wohhoo... things are getting interesting again on slashdot after a bit of a slow summer!!!
  • As I understand it, the complaint is that Red Hat released a compiler that wasn't intended to be released, but didn't change the bug-reporting email address. So, rather than Red Hat shooting itself in the foot, it's presenting a bad appearance of the GCC Steering Committee: "Thanks for upgrading to Red Hat 7.0. Here's a broken compiler. Send complaints to WhyIsGccBroken@gnu.org". It's just bad form. The steering committee knows that there are many bugs, and many of them can't be fixed until the new C++ RTL is available. So you end up with a lot of upset users pointing fingers in the wrong direction out of frustration. When people download/buy a released piece of software, they expect it to run reasonably well. They knowingly shipped a broken piece of software. While that's not terribly uncommon these days, it's bad form to give customers a gripe-email address that doesn't belong to you.
  • You all keep posting your derisive comments about Red Hat's latest release, 7.0. Our Friend, Bero. Let me tell you a little story about this "Bleeding Edge, Broken Release," as you like to call it...

    Bero was walking down the street in some hick town in Oklahoma. It was a brisk Autumn day and Bero was shivering in his jean jacket as he contemplated his lot.

    "Why don't people like me? Why am I so unpopular with those cool guys on Slashdot?"

    As Bero walked, lost in his thoughts, a tall, thin man approached. The man had a face that was badly scarred by acne and a long, black moustache that glistened with natural oils. The man stumbled into Bero.

    "Oh, I'm terribly sorry," the man said as he lifted Bero from the sidewalk, "I didn't see you coming!"

    "It's ok," Bero said, on the verge of tears, "I didn't see you coming either."

    The man brushed the dust and horse manure off of Bero and looked at his sullen face, "my god! You, sir, are a star waiting to be born! You must come with me to Hollywood, where I will launch you on a stellar acting career and land you guest appearances on Oprah!"

    Bero was shocked, "yes."

    Bero went to Hollywood with Mr. Garcia, who had just quit the used car business to become a big-city talent scout. Using illicit substances, he landed Bero a role in the upcoming "Star Wars: Episode II" movie.

    Bero auditioned many times before landing the role, pulling up his deepest feelings of rejection, which had been heaped upon him on Slashdot. Finally, Bero's life was about to change.

    Bero sat in his chair, which displayed his name with a single star above it. He watched the crew prepare for the next shot, in which Bero was to portray a Mexican fetal sloth. He noticed a nubile figure approaching from the fake mist...

    Bero did not recognize the pouting teen breasts and firm teen buttocks that bounced his way. The long, flowing, silken hair stirred no hormonal reaction. Natalie approached Bero, smiling coyly.

    "Hello, Bero. My name is Natalie. I am hot and young and an actress!"

    "Hello, Natalie. I am Bero. I am a lowly point-oh release. I will be replaced with superior products. I am broken and unstable."

    Natalie put her tender hand tenderly on Bero's tender chest, "You are like a cottonwood seed, caught by the March winds and blown into my back yard, where you land on my face and tickle my cute teen nose!"

    Bero's heart melted. He, at last, knew bliss!

    "Let us run away together, Bero! Let us forget the bright lights and plastic faces of Hollywood! Let us forget the wandering sheep of Slashdot! Let us merge together and make our own point release!"

    Bero leapt from his chair and clung to Natalie's neck. They laughed heartily as they walked off the set.

    Bero and Natalie sat on the beach of Cancun, absorbing the life-giving rays of sunshine. Natalie caressed Bero's shock of hair, "I have a 4 gigabyte SCSI (pronounced 'sexy') drive I'd like to install YOU on!"

  • Binary compatibility with other distributions is much less important. You can't install a .deb on RH, you can't really install a SuSE .rpm, and these days you push your luck with a mdk rpm anyway.

    It isn't really that hard to recompile OSS code for a different distribution, anyway. Commercial software companies, I guess, are SOL until most distributions have a gcc 3.0 blessed set of libraries, so they will probably have to statically link against libstdc++ if they want portability.

    RH does ship compatibility libraries whenever a new release breaks binary compatibility, though I don't know if the tricks they play will help out a binary compiled on a different distribution.
  • In my opinion, Open source is going to show more and more of these problems because of the conflict of interest between open development and maintaining commercial advantage to remain competitive.

    The fundamental issue is that if you are going to market an operating system, you need a well defined and controlled environment for building the OS, especially the compilers and linkers. In fact, the very first part of the PetrOS project was to create a compiler capable of building the kernel, which has proved fruitful because the kernel is extremely stable - I know exactly what machine codes are executing at any point in the kernel. When you are left to a 3rd party compiler, you are at the mercy of the compiler developers' interpretation of how the language should be implemented, and you even suffer the bugs they may have left in the distribution. I am only too familiar with that aspect.

    The other issue that open source developers face is the frequent version releases, some perhaps not fit for public consumption. Clearly in this case, the gcc people should have made their version numbering scheme represent the beta nature of the product. Heck, even we do that with Trumpet Winsock. Before we go to major version release, we tag the version number with a beta sub version number to clearly indicate that the product won't be supported and that it should not be used for any production implementations or distributions.

    It is for precisely these reasons that I have opted not to endorse an open source model for the PetrOS project, but rather some form of synthesis whereby key parts of the distribution are kept closed source, but allowing some of the outer edges of the project to be open source. I believe this is the only way for complicated projects like an operating system.

    The other comments I hear (hearsay???) about Red Hat being closed shop about their plans hints of corporatism.

    A word of advice to all those penguins out there. Beware the corporate world - it's ruthless and they'll take every advantage of the open source movement, even to its detriment!!! I'm certainly not saying that Red Hat is doing this, but sooner or later, some company who wants to cash in on the Open Source movement will come in and plunder all the good work that's been done. Perhaps this incident is a small warning of what *could* happen. It's happened before in other parts of society - eventually "feel good" movements get exploited in one way or another, either politically or economically.

    I could say more, but it really deserves a much better appraisal than an off the cuff comment on a forum.

    Don't get me wrong, I am in no way denigrating Open Source - I'm just saying that trying to operate it on a commercial basis is full of problems.
  • gcc 2.96 generates binary incompatible C code.
  • It is clearly marked as a snapshot(the version string says "Red Hat Linux 7.0"), and it is currently the best compiler for our needs.

    We're obviously not going to issue a "blessed" errata unless it is fully compatible - we care about our product and compatibility; having multiple releases of compilers etc. is completely unmaintainable and might confuse people developing on it.

    Anyway, this discussion is at a dead end. We released a fixed snapshot, we're not doing anything wrong with this (we should have told the gcc steering committee, but technically it was the right decision) and we're not going to break compatibility with it. EOD.

  • The C++ compatibility issues are far less important - as C++ has never been compatible anyway.

    Right on. I'm still using Pinstripe beta (so shoot me), but the most I've noticed is that Blackbox wouldn't compile until I fixed some bad code of theirs. Other than that, 2.96 works absolutely great!

  • Let's not even get into the STL implementation in egcs. When I can't use at() on an STL vector, I get deeply annoyed. Admittedly, it's a trivial thing to fix, but there are things like that all over in egcs--things which ought to be fixed, things which are trivial to fix, but which, for reasons unknown to me, aren't fixed.

    That's true, but there are replacement library implementations available, such as STLPort. Oh, and what templates does Kai C++ support that gcc does not? (2.95.2)

  • Re: STLPort

    I agree that with "aftermarket add-ons" you can get reasonable performance and compliance from egcs. I object to needing to do it in the first place. I'm saying egcs is pretty broken, out-of-the-box, and requires me to find my own solutions to its shortcomings. (I missed at() enough that I wrote my own derived class from vector to handle range checking, for instance--only about 20 lines of code, including the class definition.)

    Re: Kai C++

    Insofar as other parts of the standard which Kai C++ supports better than egcs, I'm not totally sure. All I've seen are the sales brochures, which I'm naturally skeptical of. Remember that I've never used Kai C++.
  • The fucking compiler is *fine*. It supports the fucking core language better than nearly any other compiler in existence. True, its standard library is missing a few things, BUT you can download stlport or libstdc++v3 to fix that.

    Check your facts, troll.
  • Non-biased does not mean "prefer Red Hat", but it would mean not making comments like "recovering from Red Hat Linux 7" on many stories, and, if someone submits a story which seems negative, actually doing some checking (of course, this should be done regardless, but it doesn't seem to be)

    The sites I feel most interesting these days are LWN [lwn.net] and Linux Today [linuxtoday.com]

  • As for speaking to the rest of the authors, this was a miscommunication internally (I would estimate that most come from Red Hat anyway, from Jakub Jelinek in OS Engineering and former Cygnus)

    "shipping a snapshot": This was cut from the tree a long time before shipping and then QAed and fixed. Us wanting to ship it got it huge amounts of testing and bugfixing, which would accelerate the release of GCC 3.0.

    As for KDE2, I just stated that preannouncing features of a release is bad in case it turns out not to be released when planned (2.4 kernel, KDE2 and others)

  • I don't really care too much about the binary incompatibility problem. It's a pain for libraries, but there are not yet a lot of widely used C++ libraries that I know of.

    I do care very much about my compiler breaking. I have a C++ project I've been working on for a long time, and I remember the bad old days of the 2.6.x series. *shudder* I was having to #ifdef all over to make code that closely followed what then passed for the standards. And I even purposely avoided some of the newer features like templates and exceptions.

    It's very painful to work around compiler breaks. Especially when you code like I do and the vast majority of your code is so cross-platform that you don't even need #ifdefs to get it to work for NT.

  • by teg ( 97890 ) on Saturday October 07, 2000 @07:16AM (#724759)

    Kernel development aren't the people doing the distribution - OS Engineering is. The kernel is only one (but major) part of the system.

    Backing out major items like glibc and the compiler late would be very hard after making the entire distribution on it (and if late enough, there's not enough testing either). Also, integrating these for all platforms (including IA64) would be next to impossible - the current IA64 toolchain looks a lot like the one in Red Hat Linux 7, and this isn't a coincidence. Add to that that we commit to binary compatibility for many releases, and needed better i18n support for the Japanese product. We would definitely prefer not to ship pre-releases (although heavily QAed and fixed), but we felt we didn't have much choice.

  • You just don't get it. You can not defend the actions your company has taken. You NEVER EVER release a snapshot of GCC. This is fundamentally the most important piece of the whole system. Also, it is quite apparent if you (RedHat) want to do the right thing you will release an update to replace the current GCC for RH7 with a version of GCC that is blessed by the GCC community as a valid version. I hope you (RedHat) realize that the longer you wait before fixing this issue the more your user base will lose respect for your product. Come on RedHat do the right thing. I feel an imbalance in the force...no that was redhat not thinking...
  • I expect this to change in the next 5 years. C++ is starting to be stable and well understood enough that compiler vendors can start standardizing on calling conventions and such.

    But, it will still have the 'fragile base class' problem. That one's pretty hard to fix. *sigh* *think*

  • Hear, hear! This has been my experience with gcc-2.95.2. I think its optimization could be better, and it would be nice to have support for the K6 and K7, but other than that, it's been very stable and reliable for me. I do a _lot_ of C++ coding.

  • by nihilogos ( 87025 ) on Friday October 06, 2000 @05:35PM (#724768)
    Do Red Hat users ever actually compile anything anyway?

    (boy is this going to ruin my karma)
  • I wonder if anybody has read the following, yet?

    http://lwn.net/2000/1005/a/rh-tools.php3 [lwn.net]

    This explains the whole story - looks like the left hand of the GCC crew doesn't always know what the right hand is doing.

    Anybody who's been following LWN will have been aware of this for several days now.

    It seems to me that RH had to make an ugly compromise, and just bit the bullet.
  • by 1010011010 ( 53039 ) on Friday October 06, 2000 @05:36PM (#724770) Homepage
    Oh, that's just silly. And, if you want to make it an example of "capitalism in action," include the part about their customers getting pissed, and refusing to upgrade and/or switching to other distros or even other OSes. It's pretty silly to take one specific bad thing and attribute it to a system-wide socioeconomic fault.

    ________________________________________
  • And since we then have to keep the compiler through the 7.x series:
    • egcs 1.1.x was hardly an option,
    • 2.95.2 is buggy (especially on non-x86)
    • having it not differ too much from the current IA64 tools is also important (this is also part of our build trees, and the same package needs to build on many platforms)
    • binary compatibility with C++ has always been a horrible mess

    As far as compatibility goes, glibc is the great challenge and C++ not that important: when binary compatibility is a high-priority goal, C++ has never been an option when developing for Linux.

  • I recently upgraded to 7.0 from 6.2. What a disaster. I spent a weekend patching it, and then finally backed up my data and reinstalled 7.0 from scratch. I was then greeted by a total inability to compile 2.2.17 with the gcc in RH 7 (2.96). I submitted a bug report ("compiler screwed in 7.0") and was told that it's not a bug, that I should use kgcc to compile kernels. Of course, this wasn't documented anywhere, and in spite of the fact that I chose "kernel development" in the installer, it did not install kgcc. I had to go get the CDs and install it. When I ran kgcc -v, I saw that it was gcc (egcs) 2.91.66, which is a working compiler. So I symlinked cc and gcc to kgcc and everything seems to be fine. I asked on the same bug why they ship a broken compiler and require manual installation of a working one in RH7. I was told that it's the kernel's fault, read the LKML. I think my question is still valid -- why did they ship with a broken compiler? Granted, the kernel has special facilities in its makefiles for using kgcc rather than gcc, but that doesn't solve the problem for kernel modules compiled outside the kernel tree. And I still had to hand-edit the source for the Universal Tun Driver to get it to compile right on RH7, even using "kgcc".

    What's the big advantage of 2.96? I haven't seen it yet, so if someone could please explain it to me...

    ________________________________________
  • by teg ( 97890 ) on Friday October 06, 2000 @05:53PM (#724775)

    Dunno, I don't agree with using snapshots in distributions... I find that just wrong, imho

    This is just "a snapshot of the day" - this was a snapshot from a good time before we went gold, and after that we spent lots of time QAing it and fixing it.

  • When they released 6.0 and broke so many things.

    Well, this RHCE is now quite frustrated. Looks like the Kickstart images I made are going to have to be modified to remove the gcc 2.96 packages and install 2.95.2.

    Dreaming of a RedHat X.0 release without a HUGE b0rkeness in it.
    -Rusty
  • I've been working on a port to Itanium using Intel's simulator. The gcc included with it is picky. So picky that I think there is a bug (with regard to const strings and the ?: ternary operator). But I haven't been able to find a mailing list or anything to send my questions to. Who who who??
    --
    An abstained vote is a vote for Bush and Gore.
  • by Pflipp ( 130638 ) on Saturday October 07, 2000 @08:46AM (#724779)
    When talking about what came first: the chicken or the egg, GCC is definitely the egg and GNU is the chicken. (See also the picture on gcc.gnu.org .)

    Without GCC, there wouldn't be much Free Software around. Even non-GNUish Open Source projects (most notably BSD) use GCC. Even interpreted languages use GCC, as their interpreter is often compiled :-) , etc.

    GNU software with a >= 1.0 version number is usually well-thought-out and extremely reliable. To name a simple instance, I think that for most UNIX utilities, the GNU versions just work better than e.g. the Solaris or BSD versions. GNU tar's -z option rocks, as does the option parsing system of e.g. ls , so that you can write ls foo -l as well. Bash is also an example of a solid piece of GNU software.

    You'd think that these FSF "rock solid" GNU folks would be more careful about the "egg" of their project, GCC. But what do I hear here? Buggy "stable" releases, binary incompatibilities between minor releases which are only resolved by incompatible fixes for a new major release... It's kind of a mess.

    I've also got the impression that with glibc the same issues arise, but I might be wrong.

    How can we have closed source vendors run to open source systems when things are like this?

    It's... It's...
  • 1. Mandrake 7.0 ships with a messed-up linker. Red Hat 7.0 ships with a compiler that's not even officially released. What's going on? Don't these companies know how important this stuff is (primarily when you've often got to 'make' your own binaries)?

    2. I guess they could plead ignorance, but doesn't Red Hat have Cygnus in-house? Don't any of these people talk to each other?

    3. Why do I have the sneaking suspicion that somewhere along the way this was a management decision? "Make sure to give it all the bells and whistles, guys! By the way, have we started work on the first service pack yet?"
  • I certainly hope so... with any luck, gcc 3.0 will go some way towards that goal.
  • by SuperDee ( 14231 ) on Friday October 06, 2000 @03:41PM (#724787)
    Well folks, this clearly does make matters more difficult, both for the GCC Steering Committee, which now has to deal with the repercussions of Red Hat's decision, and for people who use "2.96", which will not be binary-compatible with 2.95.2 or the upcoming 3.0.

    However, I don't think choosing which GCC version to use was this simple a matter for Red Hat. After all, they needed to maintain compatibility with the Linux kernel on the one hand, and have better I18N support on the other. The reason Red Hat included "2.96" was because they desperately wanted better I18N support... I'll bet it was probably because they are trying to compete on the international front, most particularly with GUESS WHO (hint: SuSE and TurboLinux). Heck, I'll admit I'd have a hard time deciding on this too, especially when it could mean $$$ for a company which has yet to make a profit. Then there was "KGCC", because after all, who wants a distro that has a non-working kernel?!?

    I do think Red Hat has been a bit too eager to include bleeding-edge packages in its distros, but in some cases, including this one, it is NOT just done without careful consideration. I think they should be cut just a little slack on this one.
  • Whether it was a good idea or bad remains to be determined, but at least from what I have seen as a C++ developer on linux, this will just make a bad situation worse.

    All C++ libraries distributed in rpm form need to have a specific dependency against the compiler. That is because we had 2 ABIs in common use: egcs 1.1 and gcc 2.95.2. With this snapshot and then gcc 3.0, we will end up with 4 ABIs. This will mean the very concept of distributing a C++ library will become a point of frustration. Just ask the distributors of gabber [sourceforge.net].

    I have produced two C++ libraries, gtkmm [sourceforge.net] and sigc [sourceforge.net]. With both I get a constant stream of complaints when someone takes an rpm built on one distribution (Red Hat) and places it on another (Mandrake). The result is a bug report to me that my stuff is broken, when it is really the linker not saying that the ABI is wrong.

    Constant problems with the ABI (now 3 floating around and one more on the way) will mean C++ programmers spend even more time fighting problems they don't understand. The result will be more people pissed off with C++ and Linux. In case you think the gcc steering committee can just keep the gcc 2.96 ABI: then they will also have to keep a number of outstanding bug reports which are pending further changes in the ABI. My libraries are slightly crippled in their use of dynamic_cast because of those long-standing problems.

    --Karl

  • by dvdeug ( 5033 ) <dvdeug&email,ro> on Friday October 06, 2000 @03:42PM (#724790)
    Try the gcc webpage - gcc.gnu.org
  • Bah! Icky CORBA.

    Why not just define decent abstract base classes, use them, and be done with it? But I suppose that wouldn't work if you had to change the interface. *sigh* I hate CORBA. It's an even uglier mess than the C++ ABI, and it lets people believe they're making function calls when they're really sending network messages.

  • According to the GCC release timeline [gnu.org], there have not been any official releases since 2.95.2 on 24 October 1999. Major releases appear to come once a year, so the next one should be due any time now. However, the lack of any minor releases in nearly a year - rather than every three to six months - gives the impression that development has stalled. While I'm sure GCC 3.0 will be as great as previous major releases, that's little comfort for those who just want small, minor improvements to the current version.
  • Show me where in the all-holy GPL Red Hat is prohibited or discouraged from releasing their distro with a development grade piece of code.


    If they want to stick discontinued, Pre-Alpha, Alpha, or Beta code in their final product, then by all means let them do it. The code IS freely available, right? It's freely available so people can use it, right? If those people happen to be the people at Red Hat, and they decide to put an unfinished product in their distribution, then it's their business... literally. If people don't like it, then they sure as Hell won't use it. Red Hat's commercial. They're free to make their mistakes all by themselves. Deal with it.

  • by teg ( 97890 ) on Friday October 06, 2000 @05:59PM (#724801)
    We don't want to preannounce releases or features publicly - if it turns out we can't deliver (like if we had promised KDE2 because at some point it looked like it would be ready), many will be disappointed and we will be attacked for FUDing and preannouncing the way Microsoft usually does. So we have internal policies of being careful. This does not mean not speaking to authors, of course - it's not like they would run off to some cheap tabloid, like, say, "Slashdot" (which seems to have some Red Hat-bashing and "oh, I love Debian" in every other article, which never seems to check a story for accuracy (not even if it has been published), and which has no credibility left)
  • Working with RedHat everyday I get to see all the wonderful things that they break....

    A really good one is the damn -n switch for ping on 6.2. You wouldn't believe how many times I thought I had network or host issues when the host really just didn't reverse-resolve in DNS. That was and is a stupid thing to do.

    They have also been really good about shipping packages that are way too bleeding edge.

  • by leereyno ( 32197 ) on Friday October 06, 2000 @05:17PM (#724803) Homepage Journal
    It would be really nice if there were more people who understood management issues as well as technical issues, who could swim in both ponds. Put such a person in charge and you could be sure that important issues from both the management/business/marketing side of things and the technical side of things would each be properly dealt with.

    As it is now you've either got a technical genius who knows nothing about how to run a business, or a manager who knows nothing about technical issues. People who are a little of both are rare. Bill Gates is the obvious exception to this rule. Sometimes it isn't that bad of a problem if the manager is willing to listen to the people who understand technical issues. Someone who is smart and wise enough to understand what it is that they don't know, and to listen to those who do know those things, is always an asset. But when you've got a manager who due to some psychological or emotional malfunction is unable or unwilling to listen to others, then you've got big problems.

    A company which can't attract and keep good customers because its product's quality has declined or is no longer competitive will itself decline. In that situation even the best manager can do nothing more than delay the inevitable. When management isn't willing to listen to the people upon whom their profits truly depend, those managers are maiming and sometimes murdering the company they work for. Ion Storm, John Romero's new company and producer of the not-so-thrilling Daikatana, is the perfect example. From what I hear they had a psychotic in there running the show and of course causing such huge upsets that half the developers walked on the same day.

    I think this is the reason why CIS degrees are popular. The idea being that a graduate of such a program would be skilled in management and aware enough of technical issues to be able to make decisions. The problem is that CIS majors don't learn much on the technical side of things. I work at a university in the college of business so I have some concept of what they are studying. Imagine a handful of elementary programming classes in C++ and Visual Basic in addition to some database stuff. The rest of the degree is all business related. A person like this may be more dangerous than a clueless manager because they might remember just enough from their classes to be truly dangerous, especially if they think those classes qualify them as an expert.

    I don't know what exactly is going on at Redhat. You would think that simple testing would prove to even the most pointy-haired of managers that 2.96 simply didn't work right. Things like this don't happen by accident. Either there was woefully insufficient testing, or the results of the tests were ignored. Sometimes people with good track records mess up. If that is the case here then very little needs to be done other than make sure the person or persons responsible understand where they made a mistake. If however this is due to someone who is a screw up or causes problems, get rid of them.

    Creating a company is not easy. Finding the best people you can find and constructing a work environment and system that maximizes the ability of each person to do their job can be tricky. But it can be done by people who know how. Such people cannot be deluded about their own self importance. They must understand that they are the grease that coats the gears to allow the real work to be done. The moment they forget this and begin to think that, due to their usually higher salary, that they are a gear is when the friction will begin to mount.

    Lee Reynolds
  • ... since I haven't seen a bad Slackware release yet.

    You obviously never used the Slackware '96 release. Among other things, when I tried to install it:

    - It kept trying to write the boot loader to the CD-ROM drive

    - It tried to eject the hard drive at the end of the install

    - It didn't create the "/dev/console" device

    All in all, I wasn't overly impressed.
  • I use it daily. And I've been stung by misoptimizations enough that I no longer even bother trying anything beyond -O1 on critical code. And this is on i386. RedHat releases on Alpha, which you admit "wasn't great." A lot of Alpha issues got solved post-2.95.2. I have no inside knowledge of RedHat, but this might have weighed fairly heavily in their decision.

    I've used GCC since 1.X days, and it used to be that it was the equivalent or better of almost any vendor's compiler. That's no longer true. Where I work, GCC 2.95.2 has such a bad rep for miscompiling or crashing that nearly all production code is compiled with EGCS 1.1.2 -- or GCC 2.7.2.3. We might just be unlucky, or perhaps GCC 2.95.2 has more problems when dealing with legacy C++. But the latter would still be a major defect in GCC.

    As to whether 2.95.2 is the "buggiest": I have experienced more grief from 2.95.2 than from any earlier release, and I have seen more people have problems with it than with any earlier release. I've managed to avoid the 2.8 series, though, so I'll have to take back my claim of 2.95.2 being the "buggiest," since judging by its reputation 2.8 might well have been worse.

    As for what Debian uses, 2.95.2 is also the production compiler for FreeBSD 4.x (which is most often where I use it, though I do some development on Linux and Solaris as well). And unlike Debian, the FreeBSD folks are not at all happy with it.

    I agree that GCC testing has improved a lot; I've scanned the GCC developer's list from time to time, and I'm quite impressed at how much rigor has been added to the testing process, and how it has been made an integral part of development. This is probably why snapshots seem to be a lot more solid than they did early in the EGCS project.

    This is why 2.95.2 was an enormous (negative) surprise to me. I'd talked up EGCS and the EGCS development process from EGCS 1.0 on, and converted a lot of folks to using EGCS. I had even greater hopes for it when it was made the official GCC. The testing regime seemed promising, a lot of good developers (including yourself) were slaving away at making GCC better and better, so I fully expected something "better than EGCS." Perhaps calling it "the buggiest" is a bit of an overstatement, but I feel disappointed and a bit betrayed.

    -Ed
  • by Tuxedo Mask ( 100850 ) on Friday October 06, 2000 @06:15PM (#724808) Homepage
    But when different (GNU/)Linux distributions can't run each others' binaries, you have exactly the same situation the Linux company chiefs say they won't allow to happen: effective forking of Linux. Really, this is a bit much! Distributions already don't usually run each other's packages "out of the box," as it were. Not to mention that in an operating system which spans so many architectures, *partial* binary incompatibility is really not so great a concern!

    Although I understand you have a personal interest in this software, please keep in mind that it is GPLed. It is extremely disingenuous for you first to release it to the public for general use, then to turn around and so harshly criticise someone for the crime of taking you up on your offer!
  • by Chalst ( 57653 ) on Friday October 06, 2000 @03:44PM (#724810) Homepage Journal
    The boring source of these problems is version numbering schemes that suggest that builds are actual successors to earlier releases. The Mozilla milestone approach, and the linux-kernel `test' and `pre' releases, can help to avoid this.

    Still, it is exceptionally incompetent of Redhat to release a distribution based on code generated using an unfinished compiler whose binaries are incompatible with existing official releases.

  • You didn't get your ia64 gcc version from the GCC steering committee, as there is no official ia64 release yet. So whatever you have is an early snapshot. Any ia64 gcc you can find will be a highly experimental, buggy piece of software. You might be able to get some help from the folks on the Linux IA-64 project [ia64linux.org].

  • C++ *is* working "super well" in 2.95.2, certainly better than it did in 2.7.x

    The only reason 2.7.x would accept code that 2.95.2 wouldn't would be because that code was illegal C++.
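
    To make that concrete, here's a hedged illustration (my own example, not from the thread) of one common class of code that the 2.7.x-era compiler tolerated but that was never legal ANSI C++: dependent type names used without the `typename` keyword. Stricter compilers such as 2.95.x reject the version with `typename` omitted.

    ```cpp
    #include <vector>

    // `typename` is required on dependent names: std::vector<T>::size_type
    // depends on the template parameter T. Old g++ 2.7.x would let you
    // write this without `typename`; a conforming compiler will not.
    template <class T>
    typename std::vector<T>::size_type count_all(const std::vector<T>& v)
    {
        typename std::vector<T>::size_type n = v.size();
        return n;
    }
    ```

    Drop either `typename` and a strict compiler reports an error, even though the permissive 2.7.x front end would have accepted it.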
  • Wow. That's some serious anger/annoyance! :-)

    The current problem is that the gcc and the libstdc++ appear to be intertwined at the distribution level. As you will notice at the end of my post, I thought a good STL port would have been a better solution. Upgrading to libstdc++-v3 would be even better. I can't speak to whether it is possible or wise--I'll leave that to others more in the know.

    But since RedHat is trying to put out a whole package, they tried both the compiler and the libstdc++. Perhaps there are reasons we don't know about.

    The 2.96 compiler handles C++ differently, so I wonder if there are features in it that make it more ANSI compliant--and perhaps this is why RedHat decided to go with it?

  • what percentage of software included in Redhat actually is an "official release version?" I can understand not wanting people to think your software is crap because someone else is distributing a bad version of it, but what a great opportunity to get eyeballs to shallow your bugs!
  • Just "interesting?" If Microsoft pulled a similar stunt, you guys would be up in arms. Face it, the activities of RedHat recently should really worry the OSS community.
  • by Chris Pimlott ( 16212 ) on Friday October 06, 2000 @03:52PM (#724820)
    It's not really fair to leave out Richard Henderson's explanation [lwn.net] on linux-kernel...

    Basically he says 2.95 isn't that great on non-x86 platforms and their version is much better, and that 2.95 already has incompatibilities between egcs 1.1 and the future gcc 3.0 so it doesn't make a big difference.
  • and they've tried very hard to get the rest of the industry to believe that Red Hat === Linux. I can't necessarily fault them for that; they have a fiduciary responsibility to their shareholders, and such microsoft-like tactics are the best way to build the RedHat brand.

    Hey: at least they didn't take the stock-ticker symbol LNUX.

    Before tending to their splinters, may I respectfully suggest you look to your own eye?
  • ...it certainly makes the job of anybody trying to ship "Linux binaries" (well, for C++ only, but the point still remains) considerably more difficult, and could conceivably encourage "Red Hat only" products to be shipped...

    The fact that this statement is possible worries me. First of all, it would be really nice for me as a developer to be told that library "A" is standard for all Linux distros, and to be able to (dynamically) link against it with reasonable assurance that all "Linux" platforms will have library "A" and therefore run my program. Plus, with certain applications (read: Mozilla/XFree86), I'd rather install pre-built binaries simply because attempting to compile everything is usually much more time consuming than using pre-built binaries, and is also more likely to run into problems.

    It would be nice to think that any binary compiled on any Linux system for a given platform would run on any other Linux system on the same platform with a compatible set of libraries (ie, an app linked against glibc 2.1 had better work on my system with RedHat 7.0's glibc 2.whatever). Having Nautilus bomb on me because I didn't have the appropriate compression library v0.9 really annoyed me, since I had a v1.x (forget the x) version installed. It's a newer version! I'd like it to work!

    Actually, people breaking compatibility from a beta to a release is OK. But if glibc2.2 breaks glibc2.1 apps, that'd just be bad. Those who use Windows are familiar with "DLL Hell." Sounds like Linux developers are starting to create "Library Hell." This would be a very, very bad thing.

  • by PiMan ( 2859 ) on Friday October 06, 2000 @06:33PM (#724830) Homepage
    But doesn't Red Hat own Cygnus, ie, the single largest part of GCC development? Couldn't they just ask them?

    I heard a while back the reason MS has problems was (partially) because they had no intercommunication between, say, the browser, the kernel, and the office suite teams. I always thought that free software development was better, you could just ask the next guy "Hey, does your Foo work with my Bar?" But if Red Hat can't manage to phone their other building and say "Hey, is your GCC ready for any kind of release with RH7?" it's kind of disconcerting.

    Oh well. Time for me to start evangelizing Debian to all my friends again.
  • by JoeBuck ( 7947 ) on Friday October 06, 2000 @03:53PM (#724833) Homepage

    The non-Red-Hat members of the steering committee were annoyed mainly because Red Hat did not tell us what they planned to do, and, worse, forbade their employees from telling us. Had we had some input, we could have at least discussed ways of making our lives easier (choosing a version string that makes it clearer that their compiler release was a fork, not a released 2.96, changing the address for bug reports, etc).

    The Red Hat folks say that they will do more advance communication next time. I hope so.

  • by Anonymous Coward on Friday October 06, 2000 @03:54PM (#724834)
    ...considering four out of the fourteen members of the steering committee [gnu.org] are from Red Hat.
  • by edhall ( 10025 ) <slashdot@weirdnoise.com> on Friday October 06, 2000 @03:54PM (#724836) Homepage

    While I don't agree with RedHat's decision to ship what was essentially a snapshot, in RedHat's defense I have to say that they were faced with a dilemma. They could either:

    1. Ship with what is probably the buggiest release of GCC since early days,
    2. Delay their release for some indeterminate number of months while the GCC folks either finish their ABI and decide to make an interim release of GCC or finish GCC 3.0 altogether, or
    3. Clean up a snapshot (and almost any random post-2.95.2 snapshot of GCC has been better than 2.95.2) and release that.

    They chose the last option, knowing that there was no possibility of having 3.0 compatibility aside from option #2, and that they'd at least get a stable and largely standards-compliant C++ compiler.

    After RedHat chose a snapshot, they continued to follow subsequent snapshots but because of the necessities of release engineering created patches against the original snapshot, which has led to the accusation (largely unwarranted) that they have forked GCC.

    Personally, I think they should have stuck with 2.95.2, warts and all, but it was a judgement call for them, and the path they took is not without some justification.

    (BTW, the FreeBSD folks are so disgusted with 2.95.2 that they're considering making a similar move.)

    -Ed
  • egcs still doesn't support the export keyword when applied to templates. That's a fairly significant shortcoming. The compiler is not fine, and those of us who do C++ development for a living are painfully aware of it.


    #include <iostream>

    using namespace std;

    template <class T>
    class example
    {
    public:
        T data;
        example(T stuff);
    };

    template <class T>
    example<T>::example(T stuff)
    {
        data = stuff;
        cerr << data << endl;
    }

    int main(void)
    {
        example<int>(10);
    }


    ... Now take the above code and separate it into three files. Put the template declaration into a header file, the template definition into a source file, and main() into a third file. Try and compile it.

    It won't. Why? Because you didn't use the export keyword. Ooops. What, that's not supported in egcs? Say it ain't so.

    It's in Stroustrup's The C++ Programming Language, 3rd Edition, but it sure as heck isn't supported in egcs.
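
    For what it's worth, the usual workaround in the absence of `export` is the "inclusion model": keep the template *definition* in the header, so every translation unit that instantiates the template can see it. A minimal sketch (file names are my own, hypothetical):

    ```cpp
    // example.h -- since egcs doesn't implement `export`, the template
    // definition lives in the header rather than a separate source file.
    // Every .cc file that includes this header can instantiate example<T>.
    #ifndef EXAMPLE_H
    #define EXAMPLE_H

    #include <iostream>

    template <class T>
    class example
    {
    public:
        T data;
        example(T stuff);
    };

    // The definition stays here instead of in example.cc.
    template <class T>
    example<T>::example(T stuff)
    {
        data = stuff;
        std::cerr << data << std::endl;
    }

    #endif
    ```

    The cost is longer compile times and definitions exposed in the header, which is exactly what `export` was supposed to let you avoid -- but it's the only portable option with compilers that don't support the keyword.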
  • by rjh ( 40933 )
    Yep, answers, a dime apiece, guaranteed to be worth less than what you paid for 'em... :)

    Warning: I am a C++ programmer. A pretty good one on the whole, I think. C++ is my favorite language to use and develop in, but I'm not a C++ zealot; I also use (and enjoy) Java, C, LISP and Pascal. (Yes, I like Pascal. Get over it.)

    I do not understand why c++ is shunned by so many c programmers.

    Usually because they're not very good programmers. That's the answer, point blank and simple. No language--emphasis, no language--is a universal win; every language has tradeoffs and balances. People who harp about how faulty C++ is have probably never opened their eyes enough to take a look at how faulty their favorite systems are.

    There used to be a guy where I work who ragged on me day and night about how stupid I was to like C++, or how "bloated" C++ was, or... etc. All he wanted to do was rag on C++ and harp on C. One day I got to take a look at his C code: and let me tell you, the guy couldn't code his way out of a paper bag, even if I gave him a hand grenade.

    You see the exact same thing happen with C++ zealots who scream that Java is stupid. They rant, they rave, they scale the walls... and they do this, I've usually found, because they're bad programmers. This is not limited to C and C++ holy wars: in almost any holy war, you'll find the people who are speaking the loudest are the people who know the least.

    If you want to know why C++ is shunned by so many C programmers, there's really only one way for you to find out. It's a two-step process.

    1. Become a C++ hacker.
    2. Become a C hacker.

    Once you do that, you'll see that a lot of the holy wars between C and C++ are completely bogus. Computer languages are just tools; a hacker learns how to use lots of different tools, and then uses the right tool for the job. That's all.
  • I didn't say that other distributions would not be ABLE to be redhat compatible or that they would be unable to use its version of GCC. I only said that otherwise unnecessary effort would be required. My whole point is that redhat might be doing this to force the rest of the industry to follow their lead.

    Lee Reynolds
