GNOME GUI

Guillaume Laurent On GTK And The New Inti

Posted by Hemos
from the good-times-bad-times dept.
KS writes: "Old time GNOME hacker and Slashdot familiar Guillaume Laurent has finally written up an explanation on why he left the GTK-- project. In summary, he disagrees with some of the fundamental features of GTK-- but sees a bright future for Red Hat's Inti. I don't know why but I always find these sorts of things eye opening." Update: 08/10 02:50 PM by H :Guillaume wrote me asking me to mention an update to the story.
This discussion has been archived. No new comments can be posted.

Guillaume Laurent On GTK And The New Inti

  • by Anonymous Coward
    Red Hat wants KDE dead for reasons of revenge.

    Red Hat spent a lot of money in the early days rolling out a proprietary CDE release that was supposed to be their cash cow. (Tri-Teal CDE). They had it all positioned, along with ApplixWare, etc. to take over the Linux market. Along came KDE and Lesstif, which drew all the energy away from the closed-source stuff that Red Hat had all the shrinkwrapped boxes printed up for at great cost. Red Hat ended up taking a bath as a result, and in reaction they hyped up Gnome as a counter-attack.

    All the Tri-Teal boxes probably ended up about 140 feet above the PC Junior parts in the same landfill.

    O well. Ray Noorda at Caldera is primarily involved in Linux for revenge purposes, too.
  • As for the LS120 drive, http://bugzilla.redhat.com/bugzilla/ is the right place to talk about this - we can't fix problems we aren't aware of.

    Aaaahhh, but you are aware of it. I reported the inability to create an LS120 boot disk for 6.0, and although I haven't tried it for 6.2, I guess from the above comment that it still isn't fixed.

  • And MFC is surely the canonical example of a grossly non-standard hack of a "C++" library. Non-Microsoft compilers that support MFC tend to mandate special command-line switches to enable the necessary compatibility kluges.
  • In Qt I can do it in a little less than 50 or 100 lines!

    #include <qapplication.h>
    #include <qlabel.h>

    int main( int argc, char **argv )
    {
        QApplication app( argc, argv );
        QLabel hello( "Hello World" );
        app.setMainWidget( &hello );
        hello.show();
        return app.exec();
    }

  • by weisserw (121896) on Thursday August 10, 2000 @08:43AM (#865699)
    A long, long time ago, in a university far, far away, I wrote my own GTK-C++ wrapper, titled "ObjGTK+", for many of the same reasons Guillaume mentions. The goal of my framework was to be extremely clear and concise, be well documented, and in general be focused towards making the developer feel happy, which I felt Gtk-- was not. As an example, here's "Hello, World" in ObjGTK+:



    #include <objgtk.h>

    class MainWin : public Window
    {
    public:
        MainWin();
        gint OnDeleteEvent(void);
        void OnHelloButtonClicked(void);

    private:
        Button *hello_button;
    };

    MainWin::MainWin()
    {
        SetupTable(3, 3);

        SetTitle("Hello, World");
        SetUSize(300, 150);
        SetPosition(GTK_WIN_POS_CENTER);

        hello_button = new Button("Hello!");
        hello_button->ConnectClicked(CALLBACK(OnHelloButtonClicked, this));

        Add(hello_button, 1, 2, 1, 2);
    }

    gint MainWin::OnDeleteEvent(void)
    {
        OG_Quit();
        return FALSE;
    }

    void MainWin::OnHelloButtonClicked(void)
    {
        g_message("Hello!");
    }

    int main(int argc, char *argv[])
    {
        OG_Init(&argc, &argv);

        MainWin *mainwin = new MainWin;
        mainwin->Show();

        OG_Main();

        return(0);
    }


    Anyway, as soon as this got on Freshmeat, I received some responses (a first!), some encouraging (including some w/ patches, etc.), and one from Karl admonishing me for duplicating his work and urging me to give up ObjGTK and join the Gtk-- project. Now, Karl was very courteous and he did have some points -- a lot of my complaints were really stylistic issues which could presumably be fixed, and Gtk-- had some features which I could not easily duplicate (at the time they were starting to adopt libsigc++, which Guillaume apparently thinks is badly written, but regardless was a very useful idea which worked perfectly in this context. They also had the aforementioned stack-heap duality, whereas all my widgets were on the heap. The implementation of this was so awful I can barely describe it, let alone duplicate it). ObjGTK, although it was fully documented and somewhat useful, was incomplete (hey, I had only been working on it for two weeks), and Gtk-- had classes for almost every GTK+ type and then some. And so after a few such e-mails, I removed ObjGTK from the net and subscribed to the Gtk-- mailing list.


    At this point my C++-wrapper development in Linux was pretty much over. I observed the mailing list for a couple of weeks, decided that the first thing I wanted to do was work on the documentation (at this point Gtk-- had no reference documentation, just a couple of short programs acting as a tutorial), and quickly foundered. The source was very hard to work through. The headers seemed to be in three places at once, and the methods themselves were sometimes written so circuitously that it was almost impossible to tell what they did. It was hard to separate the "real" code I was supposed to be writing from the code which was auto-generated from the GTK+ header files. And my naive intentions of "fixing" the style issues proved unworkable; Gtk--'s conventions were simply too deeply rooted and too at odds with my "simple" style of programming.


    My last run-in with Linux GUI programming was when I had the urge to continue working with an old program which had a GUI interface. I had originally written it in GTK+, then ported it to ObjGTK+ as a test of the latter's capabilities. Now since ObjGTK was dead, I figured the next logical step would be to re-write it in Gtk--. I downloaded the latest libsigc++ and libgtkmm, and got cracking. The first thing I started to write was the toolbar code. I scanned the Gtk-- documentation, found some methods which seemed appropriate, and started writing. It didn't compile because the methods didn't seem to exist. Flummoxed, I sent a message to the Gtk-- mailing list, and received a reply from Karl stating that yes, those methods didn't exist anymore, and giving me an example of the "better" way to write toolbar code, which was basically about 50 lines of horrible template code (in contrast to 40 or so lines of GTK+ code, or 20 of ObjGTK).


    That was the last straw for me...not because of the bad documentation or the weird style standards, but because of how much this situation reminded me of the chapter in the "UNIX Hater's Handbook" where the UNIX style (make it easy to program) was contrasted with the Lisp style (make it easy to use). In my mind, Karl had become too obsessed with keeping to his academic theoretical perfection, and had disregarded the thing which makes an API worthwhile, namely usability.


    Looking back I suppose one could see Gtk-- as a shame, a deplorable situation that should never have happened. Personally, I disagree. When I worked with QT, I found it easy to use but I thought their pre-processor approach was terrible. Libsigc++ was a much more elegant solution to the problem which we have Karl to thank for. Likewise, I heavily disagree with another poster here who claimed that pointers are better than references. Pointers create unsafe interfaces, whereas with a reference you are always guaranteed to be passing an object which actually exists. ObjGTK+ used pointers exclusively, and if I re-did it today I would definitely change that.


    I look forward to working with Inti; from the descriptions I've read it sounds like a good balance. I'm glad that it's there now since I no longer have the kind of free time I had in college to work on Open Source projects. Maybe soon I'll reorganize my schedule, subscribe to the Inti mailing list and start pestering them with stylistic issues :-).


    -W.W.

  • by bero-rh (98815) <.bero. .at. .redhat.com.> on Thursday August 10, 2000 @08:50AM (#865700) Homepage
    Red Hat wants KDE dead for reasons of revenge.

    Two wrongs don't make a right - and this is two wrongs. We don't want KDE dead and we have no reason to take revenge on KDE.

    Red Hat spent a lot of money in the early days rolling out a proprietary CDE release that was supposed to be their cash cow.

    Sorry, but this isn't true. Proprietary software is not and has never been a major part in Red Hat's business plans.

    At that time, there was no free and good user interface available, CDE was a kind of standard, so it got included, because a proprietary solution seemed to be better than no solution at all. We're glad that this is no longer necessary.

    They had it all positioned, along with ApplixWare, etc

    If we bought ApplixWare, somebody forgot to tell me.

    to take over the Linux market

    If that was our intention, why would we GPL our installers and permit distributions to just copy Red Hat Linux and add/remove/change some stuff?
    Can't be about acceptance - SuSE don't GPL their installer and yet they're widely accepted.

    If you had had a look at Red Hat Linux in the last couple of years, you would have noticed that we aren't including any proprietary software with the exception of Netscape (because it's important and there's no good free replacement, though Konqueror and Mozilla are getting there).
    We don't want proprietary software.

    Along came KDE and Lesstif, which drew all the energy away from the closed-source stuff that Red Hat had and all the shrinkwrapped boxes printed up for at great cost. Red Hat ended up taking a bath as result, out of reaction they hyped up Gnome as a counter-attack.

    Entirely untrue. Red Hat chose to support Gnome because of Qt (1.x)'s restrictive license. The Qt 2.x license is not a problem, the old license was.

    By the time Qt 2.0 was released and the license problem was fixed, Gnome was already usable and at a point where simply dumping it wouldn't have made sense. (Yes, I personally still prefer KDE, but Gnome isn't bad, and it would be a pity to see it go.).

    Next time, please take the time to check the facts before posting.
  • These problems should go into bugzilla, where actual developers can read them and take care of them. (Unless you have a support contract, the support people will help you with installation; some of them aren't qualified to fix bugs.)

    I reported problems back when I used Red Hat *a lot*. I pretty much stopped after 5.0-5.2 and trying out 6.0 made me feel like Red Hat was a pointless endeavor.

    We're trying not to do that - and I'd really like to know if any of your problems are still occurring with the 7.0 beta version.
    Well, I went through quite a few versions of Red Hat (started with 4.2 waaaaay back when and went to 6.0). Now I pretty much install it, play with it for an hour, then trash it and install a distro I feel I can trust. I don't try the betas of Red Hat (as I've kind of gotten the feeling that the 'releases' are beta enough).

    I guess every software company is a bit guilty about this one - if something works perfectly for you, then someone tells you it doesn't work, what would you blame first, if you don't know how much the other person knows about the piece of software?


    Well, I'm guessing this isn't a good way to make friends and influence people. Telling a user he is a moron when the software screws up without listening to what *was* done with it should be a no-no.

    screw the user if you can make a buck

    If that was our intention, we'd be making proprietary software.


    I was speaking of my own personal experience. It sure felt like a screwing for money to me.
    and sling mud when you have no facts

    That's Microsoft's job, not ours.


    I was speaking of the 'KDE sucks!' campaign that seemed to start with Red Hat. (Considering the first indication I got of that little sentiment was on a RH hosted site bragging up GNOME, I always figured that was a Red Hat sentiment.)

    Sorry to be an ass about this, but I did get burned. Just because you happen to work there (and you seem reasonable) doesn't mean you know the entire story. I'm sure you are being true to yourself in your work with/for Red Hat, and I don't begrudge you that, but Red Hat in general has pissed me off in enough ways that I'm not going to say, "Oh you're right. I was the asshole all along. And Red Hat is the greatest thing in the world." Sorry, that's not going to happen.

    If any non-Linux company had screwed up this badly I would have the same reaction (or any other Linux company for that matter). I feel I had problems. I explain those problems when it's necessary. And I'm not going to 'give another chance' to Red Hat. I have real work to do. I've been using SuSE, Debian, and Caldera since that time without problems at all, and have even traipsed into the BSD realm without problems. So my personal feeling is that there is something particular about the Red Hat way that just doesn't work (at least it doesn't for me). So that's that.
  • 4 years ago, BeOS was in developers preview. As I remember it, it didn't have a lot of technologies back then. For an example of the BeOS APIs, take a look at
    http://www-classic.be.com/documentation/be_book/index.html.
  • Granted, it's ugly, but at least in my experience, it's not as ugly in practice as it looks in theory. For GUI code, the Qt-supplied containers are all you'll need, and QString is a godsend compared to STL string, especially if you're dealing with internationalization issues.

    Down in the layers that do the actual work, I tend to use STL containers for reasons like uniformity with low-level libraries etc. I have yet to encounter a situation where that approach causes actual ugliness in my code - YMMV, of course.
  • by bero-rh (98815) <.bero. .at. .redhat.com.> on Thursday August 10, 2000 @08:58AM (#865704) Homepage
    This combined with some other really bad experiences with Red Hat (both the company and the software)

    Such as? We aren't perfect, but we're trying. ;)

    Red Hat will not be happy until it owns the Linux market. Not just in the sense of most market share, but they want all market share.

    Entirely untrue.
    We'd be stupid if we wanted that, even from a pure business point of view (hey, all those people being paid by Mandrake, SuSE, Caldera and all are FREE developers for us!).

    Similarly, why would we GPL all our developments if we wanted to shut everyone else out?

    they would have been better off to try to create some great Windows rip off. Then they could make it proprietary.

    We could make our installer proprietary. Do we?
    We could have made rpm proprietary. Do we?

    The day Red Hat starts making proprietary software without a good reason (such as having to do NDAs to be able to get a solution at all, which is bad, but the lesser of 2 evils with the alternative being forcing users on Windoze), I'm out of here, and so are a number of other developers. It won't happen.
  • Eh? How is that relevant?

  • 's all right, I was just jesting, in case you're still reading this. But, dude, are you sure you understand what you wrote in the following sentence:

    ...it's an issue of making effective use of time and deciding when it is ineffective to learn something that doesn't present itself with enough apparent benefit to be worth the cost in time to acquire that knowledge.

    Marcel Proust would have been proud.
    --

  • by Zagadka (6641)
    what are all those args for - I can't remember just now, but you need to know it.

    If you look at the javadocs [javasoft.com], it isn't too hard to figure out. (If the link doesn't work, it's because of /.'s buggy long-line breaker) And there are versions of JOptionPane.showInputDialog that take fewer parameters. They're just less flexible. You have the choice: use the simple, but less flexible function, or the more flexible, but also more complex one.

    I think Einstein said it best:

    "Things should be made as simple as possible, but no simpler."
  • and I get to sort out what went wrong
    This is the hairy part. Exceptions are, by definition, exceptional, and the code has to deal with exceptions it never envisioned.

    Not really. Exceptions are merely events that are not part of the "normal" operation of the program. Oftentimes, you have at least some idea of what could go wrong, and can either handle it or complain.

    try
      <get some data>
    except
      on ERecordNotFound do
        try
          <get data from somewhere else>
        except
          on ERecordNotFound do
            raise ENoData.Create('Could not locate the requested data');
        end;
    end;

    You can also do things like

    try
      <do some stuff>
    except
      on EArea1Error do
        <handle that error>;
      on EArea2Error do
        <handle this error>;
      on EArea3Error do
        <complain to the user because this was their problem>;
    end;
    Using delegation and frameworks from different vendors ends up with code dealing with exceptions it was never prepared to handle.

    Hopefully, the vendors documented their code properly so that you know what exceptions their functions can throw. At the very least, all of their exceptions should be children of a single, vendor-specific exception so that you can at least catch exceptions by vendor.

    As for handling completely unexpected exceptions, there's always

    try
      <call the top level functions>
    except
      on E: Exception do
        MessageDlg('Exception "' + E.Message + '" occurred. Please tell <contact person> about it.', mtError, [mbOk], 0);
    end;



    --Phil (The code is all Delphi, typed from memory, so don't complain too much about typos.)
  • > If that was our intention, why would we GPL our > installers and permit distributions to
    > just copy Red Hat Linux and add/remove/change
    > some stuff? Can't be about acceptance - SuSE
    > don't GPL their installer and yet they're
    > widely accepted.

    I think suse _did_ GPL their installer. And a while ago, too.
  • by joss (1346) on Thursday August 10, 2000 @04:09AM (#865710) Homepage
    If you're writing GUI apps in C++, fltk is the way to go. I've used X, motif, MFC, GTK, Qt, Java Swing, and god knows what else and the only pleasurable experience was with FLTK. It's very fast, very light, very simple, just lets you get on with it.

    Qt was pretty good - close second, but it's not LGPL and you need to pay for the windows version.
    Fltk is just cleaner. MFC was the worst, closely followed by swing.
  • by superlame (48021) on Thursday August 10, 2000 @04:09AM (#865711)
    You know, like most sane people, I use C++. But I don't have any problem with GTK+ being only in C. I just go on writing my C++ code and using the C interface to GTK+. Who needs a C++ wrapper?

    On a related note, using C for GTK+ was driven by more than just a love of C. It was also a practical decision because it makes it easier to use GTK with other non C programming languages, such as perl, or TCL. The only major criticism I'd have was that they didn't write a GTK-- style system simultaneously. However, I don't really care about that since I prefer my GTK+ straight anyway.
  • by Anonymous Coward
    Why not try fltk [fltk.org]? It's small, fast, portable C++.
  • Who needs another library that doesn't offer any new functionality? When I have to use GTK+ in C++ applications, I simply wrap the most commonly used components myself. What's the point of making the user obtain yet another library?

    Yesterday we were complaining about DLL hell/lack of code reuse, and here we are today talking about another implementation of something that already exists.
  • by Per Abrahamsen (1397) on Thursday August 10, 2000 @04:19AM (#865714) Homepage
    On the other hand, a zillion one-man projects don't get us anywhere. And that is what we will get, since no two programmers agree on everything.

    Which means free software programmers should think very hard before starting their own project. Are your own design ideas really so much better that they outweigh the advantage of sharing the work with other programmers in an existing project? And don't overestimate your own programming ability, like most programmers do.

    Sometimes the answer is yes. This might be one of them. But most times, it is not.
  • I've rarely ever found magazine articles to be much helpful in things.

    Well, to each his own, of course. However, just as C has its well-known idioms (the in-line copy loop being an example), so does C++. It's just that the C++ idioms tend to be more abstract (things like design patterns and such). These are usually well-explained in articles, because a book on the subject would either be too short or contain too many disparate ideas to be of any use.

    I recently read a couple articles about traits that helped me solve a problem I'd been struggling with for a week. Through the judicious use of templates and traits, I was able to greatly reduce both the size and complexity of the system I was designing.

    That I can write C in C++ is probably one of the big negatives for C++ for me. I would be so tempted to just do what I know.

    While I understand your concern, I think it is unfounded. It's how I started out, and while I don't consider myself a guru, I certainly have learned enough on large-scale projects to consider myself more than competent in C++. C++ was designed to be like C to ease the transition for C programmers, and I think it's one of its greatest strengths. One can slowly add language features to the designer's toolkit without upsetting things too much. That fact that you can make use of all the existing C libraries is a godsend.

    Why do I need to "get" encapsulation when I already have it in the abstract sense of the design? C is just the vehicle I use to bridge the abstraction-to-reality gap. Don't assume that because I code in C, that I didn't do anything object oriented in the design (I do to varying degrees in many projects).

    I hear this one a lot, and frankly (please don't take this the wrong way), I think it reveals a fundamental misunderstanding of the purpose of a high-level language. The language exists to help the programmer. It is a tool and nothing more. There is nothing "magical" about encapsulation or inheritance. As you said, one can write OO programs in any language. It's just that some languages make it easier than others.

    C++ is designed to catch errors at compile-time. Strong typing and templates make the use of void pointers almost completely unnecessary (they are useful in the implementations of templates in some cases). Compiler-enforced encapsulation makes it impossible to get at data that shouldn't be got at. I had to chuckle at the comment (in another thread) about const being a horrible component of the language. I know it has saved me countless times from obscure bugs that otherwise would have to be tracked down in the debugger.

    Why write a big, clunky switch statement when a couple of virtual functions will do the job just as well and better separate various components of the design? Another thread contains a good analogy: in C, a loop is an abstraction of a backward goto. In the same way inheritance and virtual functions are an abstraction of unions and switches.

    --

  • I just spent much of my free time over the past 2 years writing a landscape generator/modeller (see terraform.sourceforge.net) and I used Gtk-- for it. As such, I think I have some experience dealing with it.

    At the time I started writing it, QT was not free enough for me to contribute (application) code for it, and plain Gtk programming (in C) required such an insane amount of pointer casts that it was pretty obvious to me that I wanted to avoid it. I think Gtk-- is a very nice toolkit, although it requires some discipline since (like Guillaume said) it lets you shoot yourself in the foot in a variety of ways.

    That said, I really think Gtk needs a well supported and clean C++ wrapper. I don't care which one it is, but I don't want to be faced with the task of rewriting the GUI part of my application every 2 years. This kind of stuff makes me consider using QT, even though I prefer Gtk.

    Oh well, this might just be a case where the beauty of open source (lots of options and development in many directions) has come back to haunt me. I don't care what the solution is, but we need something that is the official Gtk C++ wrapper and is guaranteed to be around a few years from now.

  • indeed :) If only std::string had been commonplace when the spec was written
  • The C++ network kit is simply a wrapper for the BSD-based API. They are moving networking into the kernel now, and the API will still be C based (because no C++ is allowed in the kernel.) The network kit will still be a wrapper for the C based functions.
  • This combined with some other really bad experiences with Red Hat (both the company and the software)


    Such as? We aren't perfect, but we're trying. ;)


    I've posted this story before, and since I am now posting in the 'company' of a Red Hat employee will probably end up with a lawsuit on my hands to boot, but here goes.

    I 'purchased' a copy of Applixware back when Red Hat first started selling it from their web store. I didn't receive it for a long time (~month or so) so I called and asked. I was told that it had been shipped to the wrong address (because of a bad number entered in the zip code by the person writing the shipping label or some such) and that they would re-ship it. Another few weeks pass and I get my credit card bill. I'm billed twice for the product I have yet to receive. I call again. I'm told it got shipped to the same place again and when they received it back as a bad address (just like the first time) they just put it back and figured "You'll call us when you don't get it!" So, I ask about the second charge on my credit card and am told that it will be taken care of and the product shipped overnight to me. The product is shipped to the right address (finally), but is in fact the Red Hat OS 5.1 (or 5.2, can't remember) which was (at that time) a lot cheaper than the product I had purchased. I called my credit card to check if I had been reimbursed for the second shipment, and I hadn't. But I had been charged a third time for the same thing. I finally called Red Hat and said, forget it, and reimburse me (and told them I would send the OS back unopened). I was told I could keep the OS (the one positive thing that happened through the entire thing), but I had to accept the product that I purchased from them from a 'legal' standpoint. I said that would be fine if I ever actually *got* the product.

    Anyway, I fought it for another month or so and finally gave up completely. I then called my credit card to stop all payments to Red Hat (as they were still charging me over and over for Applix which I never received) and I was able to recover all but the first two payments (it had been too long for me to 'stop payment' on those charges by the time my frustration had boiled to that point).

    That's my problems with the company. Now for the software...

    Red Hat is the only distro that I have ever seen where network services would just fail after a random amount of time. I used to run Red Hat on a 'headless' server in my room used for network testing/file serving/print serving and other network functions. About once every two weeks (more or less sometimes) I would not be able to log in over the network, I would not be able to do anything to the system other than hit the 'reset' button. I was told by Red Hat supporters that it had to be a hardware problem. I guess that explains why I've been able to use SuSE, Debian, FreeBSD, and OpenBSD on that same box with absolutely no problems at all. In fact, the reason I finally gave up on the reboot/restart situation (I didn't know better at the time, I was used to Windows) was that Red Hat ate it so bad that I could not boot the system up to a usable state at all. And it was not cracked. It was not attached to any outside networks (I used a system separated from my network for browsing and such). I slipped a vid card in it before reloading and tried to boot. It came up OK (supposedly) but refused to accept any form of input. Then I reloaded with another distro and it was fine.

    I've also had problems with Red Hat desktop systems. I don't know what it is, but it just seems to steadily deteriorate (kind of like Windows does under heavy use). You can say it's all my own fault, that's what I've been told over and over again by Red Hat supporters and employees. But the fact remains that I don't run into this problem with *any* other distro (other than Corel) or any other OS except for Windows.

    From the above examples I hope you understand that I have valid reasons for feeling that Red Hat wants to be the next MS. I got burned by customer-service/sales. I got burned by the software repeatedly (the desktop system I tried running just continuously had problems), and I see no reason other than spite for the attacks I saw Red Hat slinging towards KDE when they first attached themselves to GNOME. Red Hat appears to me to operate on the same principles that drive Microsoft. Make crappy software, blame the user when it doesn't work, screw the user if you can make a buck, and sling mud when you have no facts.

    I realize by posting this I've opened myself up to a huge liability. Online postings are now fodder for lawsuits. And I'm quite sure Red Hat is not above that. As long as you realize that the only things I have of monetary value (in my own name) are my computers and my guitars (and I doubt Red Hat could justify legal fees for the monetary value of those) then you can understand why I don't really care if I get sued over it. I was hosed, and feel I have the right to complain. Normally I just say *really bad experiences* as I said before. But you asked...
  • It's kinda ironic you can still download both completely free. It's also kinda ironic that neither is an example of redhat selling something.
  • by Anonymous Coward
    Well, after much to and fro I am finally starting to get the picture. Gtk-- is being shit-holed, but there is currently nothing to replace it with. So, I guess we should all uninstall gtk-- if we do have it installed and await the next great thing from RedHat, which seems to consist of nothing but a web site with some papers about theory. Talk about vaporware!

    Where does this leave the C++ programmer wanting to develop Gnome apps? I don't see much to work with. Of course developing your own wrappers and classes is not so bad as a way of making the most offensive aspects of Gtk+ more palatable. But why bother when there is Qt, which even the gnomers seem to prefer behind closed doors.

    Meanwhile, Qt came out with yet another enhancement to its toolkit, including a free gui builder. Am I going to check that out? You bet. Where is the story on Qt 2.20? That seems more newsworthy than Gnomers throwing mudpies at each other about the "correct" way to build a C++ toolkit for gtk.

    Even if the Inti project is feasible and does succeed in catching up to where Gtk-- was, extending it to include Gnome is another matter. Since Gnome is built on top of Gtk, how many layers of wrapping are required? That's yet another reason to move the more generic Gnome functions into Gtk where they belong. But I understand those who have tried such a sensible policy have had a brick wall thrown up in their faces, and there does seem to be a hidden agenda to cripple Gtk by moving some much-needed functionality into Gnome, requiring the installation of Gnome to access it. Sounds a lot like MS-style tactics to me - moving needed functionality for Windows into MS Office and IE.

    So it seems that yet another year, at least, will go by before we have a usable C++ class library for developing Gnome apps. That's what this is all about - the pressure is on RedHat to deliver such a class library - but nobody seems very attentive to that.

  • I'm still reading. I do understand what I wrote, but I don't know if everyone else does. I've never read Marcel Proust. But that might happen in my personal time.

  • Eh^2? It's pretty obvious (at least to me) that Per was talking about Qt, not Gtk--, in his post...
    --
  • K+L+S! Blintz. Way to show that GTK knowledge!

    I'm personally waiting to see where GTK/GNOME go now that this fellow has left. I use KDE now, but I've seen reviews showing that GNOME runs a lot faster, so I may be switching...

  • by Eccles (932)
    Why the hell is new implemented with exceptions? Is there any sane reason for doing this?

    Abnormal program termination in Mozilla: Access Violation (0x00000000)

    Situations where the person does not check the return value of new (because they write it quickly, and then 99.8% of the time during their testing, it succeeds) abound.

    And what should you do in a constructor if a new used to initialize a member pointer variable returns NULL? You have to restructure your entire object so it is either valid or invalid, and the users of your object must check its valid/invalid state before performing operations on it. This way lies increased complexity and decreased reliability.

    Why the hell do references exist?

    References make it quite clear that a NULL object is not allowed, whereas it is quite common for a function to allow a NULL or non-NULL pointer. While you can write code that dereferences a NULL pointer and assigns that to a reference, it is the responsibility of the programmer who does the dereference to check that the pointer wasn't NULL before doing so. In contrast, it isn't clear whose responsibility it is to check that a pointer argument isn't NULL, and that way lies many, many bugs. (See the above bold, for example.)

    You can use new (std::nothrow) if you really want NULL returns on failure.
  • Qt is an impostor - a clone of MFC - and every real C++ programmer out there knows how flawed MFC is. Duplicating parts of the standard library is sufficiently ugly anyway...
  • Gotta agree with that. Nothing easier than calling OpenWindow(), CopyBitmapRastPort() and all the others. GTK/Qt are an OO nightmare (Containers? I just want to open a Window!). Ah, "The Joy Of X"...
  • Erm, if Microsoft purchases either Helix or Easel, all their work is still out there. In a couple of weeks, all the purchased stuff would have new maintainers...in fact, a lot of the maintainers don't work for these companies anyway, so everyone would keep going without a ripple.

    Likewise, if MS purchases TrollTech, then, AFAIK, Qt would still be out there, and free for noncommercial use... however, MS could simply stop selling Qt for commercial use, or make it 5,000 dollars per app sold, and, pretending we didn't have Gnome, completely destroy the Linux commercial software industry. This, BTW, is why I support Gnome. And this is why glibc is LGPL and everything. Yes, it's better to have OSS than non-OSS software on your system, but having anything is better than nothing.

    -David T. C.

  • Just because my opinion differs from yours doesn't mean it isn't uninformed. I have valid reasons for flaming Red Hat (read my second post in this thread). Someone shits on my head and tells me it's shampoo and I'm gonna get pretty damned upset. Then, if I tell someone else about it and the peanut gallery jumps up to tell me what a fuckwad I am for it, I can betcha that it's going to strike me as more than just a *bit-o-bullshit*.

    I'm glad there's always someone like you around to tell me what a worthless piece of shit I am. It keeps me from thinking too highly of myself.
  • I think the books you are looking for are available at this time. Probably even in on-line form, with an HTML tarball to download.

    That would be great. Now if I can just find the time (besides what gets wasted on /. [slashdot.org]) for it.

    They may not have been available 5 or 10 years ago because at that time C++ was not as well-established. What many people fail to understand about computer books and especially training seminars (which were VERY popular with C++ a few years ago) is that their purpose is to obfuscate and mystify, not to explain. There was a lot of money to be made hyping C++ and OOP. Now that the buzz has worn off, an emphasis on practicality and concise explanations for the experienced programmer is more valuable.

    Sounds like a lot of man pages and texinfo files I know :-)

    Most of the documentation out there for so much stuff is written with the idea of sequential reading in mind. I don't have the time to do that in most cases, so documentation that gives an introductory concept explanation (without the usual sales talk that most use as introductions), and has all the rest as a well indexed reference, would do better for me (and a lot of people I know).

    I learned C from books and school after having a little experience with assembly language and found C the easiest language I have ever had the experience of using, and one of the most flexible. It really is a portable assembly language but adds just enough abstraction to vastly speed up development.

    I still think of what is being compiled to the machine level (carefully thinking about diverse machines) even as I write code in C. I was able to write a set of functions to write and read arbitrarily sized chunks of bits (up to the size of a long) and made it not only work on both big-endian and little-endian platforms with no tests for endianness anywhere, but the two different platforms could even exchange data with each other correctly. Sometimes abstractions just get in the way, especially when dealing with the real world.

    C++, on the other hand, has lots of weird constructs and pretends to be OOP, though that is debatable. There is nothing wrong with treating it as an extension of C at first, even if purists who are hyping OOP to pad their resumes say that it requires a different way of thinking. It doesn't. The best way to understand what is really going on in C++ is to use a class browser while developing code. But there are now books which explain it all as clearly as it can be explained, if you feel like looking for them. In my opinion there remain many ambiguities in C++, so if you don't understand certain constructs, don't use them. Use something else. There's more than one way to code it in C++, as in perl.

    There's always more than one way to code it in just about any language, especially assembly.

  • Never post angry. That first sentence should be:

    ...isn't informed
  • Qt is already available under a perfectly free software license.
    Clarification: Qt Free Edition is already available under a perfectly free software license.

    According to TrollTech, Open Source and cross-platform are mutually exclusive. If I'm developing for Linux only, I can use Qt Free Edition. If I want to do cross-platform code for Linux and Windows, even if it is Open Source, I must also buy Qt Professional for Windows, at the single-developer price of $1,550.00.

    TrollTech isn't pro-Open/Free as much as they are anti-Microsoft. Proof? Questions 20 and 21 in the Qt Free Edition FAQ [trolltech.com]. Share all you want. But if you want to share with Windows users, you gotta pay. Not very neighborly of them, is it?

    Every day we're standing in a wind tunnel/Facing down the future coming fast - Rush
  • > Qt is already available under a perfectly free software license.

    Yes, but not all of Red Hat's customers write free software. Red Hat needs a competitive solution for them too. And currently Qt is much more expensive than any of the other widespread GUI tools: you can buy Delphi _and_ C++Builder _and_ Visual C++ (with MFC) for less than the price of a single Qt license.

  • Ok, here's your first C++ lover's comment :)
    You're definitely misguided about the pass-by-reference comment. Sure, pointers have their uses, and sometimes (maybe even often, depending on what you're doing) pointers are the sane choice and references aren't.
    But references guarantee you one thing: they always reference something, there's no such thing as a null reference. Functions taking pointer arguments should check for (assert() or properly handle) null pointers. A function taking a reference _knows_ that it's a valid addressable object (object as in struct, simple variable, or whatever).

    Besides, why do you mention STL along with ``dangerous interfaces''? STL is type-safe, and it's a proven implementation (usually - but there could be bugs in glibc too, remember). Using std::map<something> is *a lot* safer than re-implementing your balanced tree every time. Sure, you can use glib, but STL is type-safe and C casts/void pointers aren't.

    Did I mention that std::map<foo,bar> will also often be faster than your generic C tree? If the comparison operator for the foo type is simple (e.g. an integer compare or similar), it can be (and will be) inlined into the core map implementation, something you cannot do in C without either re-implementing your map every time or implementing it as a macro. You simply save a function call for every compare - something that is noticeable when the compare is a simple operation.

    The real problem with C++ is that people tend to think of it as object-oriented C. Well, it is, but it's also *much*much* more. A C programmer trying to bake up a ``pretty'' API in C++ will often fail - we've all seen that. But good interfaces are definitely possible - take a look at STL. And note that STL is not just object-oriented wrappers; it's a type-safe interface of objects and functions relying heavily on parameterized types, actually allowing you to write non-trivial programs that run as fast as, or faster than, equivalent C code.
    The other real problem with C++ that I have to admit to is compilation time. It is the *only* real drawback that I can point my finger at. But I can accept it. When the compiler builds type-safe trees of lists of strings for me, that run as fast as they do, I can accept the extra hardware cost as ``fair''.

  • FLTK (Fast Light ToolKit) is available at http://www.fltk.org/ [fltk.org]
    -- Floyd
  • by Sneakums (2534) on Thursday August 10, 2000 @04:31AM (#865736)
    from the looks of inti, signals are still c style callbacks with no type checking?

    No.

    Looking at the headers available at http://sources.redhat.com/inti/inti-manual/ [redhat.com] I see that each class member representing a signal is a SignalProxy, which seems to be quite like the signalling facilities provided by libsigc++. Looking at the definition of SignalProxy, its connect method is parameterized on both the type of the signalled object and the member being connected to.

    --
    "Where, where is the town? Now, it's nothing but flowers!"

  • objects are just elaborate structures with pointers to member functions, etc., when one looks beneath the surface.

    True. And loops are just glorified gotos when one looks beneath the surface. So? I don't want to look beneath the surface too often.
    --

  • by noahm (4459)
    Sure, C++ can cause much much grief to people who don't really have a firm understanding of when different language features are appropriate and how best to take advantage of them, but the author of the article mentioned being quite happy with Qt/KDE. When C++ is applied in a skillful way, it can be very very powerful. It's just that people will often try doing too much with it. C++ is a much more complex language than most people realize, but if you really know it then you can write very very good code.

    noah
  • Your post is too long and boring. You coulda taken just this one line:

    I have tried languages like CLU [...]

    and summed it all up like:

    I tried to learn a clue but couldn't.


    --
  • Maybe I'm on crack, but it sounds like RedHat is building a completely new development platform based on GTK+ and compatible at the user level with GNOME. The two don't actually have anything in common beyond GTK! And it's totally motivated by the KDE/Qt development platform at that.
  • Dude - You are definitely not insane.

    These brainwashed OOP zealots are the insane ones.

    There is nothing that can be done in C++ that cannot be done in C just as elegantly. All it does is hide the true functionality of the code. In case you haven't noticed, all the best programmers work in C. That's not because they're too stupid to learn C++. It is because they'd rather be in control. It's kinda like how serious driving enthusiasts prefer a manual transmission.
  • Most of the documentation out there for so much stuff is written with the idea of sequential reading in mind. I don't have the time to do that in most cases, so documentation that gives an introductory concept explanation (without the usual sales talk that most use as introductions), and has all the rest as a well indexed reference, would do better for me (and a lot of people I know).

    This past weekend I picked up The C++ Standard Library [bookpool.com] by Josuttis. I've found this to be a wonderful reference, with sections not only covering the STL, but also strings, numerics, iostreams, i18n and allocators. It has a good TOC and index. I've not read it straight through (or even made an attempt), but it is very easy to find what I need. Explanations are clear and concise. Reading one page of the iostreams chapter helped me successfully derive a new stream buffer and class in five minutes. All previous documents were either too esoteric or too verbose - I couldn't get my head around the problem.

    In a previous post, you suggested:

    If you still want to convert C programmers to use C++ then I suggest writing a book ... a short one, that explains every concept in C++ ... not just language syntax, but practical concepts ... clearly and concisely. Don't drag it out for newbies; focus on experienced C programmers. Explain how it is that C++ takes basic OO design concepts and puts them into a programming language. Explain how C++ behaves with each concept at higher abstract as well as lower real levels. Include a full reference section. And make sure there are examples of whole programs, not just snippets everywhere. Maybe then you might see more converts. But until someone does this, I doubt you will see very many.

    Perhaps you'd be interested in the following books:

    I've only read D&E. This is probably where you should start. It is very small. Its whole purpose is to explain why things are the way they are (i.e. you don't pay for what you don't use).

    In addition, journals like DDJ [ddj.com] and the (now-defunct) C++ Report [creport.com] have good articles about practical software development. I hear many of the C++ Report folk are heading over to the C++ User's Journal [cuj.com].

    The most important thing to remember about C++ is that it is complicated - but only as complicated as you make it. For all intents and purposes, you can write C in C++. A good place to start is using it as "C with classes" to get encapsulation, then move on to polymorphism. It's also important to understand when to use which language features (e.g. templates and specialization vs. inheritance), and books like Effective C++ [bookpool.com] help in that regard.

    Hope this helps!

    --

  • poor compiler support on all platforms

    On many platforms, yes. However compilers tend to improve. Non-exception-safe code tends to stay non-exception-safe.

    Which means that if you want highly portable C++ code now, you employ the subset of the standard that is actually implemented. Hence the use of exceptions makes less sense, as they are not feasible when portability is a requirement.

    allows for multiple exit paths

    They are fire exits.


    yup, fire exits that allow for the interruption/confusion of normal program flow... a situation which, in many cases, bypasses memory management (or other resource reference handling) facilities. i agree it can be used effectively, but the cost on design can be high. this consideration becomes more important as component object models are used from languages that support exceptions... resource management becomes a shared entity between threads, processes, and machines. this is merely a complication of exceptions, not an invalidation of the concept.

  • Ah, you just reminded me of a fourth problem:

    • Not available on Linux

    And I did specify free (as in beer).

    MFC (last I checked) also falls under the "Poor integration with std C++" clause. As does VC++, BTW.

    And, ironically, so does std::fstream, among others in std...argh!

    --

  • Gtk-- never had the backing of a company like Troll Tech or Red Hat; it was (is) one of these old-fashioned free software projects made by volunteers in their spare time. You may laugh at that, but I think they did some incredibly good work, compensating for the lack of manpower with clever design. Sometimes driven hackers can do that.

    I'm happy the design will be reused in Inti, it probably represents a much larger part of the work put into Gtk-- than the main code.

    I haven't used Gtk-- in anything but a pilot project, but it was a joy to use. Compared to Qt, I liked that it was just a GUI library (which Inti unfortunately won't be), that it used STL, and that it didn't depend on a preprocessor. I believe Gtk-- has always been driven by technical goals, not commercial or political ones, so the Qt license or "Stallman's decrees" have never been important. Inti will probably be driven by commercial goals, but if they keep the good design from Gtk-- and add full-time programming resources, that will be fine with me.

  • If Troll Tech gets bought out, Qt becomes GPL-compatible, new-style-BSD licensed. If this were to happen, the last valid objection (other than NIH syndrome) that Americans have to Qt/KDE would falter, and Linux would instantly have a very cool, de-facto standard C++ development platform. I would predict that moc would be banished in fairly short order, and a libsigc++-style standard C++ solution fitted in its place. Surely a large company such as IBM could afford such a buyout.
  • no, just looked at the sample code. ;-)


    -- Thrakkerzog
  • Well, I haven't read any details on this, but once Borland ship Kylix, I'm sure they'll include some kind of licence that doesn't make you pay for Qt when writing and deploying commercial Kylix apps. So using Kylix might be a way of "using" Qt commercially without paying for it.

    Uwe Wolfgang Radu
  • Second C++ lover's comment :)

    What do you have against exceptions? They are immensely useful. Unfortunately I have yet to see an open source C++ project that even mentions exception safety, and that's a bad thing. In particular, wrappers around callback-based C libraries have almost no chance of being exception safe. And if you use an e-unsafe library in your program, and that library utilizes callback architecture (every OO library does BTW), you can practically forget about using exceptions in your code. Too bad, no freedom of choice :(
    --

  • This is true, but starting your own project doesn't always mean designing and coding from scratch. There's nothing to prevent you borrowing heavily both ideas and code from existing open source projects when starting something new, even when your own ideals and goals are ultimately very different from the project you are borrowing from.
  • Just wait till Kylix ships. I predict there will be an explosion of apps for KDE (and Gnome), which in a couple of years will dwarf the number of apps for anything else. True, many will be trivial apps done by weekend drag-and-drop programmers, but there will also be great ones. Especially in the vertical industries, and internal home-made programs at a lot of companies, I think there will be a huge adoption of Kylix/KDE/Gnome.

    Uwe Wolfgang Radu
  • Some of the complaints against GTK-- remind me of Miguel's Let's make UNIX not suck paper. Flexibility for flexibility's sake, not setting policy, etc.

    As an amateur developer, I have encountered tools where an interface was provided, but every FAQ, on-line doc, and expert on the tool would tell you to avoid it. It always makes me wonder why in hell the interface was built in the first place.

    Many of these dangerous interfaces may be there by accident, or because the original designer did not foresee the danger in their inclusion. But that still leaves them as bad interfaces; you should try to avoid providing such interfaces in a new library or tool.

    My $2E-2 (Actually, I think it's Miguel's $0.02, because I hadn't thought about it until I read his paper yesterday :-)

    Steve
  • Inti is also a Quechuan deity.
    Oh, so that pile of stones outside the Red Hat Labs is their Open Source Pyramid. Now I understand why there are self-serve mortar and water containers next to it.
  • I shuddered every time he misused the word "whom" in that short article. So many grammatical mistakes; I wonder if such lack of attention to important detail is why GTK-- sucked.
  • Inheritance and polymorphism may not be the only features of C++, but they are the important ones. First, templates are a fairly good idea, but they often get one into trouble. A lot of people use templates when they really shouldn't. Also, templates and inheritance overlap a little bit, and the uses of each should be more clear.

    The reason the kernel, support, network and storage kits are non-OO is that it makes more sense that way. That's a big thing in a well-designed API: you have a unifying concept, but don't use it in places where it really doesn't make sense. It really doesn't make sense to have an object simply for translation-loading purposes. You could put those functions into an object, but then you end up with an object whose members have no relation to each other aside from the fact that they are in the support kit. Far more important than that, though, is the fact that many functions in those kits need to be accessible from kernel drivers, where C++ isn't really allowed.
  • I think Guillaume's nice discussion about some of the drawbacks of Gtk--'s mixed memory management policy is fine as far as it goes, but I would like to add some points.

    First, there are really many alternatives to choose from when deciding what kind of memory management to use for a C++ program. It is telling that the C++ standardization committee could only agree on one memory management class (auto_ptr [awl.com]<>), and it uses gross hacks to make the type checker do the right thing (and I'm not convinced it's right as it is).

    Ok, to get to my real point, here is a list of all memory management policies I could remember having seen used in C++:

    1) explicit deallocation (programmer responsible for deleting; e.g. C++ plain pointers)
    2) strict ownership (e.g. a creation function returning a smart pointer)
    3) transferrable ownership (e.g. auto_ptr)
    4) Stack (objects created first are deleted last)
    5) Static allocation (memory for object always exists)
    6) no deallocation (sometimes you just can leave memory as leaks)
    7) garbage collection (The garbage collector [sgi.com] takes care of deallocation)
    8) Cluster allocator (see "Ruminations on C++ [att.com]" by Andrew Koenig; basically objects are deallocated in clusters, and whenever the cluster is deallocated, all the objects in it are deallocated as well).
    9) reference counting with explicit ref/unref.
    10) Intrusive reference counting (the objects being pointed to contain a reference count)
    11) Non-intrusive reference counting (the reference count is separate from the object, e.g. like boost [boost.org] shared_ptr template)
    12) Handle-Body idiom (you write a specialized handle for managing memory for your class)
    13) Container-managed (like Gtk-- manage())
    14) Containment (like Gtk-- containment based solution)
    15) Library-owned objects (library only returns references without ownership to users)
    16) Distributed garbage collection
    17) Evictor (the objects are maintained in a fixed-size array, and the least-used objects are deleted when new objects are created that would overflow the array. When an object is next needed after being deleted, it's re-created).
    18) Copy semantics (you always do a copy)
    19) Lazy copy semantics (you make a copy when you have to)
    20) Reaper (The memory is scanned at fixed intervals for freed-up objects, and any objects marked to be deleted are freed).
    21) Shared memory allocation
    22) Persistent allocation (You mmap() some disk space for your objects, and leave it there to allow it to be used on subsequent invocations of your program)
    23) Class allocator (overloading operator new and operator delete for allocating small objects efficiently)
    24) Self-managed allocation (the object deallocates itself)
    25) Singleton (The object is allocated when it's first used, and deallocated at the end of the program)
    26) Mixture of several of the above policies

    The design space for memory allocation of C++ objects is really HUGE. So it's no wonder there is some disagreement on what is the preferred way to handle memory management, especially as many of these alternatives are actually contradictory in that it is hard to combine many of these strategies.

    I personally prefer auto_ptr combined with a non-intrusive reference counted pointer class and creation functions that return memory wrapped in auto_ptr. You do need some solution for putting references to objects in containers, plain auto_ptr doesn't work for that.
  • What are you talking about "vaporware"? Inti is mostly written, you can download the code from the web site and see for yourself.
  • Not the case, Inti will integrate fully with GNOME. It doesn't use GNOME now because it uses unstable GTK and gnome-libs uses stable GTK.
  • Ah, but the 'func(*(int*)NULL);' part is illegal in and of itself (in C and C++), or more accurately undefined. Most implementations of C++ references just lazily evaluate the core dump :-)

    "There really is no such thing as a NULL reference" really means:

    • There is no language-defined way to get one (although, as you show, most implementations will let you)
    • There is no language-defined way of testing for one
    • When one is found, the bug is clearly in creating the reference (i.e. the dereference of the NULL pointer by the caller), and never in the code attempting to use the reference

    I don't find those things to be a huge advantage for references. Nor do I find that they make my code cleaner-looking (except as a return type, so I can have lvalue functions). I find it a modest disadvantage that I don't have the "out of band" signaling path of passing a NULL to mean something "special" (a default value, or skip that part, or whatever).

    Left to my own devices I use pointers. Then again, I kinda grew up on C (and APL, but we'll not mention that again). On the other hand, I really, really love the STL. One of the few saving graces of C++. And a huge one at that. Frequently it's even significantly faster than C.

  • I started to learn Gtk-- because the rival, Qt, wasn't C++. In a visceral sense, something that required a preprocessor (moc) just wasn't C++.

    So, to me, something that leveraged templates to accomplish what was, in Qt, little more than a macro hack was a bonus - though I never did look at the internals.

    I learned C++ by first tackling the STL, and then moving on to classes and inheritance. Gtk-- fit my (odd) style more so than Qt...

    As for Gtk+: yuck...
  • It duplicates parts of the standard library. For programmers who already use the standard library, it means you have to deal with two string classes, two vector classes, etc. If nothing else, that is ugly.

    Eh? I wrote a whole (modestly) big Gtk-- program and never used strings other than the standard C++ string and standard C char*'s. Never used a vector other than the STL's. Gtk-- might (or might not) have its own string and vector classes, but I definitely didn't do anything to avoid their use, and yet I'm also not using them.

  • by spitzak (4019) on Thursday August 10, 2000 @01:41PM (#865776) Homepage
    That sample code is not fltk.

    Not sure what it is, perhaps the JX toolkit?

  • by spitzak (4019)
    I consider the drawing support to be a problem with X. One we have been living with for far too long.

    Trying to solve it in fltk would bloat it up. And a worse problem: any solution we did would not match solutions used by other programmers, so when somebody says "use the font called 'Helvetica'" they may get different results depending on the program.

    X is crap and everybody should realize that.

  • since I am now posting in the 'company' of a Red Hat employee will probably end up with a lawsuit on my hands to boot

    No way - unless you're inventing this to harm our reputation (which I don't think you are; shit happens).

    Sorry for the trouble with ApplixWare - this is probably too long ago to track down and see what caused it, so I'll take it for a typical "shit happens" thing and make sure it gets fixed at least now.

    For the software, all I can say is that I can't reproduce it and neither can our other customers - if you send me some details of the hardware (and the version of Red Hat Linux you were using), I'll check what's causing the problems. (We have upgraded some drivers; it may be a problem with one of them).

    These problems should go into bugzilla [redhat.com], where actual developers can read them and take care of them. (Unless you have a support contract, the support people will help you with installation; some of them aren't qualified to fix bugs.)

    Make crappy software

    We're trying not to do that - and I'd really like to know if any of your problems are still occurring with the 7.0 beta version.

    blame the user when it doesn't work

    I guess every software company is a bit guilty of this one - if something works perfectly for you, and then someone tells you it doesn't work, what would you blame first, if you don't know how much the other person knows about the piece of software?

    screw the user if you can make a buck

    If that was our intention, we'd be making proprietary software.

    and sling mud when you have no facts

    That's Microsoft's job, not ours.
  • NOTE: This rant isn't about anyone in particular in this story, it's just a rant I've been meaning to get off my chest for a long time.

    Until we can get out of the kindergarten-like pi$sing matches that characterise anyone in a "scene", we're sunk.

    There's ABSOLUTELY NOTHING WRONG with multiple languages, multiple toolkits, multiple libraries, etc. No one library, language, or whatever can do everything. Although I'm sure there's a sarky reply coming that C# is the best thing since sliced bread.

    I'm not making any judgments about the people involved in this, but part of me really wishes that the whole "Qt people hate GNOME people, LINUX people hate FreeBSD people, Inti people hate GTK-- people" thing would stop.

    Being "Lord High God" of a given project gives one a certain measure of prestige, sure. But shouldn't hackerdom be about skills, not who's got final signing authority over Project X? There's a HAPPY MEDIUM between anarchy and "well, if you aren't going to recognise my genius, I'm going to go play somewhere else, and take my ball with me."

    You see this in ANY crowd, from volunteer paramedics to Goths to football players to insurance salesmen.

  • I'm sorry, but... it's my private time.

    FS development is *not* done by a corporation. A manager can't tell me what I should do (and neither can a zillion users).

    If I want to start a new project, even if it doesn't further any imaginary goal of *other* people, even if a zillion other programmers have done it before me, and maybe even better; quite frankly, that's my time and my choice.

  • assert(&ref != NULL);

    No, no, no, god no. Read last month's C++ Users Journal, or Stroustrup (damn those foreigners and their hard-to-spell names!).

    Since "NULL references" are undefined, creating one can do anything. They can crash you right as they are created. They can crash you when you try to take their address.

    What you show works on some implementations, but it is a programming error, every bit as much as using memory after you free it (which normally works until you allocate more memory). Any garbage-collected implementation, or an implementation that does anything odd with references (perhaps because the CPU or the ABI is odd), can show the breakage, even if you try to check for the NULL reference. The bug is not the use of the NULL reference but its mere existence: it starts when you create the reference, and an implementation may crash at any moment after (or as) you do it. Attempts to find and avoid the NULL reference only mask the bug, and don't always mask it at that.

    Expect a future compiler rev to break such code. Expect porting the code to make the bug come alive.

    In other words, you are living the life of a VAX programmer who assumes '0 == *NULL' works and will forever work (and that 'strlen(NULL) == 0' works as well). Soon everyone will want your code ported to that hot new Sun3 platform, and you will have to find and fix all those damn bugs.
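    The usual way to avoid the problem described above is to pick the parameter type by contract: a pointer when "no object" is a legal input, a reference when the object is guaranteed to exist. A minimal sketch (function names here are illustrative, not from any of the libraries discussed):

```cpp
#include <cstddef>

// Caller may legitimately pass NULL: take a pointer and test it.
std::size_t length_or_zero(const char* s) {
    std::size_t n = 0;
    if (s) {
        while (s[n] != '\0') ++n;
    }
    return n;
}

// Caller promises the object exists: take a reference, with no NULL check.
// Testing `&a != NULL` here would only mask a bug that already happened
// at the moment the bad reference was created.
void add_five(int& a) {
    a += 5;
}
```

    With this split, the "is there an object?" question is answered once, at the pointer boundary, instead of being re-asked (unreliably) inside every function taking a reference.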

  • What I am saying is that Red Hat is spending their development money to make Red Hat (and Linux in general, since that's their strategy) more attractive to developers of both free and non-free applications.

    Troll Tech is spending their resources to make their product (Qt) more attractive to their customers, and part of their strategy is to give it away for free to people who wouldn't be able to pay anyway.

    Both strategies make commercial sense, and both strategies benefit both the free software community, and Linux(/Unix) users in general.

    I don't see the need to take sides, or to declare one the moral winner.
  • by 1010011010 (53039) on Thursday August 10, 2000 @06:53AM (#865787) Homepage
    You need more tinfoil ... the orbital mind-control laser (in pink now!) appears to be leaking through.

    Red Hat releases all of its software under the GPL. TriTeal and CDE date to before there were ANY desktop environments for Linux, beyond FVWM.

    RedHat didn't include KDE/Qt until the licensing was sorted out, because they support the GPL.

    ---- ----
  • by Skapare (16644) on Thursday August 10, 2000 @07:19AM (#865793) Homepage

    If by your remark you intended to imply and mean that programming in C (as opposed to C++) is insane, then fine, I am happily insane.

    I learned C programming not from the books (though I had both K&R and H&S handy as references to look up stuff) but from actually diving in and doing it. Having started in 1982, I've had quite a bit of time to progress to the point I am today with C programming skills. I also had the foundation of several years of assembly programming before then.

    I did not learn C from the books. I didn't read them. They didn't make sense the way they were written. Obviously they were not written for me, a very experienced programmer from the mainframe legacy with already several languages of experience. Those books appeared to be written for someone somewhere between "newbie" and "academic". They were not written for a working programmer.

    I have yet to find any programming book for any programming language which is written for an audience with specific experiences to draw on, and specific questions that need to be answered about the new language. I was lucky with C because it became quite clear with a few experiments with pointers, and dumping the generated assembly code, and testing out some kernel calls (in TOPS-20 at the time), that C was just "portable assembly code". Neither K&R nor H&S explained things that way. Nor has any other book explained the basis and concepts of their language in terms all experienced programmers could catch on to quickly.

    Learning a new language by the book is for newbies only!

    Now how does this affect C++? Unlike my relation between assembly and C, C++ has no pre-existing basis to learn it from. While I do object oriented design and then code it in assembly (did that even before I learned C) and in C, I do not comprehend the way C++ has approached the OO principles in its language design. I have tried languages like CLU (from back in the early 1980's) and Java, but they were too slow to commit to and I soon abandoned them because my projects were real, not abstract. None of the C++ books I have looked at are oriented to explaining C++'s particular OO language philosophy. Even Stroustrup's own book didn't cover it the way I needed it.

    While these books probably would work fine for a "newbie" who can spend the next couple years trying, failing, and learning from failure, they do no good for me. I cannot just stop all programming and spend the time as a "newbie" to learn something new. Every time I might even think of something I could do in C++ (or some other language I might learn) it is so tempting to go do it in C because I can whip it right out faster than in the new language, debug it quickly, and have it totally kick ass on the computer.

    While I do know of some specific shortcomings of C++ and could use those to say that C++ is not the best choice of language, I also know that shortcomings exist in C as well. But I have worked around them. Specific shortcomings aren't the point, anyway. If you want to know why it is so many people have not transitioned from C to C++, might I suggest surveying C++ programmers and ask them how many years experience they have had programming in C, and compare that with the results of a similar survey of C programmers. You can start with my 18 years experience programming in C.

    If you still want to convert C programmers to use C++ then I suggest writing a book ... a short one, that explains every concept in C++ ... not just language syntax, but practical concepts ... clearly and concisely. Don't drag it out for newbies; focus on experienced C programmers. Explain how it is that C++ takes basic OO design concepts and puts them into a programming language. Explain how C++ behaves with each concept at higher abstract as well as lower real levels. Include a full reference section. And make sure there are examples of whole programs, not just snippets everywhere. Maybe then you might see more converts. But until someone does this, I doubt you will see very many.

    Now if you do want to see an example of how I program in C, visit http://phil.ipal.org/freeware/avlmap/ [ipal.org] and take a look. I suppose this merits my "insane" label.

  • by DrSpoo (650) on Thursday August 10, 2000 @03:26AM (#865796) Homepage
    Inti looks strikingly similar to QT with its use of Slots/Signals. I don't think RedHat will ever be happy until QT and the whole KDE project is dead, and that is a shame. The licensing issue isn't really a factor anymore; it has been declared "open source" by the powers that be. Now it seems RedHat is more interested in saving face.

    For an object oriented framework class library, QT is pretty darn nice. Not just in theory, but in actuality (see KDE 2.0 beta). It's had several years to mature as well, whereas Inti is brand new and will undoubtedly go through all the same growing pains.

    At any rate...I find it hard to get excited about yet another framework (Inti) when there is a perfectly acceptable and mature one available (QT).
  • How much would Troll Tech sell QT for? Or how much would Troll Tech cost? Or how much would it cost for them to GPL QT? Every day they're probably getting more and more expensive. You could score a lot of points.
  • by Anonymous Coward
    A little editorial comment on this article would help. Where is the story here?

    The author keeps harping on how much he likes Qt, yet was working on C++ wrappers for Gtk. There was a disagreement about how to wrap Gtk, so he leaves. But does he join the new Inti project, or does he happily code with Qt and Kde while working for RedHat? I thought RedHat didn't approve of non-free tools like Qt.

    You mean there were only 3 people at the project's peak working on Gtk-- ? Now there is one, plus this new Inti project, which hasn't produced anything but white papers so far.

    I bet Qt is rolling on the floor.... Why should the Gtk-- team and Gnomers have to study Qt to do their own C++? I thought Qt was non-free and off limits to Gnomers. How much of Qt have they copied? There are certain ethics to reverse engineering, such as not looking at the source even if it is available.

    What's wrong with plain old C with the object system imposed by Gtk+? I thought that it was Stallman's decree to use plain old C allocating objects with pointers, which works for most people.

    How many apps have been written with Gtk-- to date? Would some of those who have used Gtk-- in applications which have progressed beyond the pre-alpha stage care to comment?

    What we did was just provide additional ways for the programmer to shoot himself in the foot, and nothing else. The so-called "flexibility" is in fact only in the bullet's caliber. Again, there was no real added value in what we provided. Just a lot more complexity on both our side and the users', and less safety.

    How many open source projects suffer from this type of problem? They decide to be incredibly flexible and try to please everyone, but in the end they wind up being too complicated. Commercial products have external pressures (e.g., deadlines) that help them avoid this.

    Not that I'm saying flexibility is bad - just that flexibility should be measured against ease of implementation, ease of use, and maintainability.

  • These brainwashed OOP zealots are the insane ones.

    It's these zealots you speak of that give OOP its bad name. For the most part, they don't really understand OO design, and don't know when and when not to use it, or how to use it when it is appropriate. The result is a lot of crappy OO software that shouldn't be.

    I, for one, find that the Object Oriented philosophy frequently results in much more elegant solutions, both in design and in implementation. Just because many (self-proclaimed) Object Oriented Programmers don't understand Object Oriented Programming doesn't make it a bad paradigm.

    --

    It duplicates parts of the standard library. For programmers who already use the standard library, it means you have to deal with two string classes, two vector classes, etc. If nothing else, that is ugly.
  • > But references guarantee you one thing: they
    > always reference something, there's no such thing
    > as a null reference.

    Not quite true:

    void func(int &a)
    {
        a += 5;
    }

    func(*(int*)NULL);


    That should crash quite reliably =]

    -Matt
  • Why should Gtk-- team and Gnomers have to study Qt to do their own C++ ?

    Because if someone else has attacked the problem before, you are rock stupid not to look closely at what they have done, so you can see what they did right or wrong - what might be good to copy (if it fits into your framework), and what to avoid at all costs.

    Do you think AMD isn't one of the first buyers of Intel CPUs? That they don't cut them open with surgeon's saws as soon as possible?

    Do you think that Linux kernel hackers aren't looking at NetBSD's USB system?

    Do you think McDonald's marketing teams don't eat at Burger King?

    That GM doesn't buy Ford cars and take them apart?

    How much of Qt have they copied? There are certain ethics to reverse enginering such as you don't look at the source even if it is available.

    I doubt any code was copied. Qt is a toolkit. Gtk-- wraps an existing toolkit. The slot/connection model in Gtk-- is all done within C++, while Qt uses a preprocessor, which makes Qt programs "not quite C++" - I think that would sometimes be a pain (not an insurmountable obstacle, but still a pain). Gtk-- makes templates fit the task, which seems to work quite well. I'm not sure if Qt could have done the same with the state of C++ compilers when they started. If I were to do it now I would definitely do it the Gtk-- or Inti way. Being second sometimes has huge advantages. The rest of the slot/connection model is similar in design between Qt and Gtk--, but it is also similar to Smalltalk and other systems that have come before (and I would claim both did a good thing copying a previously successful solution).

    What's wrong with plain old C with the object system imposed by Gtk+? I thought that it was Stallman's decree to use plain old C allocating objects with pointers, which works for most people.

    If you attack a strawman, at least attack the right one. I don't think Stallman has a lot to do with Gtk+; I think it was Havoc or one of the other GTK+ developers who asserts that GTK+ being in C is a major huge good thing.

    Personally I couldn't care less what language it is written in if it (a) works, and (b) has good bindings to the language I want to use. Now I realise that things never actually always work, so I will avoid using a tool if it is written in Intercal, but just because I wouldn't pick C to do an OO program in doesn't mean I'll avoid it if it has good C++ (or Java, or whatever) bindings.

    More importantly, I think the "official" argument has always been that writing in C makes it easier to have Python/Perl/Java/whatever bindings than if it were in C++. I'm not positive I agree. I also don't think I care a whole lot. If I write a GUI app I'll probably write it in C++ (or Java, but if I do it in Java I'll use Swing anyway). But the official argument has never been "C rulz, C++ blows goats".

    How many apps have been written with Gtk-- to date? Would some of those who have used Gtk-- in applications which have progressed beyond the pre-alpha stage care to comment?

    Beats me how many have passed alpha. But I can say w3juke [sourceforge.net] was really, really far easier to code up than with any toolkit I had used in the past (note I haven't used Qt), including some non-Unix ones. But w3juke has not passed alpha. Not really because of the toolkit, but because of lack of documentation, lack of available time, and the abundant laziness of the programmer.

    And of course now I want to come up with another small task to go code up in Inti just so I know how it compares...
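    The template-based slot/connection model described above - done "all within C++", as in Gtk-- or Inti, rather than through Qt's preprocessor - can be sketched in a few lines. This is an illustrative toy in modern C++, not the actual Gtk--, libsigc++, or Inti API:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Toy signal/slot: slots are ordinary callables stored by the signal,
// so connecting and emitting need no special preprocessing pass -
// everything is resolved by ordinary template instantiation.
template <typename Arg>
class Signal {
public:
    using Slot = std::function<void(Arg)>;

    void connect(Slot s) { slots_.push_back(std::move(s)); }

    // Invoke every connected slot, in connection order.
    void emit(Arg value) const {
        for (const auto& s : slots_) s(value);
    }

private:
    std::vector<Slot> slots_;
};
```

    A caller simply does `clicked.connect(...)` and later `clicked.emit(n)`; type mismatches between a signal and its slots are caught at compile time, which is the point of doing it inside the language.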

  • by joss (1346)
    Disclaimer: it might have improved, it is 6 months since I used it.

    Reliability was dreadful: lots of functions don't behave as logic dictates they should, and several things are plain broken. It takes ages to find these problems and work around them. Your prototypes look good, but then several months down the line you find peculiar bugs popping up - for instance, pop-up menus that occasionally just decide to stay visible forever.

    Complexity - huge amount to learn
    String label = (String) JOptionPane.showInputDialog(frame,
        "Input label for ",
        "Info",
        JOptionPane.QUESTION_MESSAGE,
        null,
        null,
        field.substring(i));

    what are all those args for? I can't remember just now, but you need to know them.

    Verbosity - it took me 5000 lines of fltk to replace 25k of swing.

    Performance - this was the worst. The memory footprint was horrendous, and CPU usage was bad. Stick a breakpoint in and examine the call stack sometime - it'll be at least 18 levels deep. This is a sign of shitty design, no matter what anyone tells you.

    But it's still better than MFC.
    "*(int*)NULL" isn't legal C++ - dereferencing a null pointer is undefined behavior - although most compilers accept it.
    Too bad; their discussions on the gtk-- lists were always interesting and insightful, even if somewhat longish.
    The thing is that there are too many esoteric features in C++ that would be better left out. Why the hell is new implemented with exceptions? Is there any sane reason for doing this? Why the hell do references exist? It's like Microsoft, which defines LPSOMETHING as SOMETHING*. People should be clear when the thing they are using is a pointer. The best way to use C++ is really as C with classes: for example, in an API, use it for overloadable functions, single inheritance, and not having to typedef structs. Otherwise, it is too insanely complicated to use. (For a good example of C++ gone bad, take a look at MFC.) Although, not all C++ interfaces are crappy. Take a look at Be's. It actually makes sense: they use standard C functions when it makes sense, and there are no multiply inherited templated virtual base classes.
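    On the "new is implemented with exceptions" complaint above: the standard does provide a non-throwing form for code that prefers a C-style null check. A minimal sketch (the wrapper function name is illustrative):

```cpp
#include <new>  // std::nothrow

// Plain `new` reports allocation failure by throwing std::bad_alloc;
// `new (std::nothrow)` returns a null pointer instead, so the caller
// can check the result the way C code checks malloc().
int* make_int(int value) {
    int* p = new (std::nothrow) int(value);
    return p;  // may be null if allocation failed
}
```

    So the exception behaviour is the default, not the only option; code written in the "C with classes" style can opt out per allocation.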
  • But why? Qt is already available under a perfectly free software license.

    In fact, Trolltech have said that they would GPL Qt, but they don't believe that it provides protection against non-free software dynamic linking with GPLed libraries.

    There are still a few clauses people don't like in the QPL, but as I understand it, these are being fixed.

    I know some people have a hard time with this concept, but not everybody who believes in Free Software thinks that the GPL is the only or even the best license.
  • I wrote a few simple programs with gtk--. I can say that its design does have some flaws, but you could still write some nice code with it.

    First, the programs you wrote were plain C++; that's one advantage over poor QT and MFC. That meant you weren't constrained by a foolish macro/preprocessing system. (The preprocessor that was used in building the library is no problem, IIRC.) The signaling system also was not at all bad, although it is another bloated C++ hack. :) Sorry, but I implemented expression templates and I just don't like ugly compile-time hacks any more. Hard to maintain.

    OTOH, I found a lot of things that gtk-- simply didn't bother covering. I've had to plug lots of C calls into my proggys to get the right behaviour. Anyway, it seemed to work. But try writing a drawing program in gtk-- and you're going to blow up. Another very disappointing mistake is the apparent lack of documentation. This goes for the GTK+/GNOME people on their high horses as well:

    NO LIBRARY WITHOUT COMPLETE DOCUMENTATION!!

    Get it? That's why Qt still seems to hold the edge despite the zillion disadvantages that it entertains. When you're writing with Qt, if you have a bit of experience with MFC, you can have your browser at your documentation and dive right in.

    Thanks,

  • by ryarger (69279) on Thursday August 10, 2000 @03:47AM (#865823) Homepage
    As it has already been incorrectly stated a couple of times in this discussion, it probably needs clarifying, once again: Qt IS OPEN SOURCE. Qt IS FREE SOFTWARE. No one involved disagrees with this. Not RMS, not Debian, no one. The issue is entirely the incompatibility between Qt's Open Source license and the GPL.

    Now, to stay on-topic: Personally I have no problem using the C API to GTK+, but much of the next generation of programmers will never know how to use it. I think a useful C++ wrapper to GTK+ is a good thing, and if Qt's is better then they should study it and use its good points.
  • by Anonymous Coward
    Typical example of a C++ project. I see that everywhere. Untrained people try to do things the most flexible way and end up with a mess. Stack vs. heap vs. member-based objects is typical. STL integration is about the same (i.e., much work for nothing but dangerous interfaces). Or trying to do pass-by-reference interfaces (how dangerous - pointers are much cleaner if you want pass-by-reference, because then both caller and callee _know_ what is happening). He doesn't talk about it, but there are probably a few const methods in there and throw clauses. Maybe even operator overloading, in which case we have about all the misfeatures of C++ used in a single place.

    I know that many of the C++ lovers here will disagree, but that is just because they haven't run into those walls _yet_.

    C++ gave them enough rope to hang themselves *and* hang their users. At least Guillaume Laurent understands part of the issue now...

    What makes me wonder is his feeling that C is obsolete. That's funny: the C implementation is working very well, and all the C++ wrappers failed...

    Cheers,

    --fred
  • by bero-rh (98815) <.bero. .at. .redhat.com.> on Thursday August 10, 2000 @03:48AM (#865826) Homepage
    I don't think Red Hat will ever be happy until Qt and the whole KDE project is dead

    Not quite true.
    While we're not exactly the biggest supporters of KDE and Qt, we do contribute to them, and we're including them.

    There's no point in the whole KDE/GNOME flamewars.
    Both have their good and bad points and nice applications (I guess most people who are not fanatics of either one are using a couple of GNOME applications under KDE or vice versa) - so why not give users the choice?

    This includes giving C++ programmers the choice of a sane interface to gtk+ (such as Inti). It's not an attempt to kill off Qt.

    (FYI: Yes, I have programmed in both, prefer Qt, and think gtk+ needs a more Qt'ish API for people who like C++.)
  • by spitzak (4019)
    I think you are right, I was confused by Slashdot again. It should show comments less than my threshold if there are responses greater than my threshold! The current way is misleading as to what is a response.
    This past weekend I picked up The C++ Standard Library by Josuttis. I've found this to be a wonderful reference, with sections not only covering the STL, but also strings, numerics, iostreams, i18n and allocators. It has a good TOC and index. I've not read it straight through (or even made an attempt), but it is very easy to find what I need. Explanations are clear and concise. Reading one page of the iostreams chapter helped me successfully derive a new stream buffer and class in five minutes. All previous documents were either too esoteric or too verbose -- I couldn't get my head around the problem.

    Sounds like the C++ version of Plauger's similar book for C. Something very useful.

    Perhaps you'd be interested in the following books: I've only read D&E. This is probably where you should start. It is very small. Its whole purpose is to explain why things are the way they are (i.e. you don't pay for what you don't use).

    I have D&E and read some of it. It didn't seem to be useful for learning C++ at all. I saw LSC++SD in a "brick and mortar" bookstore, but wasn't impressed enough to hold on to it for more than about 20 seconds. However, Inside the C++ Object Model [bookpool.com] sounds from the title like it might be worth looking into.

    In addition, journals like DDJ [ddj.com] and the (now-defunct) C++ Report [creport.com] have good articles about practical software development. I hear many of the C++ Report folk are heading over to the C++ User's Journal [cuj.com].

    I've rarely found magazine articles to be much help.

    The most important thing to remember about C++ is that it is complicated. But only as complicated as you make it. For all intents and purposes, you can write C in C++. A good place to start is using it as "C with classes" to get encapsulation, then move on to polymorphism. It's also important to understand when to use language features (i.e. templates and specialization vs. inheritance) and books like Effective C++ [bookpool.com] help in that regard.

    That I can write C in C++ is probably one of the big negatives for C++ for me. I would be so tempted to just do what I know. Why do I need to "get" encapsulation when I already have it in the abstract sense of the design? C is just the vehicle I use to bridge the abstraction-to-reality gap. Don't assume that because I code in C, that I didn't do anything object oriented in the design (I do to varying degrees in many projects).
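    The "C with classes" starting point mentioned above can be sketched: the same counter, first as a bare C-style struct, then with its state encapsulated behind member functions. Names here are illustrative:

```cpp
// C style: all fields public, every caller manipulates the state directly,
// and nothing enforces the invariants.
struct counter_c {
    int value;
};

// "C with classes": the state is private and every change goes through a
// member function, so the class can enforce its own invariants.
class Counter {
public:
    Counter() : value_(0) {}
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_;
};
```

    The point of the step is small but real: with `Counter`, "who can change value_?" has exactly one answer, which is the encapsulation the quoted advice says to get comfortable with before moving on to polymorphism.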

  • I got you to read it :-)

    It's not an issue of could not learn something; it's an issue of making effective use of time and deciding when it is ineffective to learn something that doesn't present itself with enough apparent benefit to be worth the cost in time to acquire that knowledge. I could spend all my life learning new stuff, and there is plenty out there to learn. But I would have accomplished nothing more than learning, and most certainly given nothing back in that process. I prefer the balance I have already taken by learning what is effective to learn, and using what I do learn, and being creative in the process.

    I'm in my second year of CS studies at the University of Antwerp. So you're free to disregard my comments if you think I'm too "green" to know how a "real" programmer thinks. At the beginning of last year my programming knowledge was mostly procedural; OO was almost alien to me. I had had some experience in Visual Basic, Delphi and Oberon *shudder*. Within one year I managed to learn C++ to such a degree that I could present an entire CGI app used to manage law texts (creating fragments, adding annotations, ...). This was co-authored with a group, but there was nothing I didn't understand about it, and in fact I wrote most of the code. So within one year (actually 8 months) I learned C++ profoundly enough that I feel I could write any type of application in it.

    Since you are at university, you are in early learning mentality. You are spending less time using what you do know, because there is less that you know. This isn't a bad thing, and in fact it is an advantage. But I do wonder if your cgi app project included full and complete documentation on the administration and maintenance of the system. Did it account for future upgrade needs? Is it scalable? Can it be run effectively in a "five 9's" environment? You'll find there are a whole lot of new things to learn in the world beyond academia. Don't misunderstand me; what you learn there is important. But it is not everything.

    So, now, if you're telling me that you don't want to invest the time to learn C++ because you don't have it, then I tell you that you don't WANT to learn it. I had other stuff to do too, other subjects that needed my attention, but I managed to learn it (btw, after 3 months of school I had already written my first app, an e-mail client in C++). If you really want to learn how C++ works, it's perfectly possible with nothing but Stroustrup's book and a simple "newbies" book (like C++ Primer by Lippman) over a period of a couple of months. The problem is that you seem to be a deeply rooted procedural programmer. There's nothing wrong with that. Anything can be written both procedurally and OO-wise. But maybe you shouldn't try to learn OO languages (please don't take this too personally). Maybe it's just not your thing. OO code cannot be thought of in machine terms. It's abstract. That's the very nature of it.

    For me to invest that time is significantly different than for you to invest the same time. In a sense, you may be right, as I don't have any great burning desire to learn C++, but I don't have any reason to shun it besides those I have mentioned. Where I am in life is entirely different than where you are. If C++ had been there when I was in school, I have no doubt I would be coding in it today, unless something had replaced it (the time since I was in school way exceeds the lifetime of C++ so far). And by the time I did learn C, which I learned only because I was desperately seeking an alternative to assembly, I had already written over 800 apps, programs, utilities, or tool kits in assembly, Fortran, and PL/1.

    Myself, I like C++. The people that have the most problems learning it (and I've been able to see this in my close environment) are those that have programmed in C for years. C++ IS C, with OO add-ons. So it's pretty hard to leave behind your old C routines and use their new OO replacements. Even for me, who had no prior C experience, it was pretty confusing at first. But, as they say, I have seen the light. I prefer OO. It's cleaner. Although C++ is not exactly the cleanest language to do it in, it is the most powerful. And it sure beats the pants off C (because it's a superset of C).

    I prefer OO for larger projects. But the OO I learned is rather different than the way it is expressed in C++, based on the fact that the two different schemes didn't mesh. Don't be so foolish as to assume that just because someone doesn't code in an OO language, they didn't design the project using OO methodologies. Now I don't always do OO, but for larger things I do because it helps organize things. But from so much practice, I can code the OO design into C quite effectively, and don't have any big need to acquire a new language just to be able to code the same design a different way.

    I do look forward to finding, some day, a clean yet effective object oriented language. C++ isn't it for me. Java could have been and it was quite close, but the run time environment, and political/legal issues, ruled it out for me. For all I know C# might well be, but I won't be interested in it until it is at least available in a standard form on Unix (not likely by Microsoft). Or maybe it won't be. If it has an obese run time environment, I will walk away from it quite happily.

    So, please don't complain that the books and other people are the reason you don't seem to "get" C++. They're not.

    As I said, the books are written specifically for a certain kind of learning style which I no longer (and will never again) do. The sad thing is they could be written in the style I would need. Indeed another reply to my post suggested a book that may have potential.

    That reminds me of a time when I asked a programmer who did all his application development in C++, and who remarked that everything he did was object oriented, "what is object oriented?". He was stumped. He knew he was doing it, because he was using an OO language (which isn't true). But he couldn't define exactly what it was.

    That's not to say that all C++ programmers don't know what OO is. But it does say that some don't, and worse, are using the language as a crutch to cover them from ineffective design.

    The sad thing about OO design is that there are no "OO whiteboards" to make sure you do it right.

  • Why does Red Hat insist on dumping Kde libs and bins into /usr instead of /opt/kde like everyone else?

    Because that's what the filesystem standards say.
    /opt is for local add-ons.
    The latest FHS standard has loosened that restriction a bit, but still says upgrades may not overwrite anything in /opt, so putting KDE in /opt/something when it is included in a distribution, while keeping updates working, is plain wrong.
    My personal preference would be /usr/kde (just like /usr/X11R6), but let's not add yet another location.

    it seems from Miguel's recent white paper "Let's Make Unix Not Suck" that he wants Gnome to be the one and only desktop system for Linux, enforced at a deeper level, even by policy embedded in the kernel.

    Miguel is not Linux and Miguel is not Gnome.
    This sort of stuff can't and won't happen.

    Even if the kernel or X guys should play some Microsoftish tricks in the X server to prevent other stuff from running (purely hypothetical, this won't happen), someone would just fork them and maintain a clean version.

    I don't think all of Miguel's ideas are bad (though I don't think any of this stuff should go into the kernel, that's part of what causes the crashes in MS OSes) - we could probably use some sort of improved component model, but only if it is generally useful and not specific to either Gnome or KDE.
  • The installer actually installs LILO and writes the configuration file to /etc/lilo.conf unless you turn it off.

    As for the LS120 drive,
    http://bugzilla.redhat.com/bugzilla/ [redhat.com] is the right place to talk about this - we can't fix problems we aren't aware of.

    There were a couple of problems in the 6.1 and 6.2 installers; most of them should be fixed in the 7.0 beta.
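    For reference, the file the installer writes looks roughly like this. A hedged sketch only: the device names, paths, and labels below are illustrative, not what any particular installation produces.

```
boot=/dev/hda           # install the boot loader in the MBR of the first disk
prompt                  # show the boot prompt instead of booting immediately
timeout=50              # wait 5 seconds (in tenths) before the default entry

image=/boot/vmlinuz     # kernel to boot
    label=linux         # name typed at the LILO prompt
    root=/dev/hda1      # partition holding the root filesystem
    read-only           # mount root read-only so fsck can run at boot
```

    After editing this file by hand, running /sbin/lilo is what actually rewrites the boot sector; the file alone changes nothing.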
