Miguel Says Unix Sucks!

alessio writes: "On the front page of Linux Weekly News there is a report from the Ottawa Linux Symposium where the adorable Miguel de Icaza supposedly states that Unix has been built wrong from the ground up." It's actually a pretty cool interview, and as always, Miguel makes his point without any candy coating! The major point is the lack of reusable code between major applications (a major problem that both KDE and GNOME have been striving to fix for some time now).
  • Digital Research. And the particular CP/M port/clone that was bought by Microsoft was QDOS by Seattle Computer.
  • by burris ( 122191 ) on Thursday July 20, 2000 @03:05PM (#918486)
    What raptor21 wants will be out in January as Mac OS X:
    1) Unified standard printing architecture.
    Mac OS X has this. Since the display uses the same imaging model as printing (PDF), you get WYSIWYG for free. To print, you just tell your objects to display themselves into a different buffer. The print panel, printer selection, print queues, and everything else are done for you and shared by every application (and it's not based on lpr).
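    A minimal sketch of what "same imaging model" buys you (all names here are hypothetical, not the real Quartz/Cocoa API): objects draw themselves into whatever context they are handed, so the window and the printed page come out of the same code path.

    # Sketch of the "one imaging model" idea: objects draw themselves
    # into whatever context they are handed, so the same code paints
    # the screen and fills the print buffer. All names are hypothetical.
    class Context:
        """Records imaging commands; a real one would rasterize or emit PDF."""
        def __init__(self, target):
            self.target = target
            self.commands = []
        def draw_text(self, x, y, text):
            self.commands.append((self.target, "text", x, y, text))

    class Label:
        def __init__(self, x, y, text):
            self.x, self.y, self.text = x, y, text
        def display(self, ctx):              # one drawing routine...
            ctx.draw_text(self.x, self.y, self.text)

    label = Label(10, 20, "Hello")
    label.display(Context("screen"))         # ...paints the window
    label.display(Context("printer"))        # ...and produces the page: WYSIWYG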
    2) Reusable components for the primary functions of applications.
    Cocoa comes with many reusable components. It has the obvious ones: text fields, buttons, scrolling views, matrices, table views, gas gauges, check boxes, pop-up lists, etc... It has many more. It has an extremely powerful multi-view text system that does multiple columns, rich text, Unicode, spell checking, etc... There's tab views and "drawers", font and color panels, color wells, split views, a database-independent access kit that rules (EOF), printing, faxing, and more. There's a document framework, undo, an extremely powerful pasteboard system, services, and filter services (plug-ins that translate files from a foreign format to something your application can understand).

    All of these resources are shared by all applications, where possible, to conserve resources. Most of them are very easy to use and many require no coding to set up. For instance, to add retractable drawers to the sides of your windows, you just drag-connect lines from the drawer instance to the window instance, to the view to be contained inside the drawer, and a line from the button/actuator widget to the drawer instance, and boom, you are in business. No coding...

    3) A standard for user interfaces (menu options etc.), like Edit->Preferences and not Tools->Options, file properties, and every other place.
    Apple certainly has the best reputation for this. All of these details are specified in a UI guidelines document and standard menu configurations are built into InterfaceBuilder.
    4) A standard method for software installation, like src goes here and binaries go here and so on. An API to make installation easy, such that icons get put in the menu and links get created automatically on the desktop.
    OS X has a nice built-in software installer. When you install, it leaves a receipt you can click on to uninstall or just compress some software.

    OS X has a very powerful "Bundle" system (from NeXT). A bundle is a directory containing various subdirectories that contain application resources (binaries, source, headers, documentation, images, sounds, strings to be displayed to the user, UIs, etc.). Localizable resources (like strings, images, UIs) are kept in separate directories for the region/language the resources are specific to. The Bundle class automatically fetches the proper localized resource based on the user's localization preferences. The application itself is a bundle, and there are bundles known as "Frameworks" for shared libraries. Frameworks can contain anything (code, headers, source, docs, images, sounds, etc.) and are stored together and are versioned (two or more different versions coexist peacefully: no more problems of a newly installed app installing an incompatible version on top of an existing version).
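    The lookup idea, as a rough Python sketch (directory layout modeled on the NeXT-style .lproj convention; the function itself is hypothetical, not the real NSBundle API):

    # Hypothetical bundle-style localized resource lookup: try each of
    # the user's preferred languages in order, then fall back to a
    # default .lproj directory.
    import os

    def localized_path(bundle_dir, resource, preferred=("de", "fr"), default="English"):
        for lang in list(preferred) + [default]:
            candidate = os.path.join(bundle_dir, "Resources", lang + ".lproj", resource)
            if os.path.exists(candidate):
                return candidate
        return None

    # localized_path("/Apps/Mail.app", "Greeting.strings") returns the first
    # match for the user's language preferences, else the English version.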

    No API is needed for putting icons into the dock since the user can simply drag the application icon there himself; no having to drag icons into some obscure folder deep inside the system hierarchy.

    Oh yeah, it's all running on BSD Unix with a Mach kernel, the sources of which are available here [apple.com].

    So you see, Unix can be made into a modern operating environment for all users, with a consistent user interface, and an API that is a joy to use for developers. However, they didn't build it on X and you'll probably have to buy a Mac to get it for now.

    Burris

  • I agree that it could have been fixed a lot easier. There should have been a common API for these things. It follows a pattern with Microsoft: coming up with something without thinking through the implications or building safeguards. Unlike others, I think Microsoft does do some (minor) innovations, usually just extending others' work in minor ways. Take dynamic linking: they made it pervasive in the OS and easy to use. But they should have realized it would be abused and needed safeguards like the API thing you suggested. Instead, we have this hack to protect system .dlls in Windows 2000.
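    Something like this sketch is the kind of safeguard being wished for, assuming every shared component carries a version number (the require() helper is hypothetical, not any real loader API):

    # Hypothetical load-time safeguard: refuse to bind against a shared
    # component whose version is older than what the caller was built
    # for, instead of crashing later inside the library.
    import importlib

    def require(module_name, minimum):
        mod = importlib.import_module(module_name)
        raw = getattr(mod, "__version__", "0")
        version = tuple(int(p) for p in raw.split(".") if p.isdigit())
        if version < minimum:
            raise ImportError("%s %r is older than required %r"
                              % (module_name, version, minimum))
        return mod

    json = require("json", (2, 0))   # fails loudly at load time, not at run time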
    ---
  • Of course, I would argue that engineering a properly designed system cannot be DONE with open source. The whole premise of open source is not doing design, but hacking code. This premise is very well documented by ESR in The Cathedral and the Bazaar and in The Unix Philosophy by Gancarz.

    For a properly engineered system you need discipline, and you need rigid standards. You don't just hack code together, and if you do, you'll just get another system just as bad as Unix. Good engineering is premised on good design, and the bazaar skips this step. Good engineering is a cathedral. It's not a matter of coding; it's a matter of discipline, design, and standards.

    Your concerns would be more substantial if you'd stop confusing (as I've seen you do before) "open source" with "bazaar-style development", and "bazaar-style development" with "bad engineering".

    They are each entirely orthogonal. Just as proprietary development, as such, never assures good engineering, neither open-source nor bazaar-style development, specifically, assures bad engineering.

    After all, several highly visible open-source projects were developed cathedral-style, and are considered very good in terms of quality (perhaps "category-beaters"): GNU Emacs and GCC come to mind.

    I used the "cathedral" approach to develop g77, also to assure quality, even before I understood it as a "cathedral" model, and after I did, I often resisted "bazaar-style" attempts to "improve" it when I felt they didn't, or wouldn't, meet the quality criteria I tried to uphold for it. (Failures being due to my own personal failings, at least mostly, not the fact that g77 was open-sourced! See my GNU Fortran (g95) page [std.com] for more info.)

    Certainly I agree with your implication that much open-source/bazaar-developed software, including some widely celebrated, is developed to a lower standard of engineering quality than should be the case for products of their ilk.

    But the fault is not that they're open source, or developed bazaar-style. Those are "features" that allow many more developers to participate, with less up-front investment overall, for better or worse (depending on the quality of the developers, and especially their "developments", e.g. patches, as allowed by the project maintainers).

    As far as these three concepts being entirely orthogonal, what I said above is not quite true...

    ...because I'm generally of the opinion that there's insufficient quality assurance of a public software product if that product is not open-sourced.

    That is, without the public being able to view, modify, and try out the source code for a public software product (whether it's a distribution, like Windows or Linux, or the software running a public web site like slashdot.org or etrade.com), I don't see how anyone can claim their public quality assurance can reach the same high level that it (theoretically) could if it was open-sourced.

    Of course, opponents of open-sourcing have long argued that without up-front investments of capital, quality is not affordable.

    That may be true, but IMO the more pertinent issue is that only via open-sourcing can everyone determine for themselves whether the up-front investments that have been made have indeed resulted in a product of sufficient quality.

    So it often amuses me to see people like yourself essentially (as you appear to do) prefer to blindly trust some corporation to produce quality software on the theory that they had the money to do it, instead of insisting on the product being open-sourced so you don't have to trust it, and can look at the code instead, discuss it with friends, muck around with it to see how robust, extensible, stable, etc. it is, and so on.

    Because, in the end, as much as you liked VAX/VMS, in the short time I worked on that type of system, it crashed many more times than Linux has ever crashed on me (about 10 years using Linux versus maybe 3 using VAX/VMS).

    And when I found a bug in the Linux kernel (long ago), I reported it and it got fixed very quickly. (Probably because I provided a patch.) I found it only because I happened to be looking through the source code, not because I actually ran into the bug! (It involved fouling up group-protections of files in one place, IIRC.)

    But when I ran into a bug in VMS, it took a long time to demonstrate it sufficiently as a bug to my management so I could view the source on microfiche, track it down, and then send it to the Black Hole of DEC. To my knowledge, it was never fixed. (It involved random hangs while doing straightforward, but asynchronous, I/O to normal text files. That got me much better performance on a text-to-PostScript converter I'd written, but I had to back it down to using synch I/O, thanks to the bug.)

    Had VMS been open-sourced, not only would I have been more easily able to find and fix that bug and get it out to others...

    ...but you would still be able to use VMS on many different kinds of hardware, today, instead of (presumably, as I do for TOPS-10 and especially ITS ;-) bemoaning its "loss" to the community.

    Whereas those shops that committed to Unix in the early '70s on the basis that it was lean, mean, and came with source code are still able to preserve a substantial portion of that investment by using *BSD and Linux systems, which support a dizzying array of hardware (CPUs and other components), allowing people to pick the hardware that best suits their present needs.

    So, open source is not a panacea, neither is bazaar-style development (despite ESR's tendency to write as if it is), but they aren't inherently going to do anything but improve quality over the long run, since quality includes viability of investment in technologies over time as a component.

  • by matthew_gream ( 113862 ) on Thursday July 20, 2000 @06:18AM (#918498) Homepage

    Consider that Unix and Open Source development works like a free market: there is a lot of variety, and that causes problems, but people do want simpler solutions. Instead of 'staying with' some simpler solution imposed upon them, people choose the best available (e.g. Red Hat) and run with it, and then so does everyone else, and the bad solutions die.

    The interesting comment about people developing window manager skins reflects this: people get fed up with too many window managers, and start to develop skins. Then it becomes possible to have any 'style' of window manager sitting above a 'core' window manager, so everyone starts to choose the best 'core' window manager. At the end of the day, you have the best solution: an excellent 'core' window manager, and an excellent freedom of different 'styles'.

    The free market has decided.

  • Like I said, X is a framebuffer. It's really just an abstraction layer on top of /dev/fb0. There is no application toolkit. You could do something like BeOS on top of X, because then you're just replacing the framebuffer... big deal.
  • Unix's problems come from its longstanding approach of not deciding policy. The kernel does not decide policy; neither does the C library, the X libraries, or the window system in general. The people who decided that X users could pick their own window manager created a situation where there were many, many window managers to choose from; "they were smoking crack."

    So what's he offering to do? Start "deciding policy" for us? Is this a thinly veiled excuse for heavy-handed GNOMification of existing apps like xscreensaver, rather than the more sensible solution of letting them be visible through GNOME?

    No, you miss the point. It's not about Gnome deciding policy. It's about creating libraries and APIs to hack against (if you want to; there will always be choice, as Miguel or anyone else in the Gnome camp will tell you) for common tasks such as printing, image manipulation, font rendering, the toolkit, etc. So, by *choice*, if you want, you can use a set of applications that have some commonality.

    Miguel is an open admirer of how Microsoft does software development.

    Someone please tell me this is a belated April Fools joke!

    It's not a joke, but it might not mean what you think it means. I've had the opportunity to talk with Miguel (while waiting for Phantom Menace to start) about his views on software design, and particularly how Microsoft does it. What he admires about Microsoft is their reuse of code through a set of common libraries and their component architecture. Granted, if the code is unstable, you're going to have a lot of unstable applications. That's where he doesn't admire Microsoft. So he wants to pick the good things that Microsoft is doing and improve on them by writing stable, quality code. Check out the code to gnumeric some time if you want to see some beautiful code.
    ----

  • Reusable code isn't everything.

    I'd recommend designed APIs over evolved ones... I hate to kick UNIX while it is down, but the new QNX/Neutrino system APIs are very well designed. You don't know what you are missing until you see them. There are about 30 or so system calls total. For example, all timer events are encapsulated in one system call. They have implemented their POSIX, BSD, etc. interfaces as a compatibility layer on top of these calls. And I'm sure you are all sick of hearing about message-passing microkernels, but doing IPC with one system call that reads like "get these buffers to that process and bring me its response" is soooo nice.
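    A toy model of that send/receive/reply pattern, using Python threads and queues rather than the real QNX kernel call:

    # Toy model of QNX-style synchronous message passing ("get these
    # buffers to that process and bring me its response"); the real
    # thing is a single kernel Send/Receive/Reply primitive.
    import queue, threading

    requests = queue.Queue()

    def server():
        while True:
            msg, reply_box = requests.get()      # Receive
            reply_box.put(msg.upper())           # Reply

    threading.Thread(target=server, daemon=True).start()

    def send(msg):
        reply_box = queue.Queue(maxsize=1)
        requests.put((msg, reply_box))           # Send...
        return reply_box.get()                   # ...and block for the reply

    print(send("hello"))                         # -> HELLO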

    What I'm wondering is when someone will write a module for Linux that makes a really clean, well-designed system API like this available. Just because Linux performs UNIX calls doesn't make it a legacy design under the hood. It just supports a legacy system.

    One last point: UNIX support is very, very good. The existing free software code base is invaluable, and it works just fine as it is. There is no point dumping support for it, considering the limited expense of providing a UNIX system call emulation vs. the thousands of man-years of work that have gone into it.

  • Miguel is putting too much emphasis on code reuse and is basically wrong when he says that there is zero code reuse in popular packages like Netscape and Star Office. Microsoft component reuse is high because of market dominance and because Microsoft actually has a working component model (COM).

    Unix, while having no component model, has things going for it that outweigh reuse. The list is ridiculously long but "free" is at the top of my list.

    And, surprise, Unix has a component model now -- in fact, two of them. They are called JavaBeans and Enterprise JavaBeans (EJB). One for CORBA is in the works. Bye-bye Microsoft.

  • Your unfortunate experience with X programming is proof of why Miguel is not happy with the state of Linux as it stands.

    Actually he has gone beyond that and is upset about other things I haven't been troubled by yet. Things that may actually be problems. They are also things he is working on fixing, so I'm content to let him complain. When he is done we will see if he was right.

    You can bash Microsoft all you want, but the fact that there -is- standardization for API's and writing drivers makes life much easier for people developing programs.

    Yes, that makes things simpler in the Windows world (even if the "standard" APIs change every few years, or less). Lack of stability (as in frequent crashes) makes it harder. Gain a little, lose a little. Having done very little Windows development I don't have a very informed opinion on which is nicer. My little side trip into it felt unpleasant, but it may have gotten better if I had stuck with it.

    Besides, if you want to make money, you do want to write for essentially 85% of the world's desktop computer users anyway.

    I seem to make quite enough money writing server applications for Unix systems, and the support issues are less of a pain there too.

    The "desktop" stuff I do are all hobby projects. Either something I do because I want to see how easy it is to do (like streaming audio over a normal HTTP chanel), or to learn something I can't "afford" to take the risk for in a comercial project (like the STL), or because it just plain looks fun (xtank -- which had it's own threading system).

    Also I have a pet theory about desktop tools. The Unix market may be pretty small, but it is also wide open. If I had a new structured drawing program, I might do better trying to sell it into the Unix market, where there isn't a market leader, rather than going head to head with Visio on Windows. But that is just a theory. I'll stick to server apps. I'm good at it, and they pay well.

  • That's why it's not going to take over the desktop anytime soon. And frankly I couldn't care less about the millions of home users who don't want to do more than turn on their computer and fire up AOL. They'll end up running Linux anyway. They'll end up running it on their cellphones and their PDAs, in their car computers and their cable TV set-top boxes. When they go to work, the corporate fileshares will be big iron running Linux. They'll never know they're running it, but they'll be running it.

    Within the next few years, Linux will dominate every aspect of computing, because it makes sense. A royalty-free OS that levels the playing field. It would be stupid NOT to adopt it. So WHAT if the desktop is the last thing to fall to us, rather than the first?

  • by phred ( 14852 ) on Thursday July 20, 2000 @04:21PM (#918514)
    I gather from the reports that Miguel de Icaza's speech was somewhat reduced in scope from his Usenix presentation.

    At Usenix, his talk started from the premise that "the kernel sucks" but only as a springboard to cover extensively the approach, the philosophy really, of moving away from kernel-centric development to a component focus.

    Now as a relative old schooler in all this I applaud the notion that every generation needs to overthrow the excesses and cruft of the previous one, so to that extent Miguel's to-the-barricades rhetoric is welcome. Unix, and Linux, have become a sprawling pasted-together mess, which is evident if you compare, say, Aeleen Frisch's first system admin book in 1991 with what there is now.

    And some of the principles Microsoft has embraced in software architecture may also be applauded. Although I hasten to add that their implementation of those foundations has broken every conceivable rule of software architecture/engineering, not to mention common sense. Nevertheless, I think Miguel's willingness to learn from good principles wherever they may be found is also welcome.

    But where I part ways is with his proposed grand solution space, which basically amounts to: CORBA.

    CORBA is yet another sprawling, somewhat incoherent and definitely incomplete attempt to Make the World Behave Like We Say It Should Or We'll Stamp Our Little Feet. I have long felt that the dependence on CORBA, not merely the availability, is a millstone around the neck of both GNOME and KDE.

    I've read quite a bit of CORBA and component model advocacy, and it reminds me all too much of IBM-think circa the mid-1980s. "You will use SNA because it's good for you. Here, just implement this spec that comes on four bookshelves of binders."

    The brilliance of the UNIX philosophy is scalability built on self-evolving systems, not based on universal frameworks that try to provide order through mapping. As has often been noted, the map is not the territory. And it should not be.

    But mapping (metaphorically, if not in its strict technical sense) is what component architectures are all about.

    DLL Hell was just the first phase of this. You can argue that DLLs are not "components" by the standard definition, but they are component-like and function in many ways as if they were in such a framework. COM rationalizes and makes DLL World somewhat more orthogonal to the component model, and that is the positive sense that Miguel seems to respond to. I can see some merit to the reuse and iterability inherent in this approach.

    But that is entirely a developer-centric approach, and this is where I think Miguel's vision will be sorely tested, and emerge as at best a mixed bag: one part fixing some rather sticky issues for GUI development and near-field reusability, one part creating a whole new layer of complexity and frustration for the user, the system planner, and the sysadmin.

    Components are not static; they evolve, and they evolve both in form and function. In other words, wish as much as you might for a static API for a given component (as Miguel sort of did during his Usenix speech), but it's not gonna happen. Both what the component does and what hooks and appearances it presents to the world are destined to -- in fact must -- change over time. That's the lesson we learned as soon as Microsoft put out the first revision to the first DLL.

    It is simply impossible to imagine some Component World Authority that has the job of telling every component architect and developer: this is your feature set, this is your quest: go forth (or go C++, or go Java) and Make It So, and Thus Shall It Always Be! Nuh uh, not gonna happen.

    The advantage of reusable libraries as in C compilers is that while some variation in them may be permitted (whether this is a good thing or not is circumstance-dependent), you are basically faced with a binary result: either the code compiles or it doesn't. Once it does, you have a static binary that will continue to work as long as most of the underlying OS stuff remains the same.

    In Component World, you are now at the mercy of component dependence every single time you run code. And since the component framework is both (1) more dynamic and (2) more distributed than we are used to with our desktop computers these days, this is going to pose major problems. Some of these have already been noted by the GNOME skeptics who have posted here (including at least one well known GNOME developer).

    The problems are inherent in component architecture: compatibility, resilience, security. This is less of an issue when all the components reside in devices connected to one backplane, usually inside one metal box. But with distributed apps and, probably more importantly, mobile apps, this is increasingly going to pose problems.

    Remember when you installed some random program in Win 9x and it changed a DLL so that your email didn't work any more? At least you have the ability to reinstall DLLs/programs/the system itself (depending on the severity) to deal with the compatibility problem.

    W2K supposedly deals with this by creating its own little mini-World Component Authority backed by an internal database subject to all the usual database reliability and performance issues (plus of course it's closed source). All this does is allow a bigger mess to be made at some point.

    But what about this? You're running a nice little cell phone/PIM gadget that is built on GNOME and CORBA, and suddenly you can't get your email any more because some schmucko at a service center upgraded to the latest/spiffiest version of a component your handheld relies on via its mobile link to do its work.

    Welcome to Component Hell.

    As a non-developer and mere observer of the passing landscape, I would be happy to have someone come along and explain exactly why I am all wet. But for the moment, I am persuaded by Miguel's disdain for the suckiness of the kernel, and completely unpersuaded why components, as instantiated by CORBA and GNOME, are a universal solution rather than a local fix.

    -------

  • You can tell it's been a while since I've used Windows!! My point still stands, I think. An application should have as its leftmost and topmost menu something other than "File" -- even if it is just "Options" and "Exit". Don't bother me with details like "Minesweeper doesn't have a File menu". You get my point. But thanks for the correction.
  • by jbarnett ( 127033 ) on Thursday July 20, 2000 @06:22AM (#918521) Homepage

    Miguel Says Unix Sucks! -- SlashDot News Headline

    Now if I posted to SlashDot that Unix Sucks! I would get -1 troll....

    Not saying I would knock Unix, though.

  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Thursday July 20, 2000 @06:24AM (#918522) Homepage Journal
    less sucks less more than more. That's why I use more less, and less more.

    Another one:

    There's a town in MI outside Detroit called Novi. Everyone uses emacs there.

  • I always hear the GNOME and KDE guys talking about the value of code reuse. As a developer, I must agree. However, the most valuable incarnation of code reuse at this point in Linux GUI development would be between GNOME and KDE. How many things between them have been needlessly duplicated?

    Enter the egos. Gnome and KDE will not cooperate with each other, even at the basic levels. If the lack of code reuse is something that really gets Miguel's goat, then perhaps a stronger effort should be made when negotiating with the KDE developers...

    Matthew
  • by roystgnr ( 4015 ) <roy&stogners,org> on Thursday July 20, 2000 @06:24AM (#918525) Homepage
    Because of that standardization, that's why most of the world's commercial software for desktop machines -are- being written for Windows

    No, most of the world's commercial desktop software was written for Windows, because *big drum roll here*... most of the world's commercial desktops run Windows!

    And that's not because of API standardization, or you would have seen people fleeing in droves at the Win16->Win32 switch which forced everyone to rewrite all their software. Borland's OWL libraries and Microsoft's MFC would have destroyed the Windows programming "community".

    That's simply because Microsoft managed to get contracts which put their software on the majority of clone computers, and because Microsoft allowed (some might say forced) network effects to turn that majority into a monopoly.

    The problem with Linux above the kernel level is that you can run into a situation of multiple competing APIs for most everything, which can become a bit of a programming nightmare.

    Bullshit. Name one GUI Linux program you've written. Did you try to write it using two toolkits? If not, then exactly how did the existence of whatever toolkits you didn't use make your life a "nightmare"? All it did was give you extra choices to find an API you liked best before you started to program.

    Remember, if programmers were forced to use one toolkit, we might be stuck using Xaw, Motif, or even Win32...
  • by Frank Sullivan ( 2391 ) on Thursday July 20, 2000 @06:25AM (#918526) Homepage
    Want ease of use? Conduct this little experiment, which i had to do for work purposes a while back... find me the name and length of the longest filename/path on your Windows box. This took me about three minutes in Linux (Unix), mostly waiting on the 'find' command to finish. Bonus points if you can do this in a reasonable amount of time in Windows without resorting to Perl or Unix workalike utility kits. Super extra bonus if you can do it without using a command prompt, but can get an authoritative answer from the gui. And, if you can't do this problem in either Windows or Unix, i'd suggest you aren't a sufficiently skilled computer user to really judge ease of use issues.

    As for vi versus Notepad... well, a friend of mine has a good ease of use formula. The proper measure of ease of use is the total time spent doing a task. The formula for this is T(l) + nT(d), where T(l) is the time required to learn a task, T(d) is the time required to perform it, and n is the number of times it is performed. So for tasks you rarely do, T(l) dominates. But for tasks you do often (like, say, the several hours a day i spend in text editors), T(d) dominates.

    The essence of this is that while vi is much harder to *learn* than Notepad, it is much more powerful as well, reducing use time. And if you spend several hours a day editing text (like most programmers do), the time to learn a more powerful editor is paid for many times over by the speed gain for complex tasks.
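    Plugging made-up numbers into that formula shows the crossover (the times here are invented purely for illustration):

    # Worked example of T(l) + n*T(d): vi costs more to learn but less
    # per edit, so it wins once n is large enough. Numbers are made up.
    def total_time(learn, per_task, n):
        return learn + n * per_task

    notepad = lambda n: total_time(0.0, 5.0, n)     # minutes
    vi      = lambda n: total_time(600.0, 2.0, n)

    n = 0
    while vi(n) > notepad(n):
        n += 1
    print(n)   # break-even at 600 / (5 - 2) = 200 edits; after that vi wins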

    This is why i recommend to friends who use computers daily, even non-programmers, that they take the time to learn Linux. Not because it's more cool or politically correct, but because it's more *productive*. The learning curve in the short term is paid for by productivity in the long term.

    And THAT, Young Jedi, is ease of use.

    --
  • by Anonymous Coward on Thursday July 20, 2000 @04:19AM (#918527)
    We've only ever claimed that it SUCKS LESS.
  • UNIX does suck. Everyone from Jamie Zawinski to Rob Pike has said this. The only way you can think it doesn't suck is if your experience is too limited. If, for example, you've only used Linux and Windows, then that classifies as limited experience, just as a programmer who's only familiar with C and C++ has limited experience.

    "Suck" in this case needs clarification. In general, no one says the Linux kernel sucks or that classic terminal-mode Linux sucks. The problems have come from trying to use this as a foundation for something more all-encompassing and modern. Is this possible? Sure, but it is difficult. Microsoft and Apple have had the advantage of only trying to support one graphics subsystem, one (admittedly huge) API, and one GUI. Linux developers have to build these layers themselves, and it is hard to keep from stepping on one another's toes. Gnome and KDE have the same DLL hell as Windows, only they're called "shared libraries" under UNIX :)

    Admitting there's a problem is much better than blind zealotry.
  • Reusable code isn't everything.
  • by Dr. Sp0ng ( 24354 ) <mspong.gmail@com> on Thursday July 20, 2000 @05:36AM (#918535) Homepage
    So Miguel started the Gnome project because of the licensing problems with KDE/Qt. Good for him. Gnome has some excellent ideas, but they've taken it too far. There's no reason to need a fast machine with 64+ Megs of RAM to support a desktop environment, for crying out loud. Now don't get me wrong, I like Gnome and I use it all the time, but it really is a bloated piece of software. I've even written popular software [sourceforge.net] for Gnome (sorry for the shameless plug :-), but I really think it could have been done much better and with fewer resources. The only reasons I used Gnome for PowerShell are that I like the look and feel of GTK+ much more than Qt or Motif, and that Gnome includes a terminal emulator widget which saved me a lot of work. If there were something else that provided similar functionality (of the stuff people actually use, not all of it) and looked as nice, I'd use that instead. Until then, I'll keep using Gnome, but it looks as though it's heading down the Microsoft path of bloated software with tons of rarely-used features.

    And why is it that Miguel is held in such high regard among Slashdot users? He wrote a fairly nice desktop environment. So what? So did the KDE team, but most people can't even name a single person who worked on that project. So he thinks Unix sucks? Good for him. Everybody is entitled to their own opinion, but that doesn't mean that they are right.

    </rant>
    --
  • On Windows and the Mac, there is a common printing model. On Unix, there's not; each application must generate PostScript itself.

    ---- ----
  • by Dungeon Dweller ( 134014 ) on Thursday July 20, 2000 @04:22AM (#918539)
    OOP really wasn't a phenomenon of the era in which UNIX was originally written. They didn't stress it like we do today (and we don't really stress it today). Also, a lot of the code in UNIX is written by a whole mess of other people. The piping features let you reuse entire programs within each other. I really don't think that UNIX is, like, the be-all and end-all of OSes, but it's not assbackwards; it's just kinda cute in ideology, and probably the best thing going at the moment (popularity + functionality wise). At any rate, yeah, he makes some good points, but I can't say that I entirely agree. If you want people to be working out of the same codebase, form a company, or increase communication at the start. Ad hoc programming and reusable code go together like oil and water.




    We're all different.
  • Sigh - another moron who thinks computers didn't exist before the peecee. Typical of slashdot. Have you ever heard of an OS called TOPS-10? It predated Unix, and did most things much better than Unix. Multics, another fine OS, also predated Unix.

  • Actually, most people do use Red Hat Linux because it has become an "IT manager approved" item due to the fact that you can get it preinstalled on machines from the likes of Dell and IBM.

    Because RH Linux is so widely used, when people think "Linux", the first company they think of for a commercial distribution -is- Red Hat.

    Anyway, because of RH Linux's open design, you can run whatever GUI you want on top of it, just as long as it's reasonably standards-compliant. I'm waiting for the Eazel "Nautilus" extension to GNOME that Andy Hertzfeld (one of the few people who really has a clue about proper interface design) is working on.

    Besides, if you have a Red Hat Certified Engineer (RHCE) certification chances are pretty good that any company that does a lot of work on the Internet will hire you almost on the spot. ;-)
  • UNIX is not playing catch-up on journalling file systems. Linux is, but UNIX had them years ago. For example, AIX's JFS.

    Compare Windows 2000, which has an anemic semi-journaled hack of HPFS called NTFS; Mac OS 9 and Windows 98/Me, which don't have one yet; and OS/2, which didn't get one until AIX's JFS was ported to it by IBM.


    Steven E. Ehrbar
  • Or how about this...the GUI is the text. Multiple windows of text ala an Xterm, clicking on the word disk0 or some such thing would open up another window showing you the contents of the disk0 object.

    Every piece of text is a mouse clickable object. If you type in disk0 it becomes a mouse clickable object which links to the contents of disk0.
    If I understand your point, I think you might be interested in Oberon [oberon.ethz.ch]. This is an OS designed by Niklaus Wirth back in the early 90s. I quote:
    The system is completely modular and all parts of it are dynamically loaded on-demand. Persistent object and rich text support is built into the kernel. Clickable commands embedded in "tool" texts are used as a transparent, modeless, highly customizable and low-overhead user interface, which minimizes non-visible state information. Mouse "interclicks" enable fast text editing. An efficient multitasking model is supported in a single-process by using short-running commands and cooperative background task handlers. The basic system is small - it fits on one 1.44Mb installation diskette, including the compiler and TCP/IP networking. It is freely downloadable (with source code) for non-commercial use.
    They've got a version up for Linux, as well as a "Native PC" one. Doesn't require a hell of a lot of hardware (e.g. a 486DX is okay).
  • Microsoft put GUI code into the kernel? Dude, are you fucking slow on the uptake? There's no GUI code in the Windows kernel. All the GUI code lies in Explorer and its libraries, not in the kernel, you dumbshit. You can very well mess around with the Windows GUI if you're so inclined; grab a copy of LiteStep if you don't believe me. The functionality does NOT lie within the OS, damn you're one stupid fucker. An application is only as functional and powerful as a programmer makes it. If a programmer makes it difficult for a user to use a program, that is not the fault of the OS. Is it Linux's fault that most people cry when they try to use vi for the first time? You'd cry foul if someone suggested it, yet you accuse everyone else of it.
  • I can't understand how people can be so supportive of something until someone critiques it, then they get up in arms as if you insulted their mother. It's fucking computer software, you pathetic fuckers. Miguel has an excellent point and a very valid argument. I've seen a few replies that counter Miguel's examples and arguments. That is well and good for said programmers and development teams, but what about everyone else? Who the fuck cares if the Motif guys have their act together if people can't print or use sound or 3D in their apps. Everyone rallies around modularity and choice, but take it to the extreme. If things are broken then fix them; don't defend something that you know is wrong.
    Miguel's point about Unix being stagnant is so true, yet people never fail to point out some "new and improved" copy of something that's been done before. The best thing to happen to Unix recently was Mac OS X, and before that it was probably Beowulf clustering. Instead of calling OS X Macix or something lame like that, they separated themselves from the guys at Bell Labs 30 or so years ago. Apple took a stable and mature kernel and built a user-friendly interface on top of it. Apple doesn't force you into a command line or make you edit things by hand using pico or emacs. In OS X all the configuration files are standardized and formatted using XML. Users of OS X don't have to think about what is behind their candy-shaped buttons and mp3 files. They press the power button and get to work or play. The open source alternatives take pride in their non-uniformity and command lines. You're not going to take over the world with terminals; you take over the world by making the computer transparent and letting the applications take center stage.
  • Even though you're probably trolling, I'll play along.

    Unlike Microsoft, Linux never claims to innovate.

    And no, adding some lame extensions to a web browser does NOT count as innovation, especially when Netscape submitted similar extensions to standards bodies before they added them.

    Mozilla's success and .NET's "innovativeness"/success are grey areas, so you shouldn't use either of these in argument - because they are not factual. .NET is vaporware, and Mozilla isn't finished.
  • Either you share the data between the two things that call each other, in which case it can get corrupted if you are using C/C++, or you don't share the data, in which case you need to copy and things are slow.

    That's the usual tradeoff. But it's escapable with hardware support. Consider a system call, where the kernel has access to the stack of the caller, but the reverse isn't true. Pentiums and above support a similar mechanism for several protection rings, so you could, with suitable OS support, have non-kernel things applications could call but couldn't crash.

    What you really want is a mechanism which can handle a crash on either side gracefully, just as a CORBA/Java RMI call can. This can be done inefficiently now, more efficiently with good OS support for call-type IPC, and very efficiently with some hardware support. For a while, I was thinking of working on L4/Linux [tu-dresden.de] and bringing it up to a usable state, but L4 isn't ready yet. Good ideas, though; it's a must-read for OS designers.
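    A sketch of the cheap end of that spectrum (plain sockets; the port number is arbitrary): because the caller shares no memory with the callee, a dead or hung callee surfaces as a catchable error rather than corrupted state.

    # Crash-tolerant cross-process call: no shared memory, so a dead
    # server shows up as an error you can catch, not corruption.
    import socket

    def remote_call(payload, host="127.0.0.1", port=9999, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(payload)
                s.shutdown(socket.SHUT_WR)
                return s.makefile("rb").read()
        except OSError:
            return None   # the callee died or hung; the caller survives

    print(remote_call(b"ping"))   # prints None if nothing is listening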

    The hardware needed is already in some SPARC CPUs. It's unused, because the Spring OS people went off and did Java, but it's there.

    As for getting people to convert to safe languages, I agree with jetson123 that it's desirable, but it's politically hopeless. I wish we were all using a Modula family language. I miss the Modula experience: once it compiles, it often runs correctly the first time. But even Compaq/DEC has shelved Modula.

  • by UID30 ( 176734 ) on Thursday July 20, 2000 @06:30AM (#918563)
    <rant>
    I maintain that Unix does not suck, but rather that it is beautiful in its flexibility. This bozo claims that its weakness is "not deciding policy" ... what kinda horseshit is that? That is its strength! If I'm working on a server, the last thing that I want to do is get bogged down in boneheaded system policies which were put in place by some software engineer who had NO idea what I was attempting to do.

    Go program for M$, Miguel.

    Unix (and for that matter the entire Open Source movement) is about freedom, not about having mission-critical decisions made by some corporate suit who, incidentally, is only interested in making their company more $$.

    I repeat, Go program for M$, Miguel.

    Miguel claims that a weakness of Unix is in not sharing more code between applications. M$ shares code extensively between applications ... Let's put this to a poll ... which is more stable: Unix or M$ Windows (choose your flavour)? M$ products are so tightly bound together, in the biggest cluster-f*** of shared code, that an "upgrade" in MS Word brings out a security exploit in Outlook. The patch for Outlook then breaks certain macros in Excel ... which is then addressed in the next round of patches. Meanwhile the poor outside company that is developing applications on the M$ platform (let's say Dreamweaver, for example) is left behind ... their product simply ceases to work as a result of the upgrade (and subsequent patches) to the M$ shared code base. Please ... tell me ... when was the last time upgrading StarOffice on a Unix platform conflicted with a previously installed Apache?

    Miguel obviously has a LOT of trust in M$ ... I suggest that, in the spirit of that trust, he get duplicate credit cards made and drop off the copies at the nearest M$ office.

    that and ... Go program for M$, Miguel!

    The Unix approach of not deciding policy is "a defense system for hackers," since that way nobody has to take responsibility for a bad decision.
    Making decisions (good or bad) and taking responsibility for them is part of being a functioning adult. The ability to make decisions is essential. To have the decisions already made and have no control over them is unacceptable. Any competent Unix sysadmin knows that security is his responsibility. A Unix sysadmin who has his boxes repeatedly compromised is likely to be out of a job before too long. When a M$ box gets compromised, it is no great shock; in fact, the sysadmin of that box can't be held accountable for a system over which he has no control ... and your only solution is to continue deeper down the M$ path. Who decides system policies on your server?

    I can't believe this idiot sucked me in and made me waste time stating the obvious...
    </rant>
  • by Majix ( 139279 ) on Thursday July 20, 2000 @05:38AM (#918564) Homepage
    Miguel mentions that "if you think GNOME sucks it's probably because you are using Red Hat's version". This is just so true.

    Why does the default desktop supplied by Red Hat have to be so ugly? The icons on the taskbar aren't even lined up properly, for Christ's sake, but seem to be placed at random. The theme is the most boring one possible, and the window manager's settings are enough to drive anyone mad. When you've installed the latest Red Hat you have to spend at least an hour to get the settings somewhat usable. Don't even get me started on the *totally* messed up Netscape fonts. What are people new to Linux going to think? They can't be expected to mess around with font paths and font servers.

    The point I'm trying to make is that Red Hat has just slapped on the latest version of GNOME available at the time, compiled it straight from its pristine sources, and added two links to redhat.com on the desktop. That's just not going to cut it, not this century. If you want to see how a desktop *should* look, straight out of the box, take a look at Helix Code's GNOME version [helixcode.com]. Now *that* is a good-looking and well-behaved desktop, a desktop I wouldn't be ashamed to show a user who knows nothing about Linux. First impressions are important! If Red Hat has any clue they'll be using Helix's versions from now on. They are a VAR, after all, so how about adding some value to the product! It costs them nothing.

    Okay, I'm done ranting now.
  • Personally, I think that the best approach for an application development framework is a server-based model like BeOS.

    And what is X if not a server-based system?

  • You really need to read the Unix Haters Handbook. They have a whole chapter on this, which demonstrates very conclusively that this is NOT the way to do reusable code.

    Yes, you can build up big complicated things by piping together commands, and redirecting stuff, and using sed and awk and perl and grep and find and all the rest.

    THIS IS A CRAPPY WAY TO WRITE SOFTWARE.

    If you change any component, it will break. It will not be portable, because every Unix out there has different options for all those commands, and they mean slightly different things. Even worse, there is no error handling. Since all your data is a text stream, dealing with binary data, or heaven help us, actual structured data (like records, or objects, or whatever your favorite language calls them), is painful or impossible.
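    To make the complaint concrete, compare scraping a text stream with calling a structured interface (a Unix-only Python sketch; the ls column position is exactly the kind of assumption that breaks between systems):

    # Text-stream style: parse `ls -l` output and hope every Unix puts
    # the size in column five. No types, no real error handling.
    import os, subprocess

    line = subprocess.run(["ls", "-l", "/etc/hosts"],
                          capture_output=True, text=True).stdout
    size_scraped = int(line.split()[4])

    # Structured style: a typed record with genuine error handling.
    try:
        size = os.stat("/etc/hosts").st_size
    except FileNotFoundError:
        size = None

    print(size_scraped, size)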

    You claim "we've got reusable code running out of our ears". Yeah right. I challenge you to build a sophisticated, portable, maintainable application out of that so-called reusable code.

    Even worse, there is no excuse for this state of affairs. Long before Unix took over, there were Lisp machines. On those machines, instead of text streams, there were functions. Functions with error handling, defined interfaces, and even fancy stuff like introspection.

    Unix could have been better.

    Nonetheless, I still like Linux better than anything else out there right now... because it has source, it can be improved. The object models that KDE and Gnome are moving toward sound like a great start. They may not be perfect, but hey... what is?

    Torrey Hoffman (Azog)


  • Yeah, I noticed that glossing over of the difference between applications that run on text streams, like cat, ls, grep and friends, and those applications that want a little more in the way of plumbing than is provided by a plain old "|".

    So, then, presumably if you want to change mozilla to use qt graphical componentry instead of gtk (assuming mozilla had a "generic" layer instead of hardwiring one of two competitive but similar GUI toolkits), would it be as easy as changing

    mozilla % gtk
    to
    mozilla % qt

    I don't think so. And besides, the sheer number of types of valves and pipefittings that would be required to express the relations for each kind of interaction would proliferate so fast that even a punctuation-hardened Perl programmer would start to weep.

    But then I got to thinking (dangerous, I know) -- why not just robustify the text stream concept into 2-way XML with some high-level publishing and querying of services that are offered/desired, letting the ends of the pipes (applications, components) negotiate the best fit. Kind of like pipes with glop on the ends?

    Loading up the apps and the components with pipe-end feelers would surely make them heavier, but at least they would fit more often and more easily than they do now. Some kind of testing and negotiation about what is offered and desired would probably provide some mechanism for resolving things like DLL hell, too.
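    A toy of that negotiation (the format names are invented for illustration): each end publishes what it can emit or accept, and the pair settles on the richest common format before any data flows.

    # Toy "pipes with glop on the ends": endpoints advertise formats
    # and agree on the best one both understand before connecting.
    producer_offers  = ["structured-xml", "csv", "text/plain"]   # best first
    consumer_accepts = {"csv", "text/plain"}

    def negotiate(offers, accepts):
        for fmt in offers:                # offers are ordered by preference
            if fmt in accepts:
                return fmt
        raise ValueError("no common format -- the pipes don't fit")

    print(negotiate(producer_offers, consumer_accepts))   # -> csv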

    The idea probably isn't new. I kept thinking of autoconf feature tests :)

    Also, while I haven't researched it, some of these ideas must lurk in either Jini or SOAP.




  • by BinxBolling ( 121740 ) on Thursday July 20, 2000 @06:39AM (#918589)
    The problem with *NIX (and he really doesn't mean *NIX - there's quite enough shared code in a console-only system, the problem is with the X apps he named) is that X windows is just an overgrown framebuffer, not an actual graphics and development kit. If you look at something like BeOS, it provides a whole bunch of "servers" (a microkernel design) that handle different functions, but X windows was an overgrown framebuffer stuck on top of a command line to provide a clock, a load monitor, and a terminal.

    True. When I read this:

    The Unix approach of not deciding policy is "a defense system for hackers," since that way nobody has to take responsibility for a bad decision.
    I couldn't help but think of X. The lousiness of that system is the best example of the problems that come when you avoid policy decisions. And the awful arguments made in X's favor whenever the topic of its suckiness comes up in Slashdot are certainly consistent with the idea that this avoidance of policy decisions is a 'hacker defense system'.

    Probably the best example of what Miguel is talking about is the difference between what you can do with cut-and-paste in X and what you can do with cut-and-paste in Windows:

    • X has more or less followed the Unix paradigm where 'everything is a file' -- which really means that everything is a flat, unstructured lump of text. Or at least, this is the only way that most programs have to interface with external data. This is fine when the data you're dealing with is relatively unstructured: a csv file is tolerable for, say, a list of phone numbers, and simple text-oriented tools like sed/awk/grep/perl are okay for most of the operations you'll want to do on such a file.

      But once you move to a graphical environment and thus acquire the ability to effectively represent much more structured data to the user, you need to provide higher-level interfaces to that data. Those text-oriented tools will be pretty much worthless if the file you're dealing with contains a description of a structured drawing. As a result of X's adoption of the Unix approach, all you can really cut-and-paste in X is (surprise!) flat, unstructured text strings.
    • In Windows, on the other hand, you can cut much more structured data to the clipboard. You can rip out a piece of a Visio drawing and stick it into Word, and it will retain all of its structure (see the sketch after this list). This is because Windows provides facilities that make it relatively easy for programs to expose higher-level interfaces to the data they generate. This permits a degree of application interoperability that X apps can only dream of.

      This pays dividends far beyond cutting and pasting: strong application interoperability means that you can easily access and 'reuse' an existing application's functionality. An example from my own experience as a Windows developer, a few years back: I once spent a month working on a project whose goal was to build a graphical scripting tool for a specialized purpose. Users would draw out simple flowcharts, then our tool would generate code from these flowcharts. Rather than build our own flowchart-drawing tool, we were able to use Visio: we designed a set of custom Visio shapes that users could use to draw flowcharts. Then, the development environment we'd built would send users into Visio whenever they wanted to edit charts. When the user was done editing, the development environment would talk to Visio via OLE automation, pull out a highly structured description of the flowchart (basically, a list of all the symbols and their types, including some parameters the user could specify, such as the conditional expression for a decision symbol, plus the links between symbols) and build a simple C++ representation of the chart that the code generator could then take as input. My job on the project was to build the layer that talked to Visio and built the code generator's input data structure, so I dealt pretty heavily with OLE. It worked out great for us, saving us an enormous amount of development time. And we ended up with a much higher-quality final product: instead of building our own mediocre tool for graphically editing flowcharts (which we would probably have ended up having to do if we were working in Unix with X), we were able to provide the user with the much more powerful, mature facilities of Visio.
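    A toy sketch of the mechanism underneath that kind of transfer (format names invented): the source posts several representations of the same selection, and each consumer takes the richest one it understands.

    # Multi-format clipboard: the source offers both a structured and a
    # plain-text representation; consumers degrade gracefully.
    clipboard = {}

    def copy_drawing(shapes):
        clipboard["application/x-drawing"] = shapes                  # full structure
        clipboard["text/plain"] = "\n".join(s["label"] for s in shapes)

    def paste(accepted):                  # accepted formats, richest first
        for fmt in accepted:
            if fmt in clipboard:
                return fmt, clipboard[fmt]
        return None, None

    copy_drawing([{"label": "Start"}, {"label": "Decision", "branches": 2}])
    print(paste(["application/x-drawing", "text/plain"]))   # structured paste
    print(paste(["text/plain"]))                            # plain-text fallback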

    I liked Miguel's comments. I'm glad to see that someone is willing to stand up and say that while the emperor may not be completely naked, he should probably put on some pants...

  • by DoktorMel ( 35110 ) on Thursday July 20, 2000 @06:40AM (#918591)
    And, if you can't do this problem in either Windows or Unix, i'd suggest you aren't a sufficiently skilled computer user to really judge ease of use issues.

    I would say that if you can do this problem you are too skilled a computer user to really judge ease of use issues. Ease of use is not for those of us who already know how to make a computer sit up and bark. It's for democratizing the power of the computer and offering it to that other 99% of the human race, the clueless (with the assumption that as a result, some of them will get a clue).

  • by shutdown -h now ( 206495 ) on Thursday July 20, 2000 @05:43AM (#918596)

    I think UNIX [bell-labs.com] did a lot to change the way OS design was viewed. UNIX treats everything as a file. UNIX focused on making a system with multiple users on the same system at the same time. (Multiprocessing, anyone?)

    I think the boys over in Murray Hill [bell-labs.com] are doing a lot now with Plan9 [bell-labs.com] and a few other ideas I sometimes hear they kick around.

    My question to all of you obviously more experienced coders out there:

    What's the next paradigm for creating the next less sucky OS?

    Treat everything as a data object? a module?

    I don't know. I would love to see an OS based on a functional programming language. Something small and compact without too much bloat to it. Code up a decent GUI as well. Or how about this...the GUI is the text. Multiple windows of text ala an Xterm, clicking on the word disk0 or some such thing would open up another window showing you the contents of the disk0 object.

    Every piece of text is a mouse clickable object. If you type in disk0 it becomes a mouse clickable object which links to the contents of disk0.
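    As a toy of that idea, here's a Python/tkinter sketch in which every word in a text pane is a clickable object (the "contents" are faked, of course):

    # "The GUI is the text": every word in the pane is tagged and
    # clickable; clicking "disk0" opens a window listing its contents.
    import tkinter as tk

    root = tk.Tk()
    text = tk.Text(root, width=40, height=5)
    text.pack()

    def open_object(word):
        win = tk.Toplevel(root)
        tk.Label(win, text="contents of %s: foo.txt bar.c baz.h" % word).pack()

    for word in "disk0 disk1 readme".split():
        start = text.index("insert")
        text.insert("insert", word + " ")
        tag = "obj_" + word
        text.tag_add(tag, start, start + " wordend")
        text.tag_bind(tag, "<Button-1>", lambda e, w=word: open_object(w))
        text.tag_config(tag, foreground="blue", underline=True)

    root.mainloop()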

    Perhaps we would arrive at a new GUI or a new concept that either makes more sense to users or is faster to operate, with a minimal learning curve.

    A natural language based OS?

    A user can type in his questions (eventually speak to the computer via voice recognition) and receive textual and aural outputs from the machine. E.g., "Computer, please tell me the contents of disk0." "The contents of disk0 are: foo.txt, bar.c, baz.h."

    Eventually somebody or something has to sit down and figure out a different way of looking at the data we are presented with, and see if it makes more or less sense than what we currently have.

    I don't know who that somebody is but I think it won't kill me to sit down tonight and see if I can come up with a few ideas.

    I'm thinking about using a functional language because it forces me to look at things slightly differently than when I write C code.

    Anyone else have any ideas or pointers to projects currently looking at stuff like this?

    It would be a nice project to jump in to, no?

    Dan O'Shea

  • I'm glad someone has finally taken the time to point out the truth. Linux is NOT a platform for innovation. Why? Well, first off, it's based on an OS whose design originated more than 20 years ago.

    Now a lot of you don't believe Microsoft has made or ever will make any effort to innovate. Those of you that do believe that need to remove those pink-tinted glasses that have become eternally attached to your face and finally face the truth of the matter.

    Like it or not, your favorite monopoly is innovative. Though whenever they are innovative - the so-called community here on Slashdot is able to twist and turn it into something vile and evil.

    Don't believe me?

    Every time Microsoft decides to enhance the functionality of their web browser (which is used by 86% of the internet population), the Linux community whines and complains that Microsoft is simply manipulating publicly mandated standards in order to raise the value of their stock options.

    The fact is however, Microsoft does own 86% of the market as far as web browsers are concerned (over 98% when it comes to Operating Systems) and in order to keep that position they have to do something that anybody in the Linux community has yet to even dream of doing: INNOVATE.

    Look at that sorry POS software you all so lovingly refer to as Mozilla. That's probably the most dismal failure of the Open Source movement yet. (Though I assure you it will not be the last.) Ever since Internet Explorer 4.0, Netscape has simply been unable to compare in feature set, speed, stability, or ease of use.

    This brings me to my next point, however..... has anybody here heard of Windows .NET? Gosh - that idea sounds pretty damn original to me - though I'm sure you guys will figure out something to pass it off as yet another "Microsoft ploy to control the world". I'm also almost 100% sure that they stole the idea from some half-starving Linux developer who spends his few minutes of free time after contributing to the absolute purity of the Open Source Movement begging the local populace for a few pennies so that he can feed his poor family.

    BS.

    Over the last five years, Linux has been playing catchup - whether to Microsoft, SCO, or somebody else, it doesn't matter. At present your favorite penguin-sponsored OS is suffering from feature and driver deprivation. This will continue to be the case until you people begin to see things as they are rather than as you want them to be.

    Remember - not only does X suck (being one of the most insecure programs on the face of planet earth) - but so does KDE, GNOME, StarOffice, GREP (never did care for that one - who the hell came up with that name anyway? Some of those Berkeley boys must have been toking when they came up with that one)..... need I say more?

    Oh and by the way - your flames may be directed to: darkgamorck@home.com

    Gam
  • by RayChuang ( 10181 ) on Thursday July 20, 2000 @05:43AM (#918603)
    You just hit it right on the nose.

    You can flame Microsoft all you want, but the fact that Windows has a single WIN32 API drastically simplifies program development and software driver development. Because of that standardization, most of the world's commercial software for desktop machines -is- being written for Windows.

    The problem with Linux above the kernel level is that you can run into a situation of multiple competing APIs for most everything, which can become a bit of a programming nightmare. That's why people are gravitating towards supporting the Red Hat, Caldera, S.u.S.E. and TurboLinux commercial distributions: at least you'll know what APIs to program for with each commercial distribution of Linux. Is it any wonder that Red Hat has become the "de facto" standard for Linux almost everywhere?
  • I think windows is successful because it came at the right time and with the right mix of new technology, ease of use and reliability.
    Well, I think Windows is successful because it's the successor of DOS (in fact, DOS is still in there, for the most part, despite what MS says).

    MSDOS is successful because the company that created the architecture that is now the most popular in the world initially only sold systems with MSDOS. The architecture is successful because it was dirt cheap (relatively speaking) when the clones hit the market.

    I think saying Windows is successful is like saying the color NTSC broadcast signal in North America is successful - after all, that's what all the stations broadcast, and that's what all the TVs receive. The color signal is horrible because it had to be compatible with the black and white signal, which still takes up half the bandwidth. The entire color portion of the signal gets only the other half of the bandwidth.

    I'm not saying Windows is bad, and I'm not saying NTSC is bad, I'm saying they stuck with backward compatibility instead of creating a better technology from the ground up. Both could have been so much better. Some things allow for being extended, some things are kludged into being extended. NTSC signal and Windows are examples of the latter.

    Miguel is right, but I don't think it's about Unix as much as it is about the computer/human interface. Windows DOES do it right - there should be one printing dialog; applications should share interfaces to the hardware - printing, scanning, the graphics subsystem, etc. But I don't think it's Unix so much as it is the X Window System. I believe it's a layer in between. Actually, so does Miguel. He shouldn't have said that Unix sucks; he should have said X11 needs a framework to allow portable and reusable code. After all, like Windows, when you quit the GUI, you are stuck with DOS services. If you use an old Word Perfect under DOS, you still need its collection of fonts and its printer drivers.

    I agree there should be a combining of efforts into one user interface, compatible window managers, and so forth. I also think there should be (and will always be) a set of applications that don't fit this framework. There's nothing wrong with the Unix mentality of "do one thing and do it right", even though the existence of a do-it-all framework would be a great addition. Just don't force anything on anybody - that's the Unix way.
    ----------

  • by Veteran ( 203989 ) on Thursday July 20, 2000 @05:44AM (#918607)
    Unix does not have reusable components for the same reason that life doesn't have reusable components.

    Think about how easy life would be if only we could reuse existing components. For example, I'll build my life by taking the 'Bill Gates wealth component', the 'Alan Cox programming component', the 'Jean-Claude Van Damme appearance component', the 'James Bond suave component', and the 'Sarah Michelle Gellar girlfriend component'. Nice life, huh?

    Of course, if everyone else gets to build their life the same way, it becomes a mediocre life not worth living. If everyone gets to choose to be as wealthy as Bill Gates, then everyone is equally poor; prices would skyrocket until a loaf of bread was a billion dollars.

    If everyone could program like Alan Cox there would be no demand, or use, for you as a programmer. Why would anyone get you to do the coding when they could get any of 6 billion people to do it?

    Unix provides a stable base and a uniform API for applications; good design decisions flourish, bad ones die out.

    The problem with the reusable component approach is that it requires bad design decisions to flourish. If there is a poor design decision made in a commonly used component it can't be corrected because of the number of programs it would break if it were changed. Instead of the fittest surviving, the most popular survive. What is worse, there is no basis for comparison and improvement, all programs take on a uniform boring sameness; there is no good or bad to choose from, and learn from. No evolution can take place.

    What the component approach does is guarantee that bad design decisions live forever, because no one knows they are bad.

    Component programming is like a good-looking but heartless woman; it looks great at the start of the relationship, but the marriage is a horrible one.

  • actually, win2k does do things differently. That's why it's so much more stable. You can read about it; search MSDN for the "end of DLL hell" article (I don't have it handy). This protection causes other problems, but solves that one. And it allows a program to put an older DLL in its app directory and load that if it needs it.
    ---
  • I think it's important to keep compatibility at the lowest possible level. It's important to resist conformity adopted just for the sake of convenience rather than for technical reasons.

    Case in point: I recently installed Oracle on Slackware. Oracle recommends installing it on RedHat, and has a nice bundling arrangement. I'm sure most corporate Oracle for Linux users will stick with Red Hat. But the fine print reads that all Oracle needs is a recent kernel, compiler, and C library. In practice, it was necessary to add a few symlinks to mimic the Red Hat locations of some basic tools, but other than that it was fully compatible. Oracle uses NOTHING that is unique to RedHat, but they make a point of only supporting that distribution.
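
    The shims themselves are nothing exotic - just symlinks. I won't swear to the exact paths Oracle's installer probes, so treat this as a hypothetical sketch:

    ln -s /usr/bin/fuser /sbin/fuser    # hypothetical: put a tool where Red Hat keeps it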

    My whole decision to run Slackware rather than RedHat was that if I wanted default decisions to be made without my knowledge, and GUI-only configuration, I would have stuck with Windows.

    Reusable code is fine, but like someone already mentioned, console-only *nix gives you that. I don't understand why the "Desktop Environment" projects feel it's necessary to re-implement everything with a GUI. Do we really need a GUI to dial into an ISP, when we can just as easily run a script from either xterm or (gee-whiz) a window-manager-configured root menu or hot-key?

    If we're just trying to mimic what Microsoft has done with Windows, we will only look comparatively sloppy and inconsistent. IMHO, the beauty of Unix and Linux is the Unix philosophy. Take how *nix handles email: sendmail is pretty standard, but there are alternatives, and your decision to use one of those doesn't affect who you can communicate with, or which clients the user must use.

    I think people should be able to choose their window manager without having that affect what applications they can run. They should be able to choose between several different browsers, email clients, instant-messaging clients, file managers, terms, menus, etc. The re-use of code should be on the lowest possible level, so that these choices can remain independent. If I am forced to choose between All-GNOME or All-KDE, I would choose neither.

  • I work on corelinux (http://corelinux.sourceforge.net [sourceforge.net]), which is aimed at providing the very things Miguel is complaining about (reusable components).

    I still think that reusability is best achieved using C++ and OO languages.

  • Unix is not a platform of innovation.

    Take the biggest development in all software markets in the last five years: the internet.


    Unix was a platform for Internet innovation 15 years ago, and Web innovation 8 years ago. What Internet innovations would you be referring to IN THE LAST 5 YEARS? EMACS 21.20030341458587? NcFTP? All of the really cutting edge work (Apache's sub-projects, IPv6, component development models, high end filesystems, etc) is either being developed as cross-platform projects that UNIX is only one target for, or is something UNIX (and Linux) are playing catch-up on (e.g. journaling filesystems).
  • You'll notice that I said "Now, on the other hand, Unix applications until very recently did not have the cross-communication problem that Windows apps had" before the line you quoted.

    Until very recently (the advent of Linux on the desktop) Unix was primarily used by developers and systems administrators. These are people whose primary tasks can all be solved by either editing text files or piping together applications on the command line. There was and still is no major need for developers and/or sysadmins to be able to embed applications or objects in one another in a GUI.

    On the other hand, several end user applications can benefit from being able to embed applications within each other and share data in a uniform manner. That is why I noted that maybe it is time for the paradigm to shift.

    PS: Of course there are many pitfalls that have to be avoided, such as the library version conflicts (Windows DLL hell) that occur when an app is upgraded and uses more recent components than the others on the system.

  • Well, when my company presented me with a stock install of Solaris, it came with vi and ed. No sign of emacs or pico (which I built for myself).

    Certainly it had a couple of graphical editors, but quite honestly I'm happier with pico.

    However, to get gedit or gnotepad to work, I suspect they need to be compiled etc... This involves me first finding a copy of sunwspro with which to compile gcc, then using that gcc to attempt to get the open source stuff to compile... but hang on, I don't have root access, so I have to piss about with paths for ages.

    Quite honestly I've never got anything more than about 50k of source to compile properly here. It's a bit demoralising.

    At least solaris does have fairly good binary compatibility but then all the binary packages assume you are root.

    And with respect to the comment about end users not being able to install win98 - you are wrong.

    Considering that a win98 install comprises putting the CD in, turning on the PC (possibly setting the BIOS to boot from CD first), then clicking next several hundred times.

    You'd be amazed what end users can do when their windows installation crashes before a big deadline. I've even known mothers of my friends to be capable of a reinstall... and that's saying something :)
  • I have news for you.

    Care to explain why Red Hat Linux has become the de facto standard for Linux? The reason is very simple: IT managers want -standardization-, which drastically reduces support and programming costs.

    Because the likes of Dell Computer and IBM are big supporters of Red Hat, the fact that Dell and IBM will provide technical assistance in supporting Red Hat Linux means instant credibility for Red Hat in the corporate world, and it's probably the big reason why Red Hat Linux is the current de facto standard.

  • It's open source. Do something about it. If you don't like it, change it. If it's broken, fix it. It's the open source mantra.

    I've been thinking about this a lot, lately, and I've come to the conclusion that this really isn't true anymore. The Bazaar's been bought out, leveled, and turned into a strip mall.

    For example, here we already have two groups (GNOME, KDE) whose architectures, approaches, and hidden assumptions are basically entrenched in the marketplace. The "community" has already decided that we shall use CORBA (with all that entails). It's already been decided that we're going to use the same basic windows/mac/amiga hybrid interface (look and feel between KDE and GNOME are basically the same IMO). Other window managers are begrudgingly supported, but each environment has a definite pressure towards the One True Window Manager. It's already been decided that the ideal free office suite is essentially going to be a pale copy of Microso~1's suite... I could go on but you get the point.

    Honestly, I'd love to help change this, but think about it. If a third team came from out of nowhere and proposed/implemented a simpler component architecture that wasn't so tied to one GUI (or tied to a GUI at all -- GUIs should be wrappers, not core software), or tied to one huge set of libraries, that didn't require developers to buy into one overarching desktop environment... or that wasn't subsidized by RedHat, TrollTech or Corel for that matter... what do you think would happen? It would go undernourished and die a slow whimpering death, amid cries of "but we already have one component architecture too many!"... assuming that anyone noticed it at all.

    There's no point, anymore. It's actually become a very repressive and stifling environment. It's the 1980s all over again.

    Hmmm... does Miguel have the courage to take a step towards consolidation?

    To hell with consolidation. Some of us still believe that UNIX is about innovation, diversity, and beautiful, sweet, ubiquitous chaos. >}:)

    :Michael

  • HUGE DEAL

    It provides a level of abstraction on top of /dev/fb0 which gives you NETWORK TRANSPARENCY. When you write an application on X it can display on any X server.

    It may not be perfect, and it may not be fast, but it's a lot more than you make it out to be.

    It's a much better platform to target an app kit at than /dev/fb0. And Xlib is significantly more complex than /dev/fb0 anyway. X performs over a network. Compare that to VNC, which is much closer to being just a framebuffer.

    Better to have many thin layers of abstraction implementing standardized interfaces than one BIG standard API to do everything. This is what we've learned from n years of software engineering.
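
    For anyone who hasn't tried it, the network transparency really is a two-liner (hostnames made up):

    xhost +big-server                  # on your desktop: let big-server draw here
    DISPLAY=your-desktop:0 xclock &    # on big-server: the clock appears on your desktop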
  • The most exciting thing about the Unix philosophy is the way small components can be strung together (with scripts and pipes) to easily create complex applications. What if this design goal could be moved out of the realm of the command line, and directly into the world of the GUI? If, as Miguel states, the large Linux apps can't reuse code, they don't have to follow the Microsoft solution of DLLs (and the version control problems they create); we already have the mechanism in place. We just need to be true to the Unix redirection standards in the design of the larger components. With visual tools to expose the larger app's components to wiring, relatively novice users could discover the power of scripting. For example, the output of a spell checker component could be wired to an insertion point in text. Or an entire spreadsheet could be inserted into a document, using standard text and XML formatting.
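
    The command-line ancestor of that wiring already exists, of course - something like this (file names hypothetical):

    spell report.txt | sort -u > typos.txt    # one tool's output wired straight into the next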
  • OK. Apparently I didn't make myself clear - it provides no more application-building functionality than a raw framebuffer. Xt, an add-on, provides a basic widget set, which is one piece of the puzzle. Network transparency is nice, but it hardly qualifies X to be a whole application-development framework.
  • It's called COM. Take DirectX. Now in its 8th major revision, the library has almost been rewritten, but still maintains backwards compatibility without sacrificing size and speed. The problem with UNIX, though, is not that. It's the numerous, incompatible libraries that do the same thing.
  • Obligatory [catalog.com]
  • The suckiness or non-suckiness of UNIX all depends on what you're using it for. Given that Miguel comes from GNOME, inherently a desktop project, it is understandable that he isn't too fond of UNIX, because for those tasks, it DOES suck. The problems he pointed out have long ago been addressed by COM, OLE, OpenDoc, etc. on other platforms.

    The desktop user and the traditional UNIX user are inherently quite different. Desktop users often use a series of different applications, all upgraded often, and all interoperating with each other. Desktop OSs, like Windows, Mac, and OS/2, were designed with these needs in mind. However, UNIX was designed for a radically different purpose. Let's look at the design environment of UNIX. It was used originally at the Bell Labs think tank, where people used it to work on their own projects. These people didn't have a spate of applications to work with; they used UNIX as a standard platform for their own projects. Thus the high degree of flexibility and personal "sculptability" that UNIX enjoys. Later, UNIX found use in custom solutions such as in AT&T's network. Further still, people designing back-end servers found that the fact that UNIX ran a single application (or a tightly interwound team of applications) well for months on end made it perfect for server tasks. The "traditional" uses of UNIX all have this in common.

    However, as I already stated, desktop uses are quite different. These days you have games, word processors, web-browsers, art programs, audio programs, etc., that are all used on the same system, and all have to share resources. In the traditional UNIX environment, you could get away with the multitude of libraries that the applications needed, because you were often running one fixed set of applications. However, that same method applied to the desktop user wreaks havoc. This is why I complain that I have dozens of libraries on my system. Along with two glibcs and libstdc++s, I have two Qts and KDEs, one gtk+ and gnome-libs, tcl, tk, and numerous other support libraries. Problematically, I have to use them all AT THE SAME TIME. My audio programs use Tk, I use KDE2, but KDevelop uses KDE1, and GIMP uses GTK. In the traditional UNIX environment that was fine. You used the machine for software devel, and you ran only one set of libs, because your apps didn't need other ones.

    So, UNIX does suck in ways. Though the Linux community has done an admirable job making it competitive with Windows, the design is still flawed (for desktop use). The sheer bloat of the software is the only clue you need. What should be quite a thin, light system (after all, Linux is a fairly svelte kernel) becomes a system nearly as bloated as Win2K! However, that's just a side effect of trying to shoe-horn UNIX into something it wasn't designed to do.

    That said, UNIX doesn't suck. BeOS couldn't serve my little brother's web page, but that doesn't mean it sucks. It simply wasn't designed to do that.
  • You're getting into a chicken and egg routine here. He's saying that the Win32 API is the standard because Windows is the standard, not the other way around. I think he's technically correct on that since Win32 forced everyone to rewrite all their Win16 software. Why did they rewrite all that software? Because everybody runs Windows.

  • I think that OOP does encourage more reuse. I think that the practices of programmers are what limit the amount of reuse: the need to position oneself corporately, or a sort of competition between programmers ("Look, I wrote more code"). Most universities these days teach programmers to look objectively at such situations in the courses that emphasize the management aspects of software engineering. Traditional methods of payment emphasize work done, but by what metric do you measure the work done in a program? If you reuse someone else's code, by traditional practices, you get paid less. We learned years ago that paying for every bug found and fixed was a bad idea; people programmed bugs in on purpose, so that later they could find them and be paid. If you take these factors out, OOP DOES encourage more reuse; history has taught us this.


    We're all different.
  • Isn't this akin to saying that, in the modern era of 747s and stealth aircraft, the Wright Brothers' Flyer apparently was built wrong from the ground up?
  • by Greyfox ( 87712 ) on Thursday July 20, 2000 @06:03AM (#918665) Homepage Journal
    As alt.sysadmin.recovery is fond of pointing out, every OS on the planet sucks. Some suck more, some suck less but they all suck.

    I love Linux for its flexibility. Drop the kernel in and everything else is optional. Want the standard UNIX utilities? Add 'em. It's optional. It's all optional. No one dictates that policy. That means I can install Linux on my embedded device and leave off 98% of the crap you get in a standard distribution, hack some sort of GUI out. GGI or X on GGI or X on custom hardware. It doesn't care. No one set a policy dictating things. But wait! I don't want a window manager on my embedded hardware! NO PROBLEM! I can make my own UI!

    Griping about the flexibility that makes the system great is stupid. Remember the Chinese guy from UHF? Let's all face Miguel and say it together: "STOOOOPID! YOU SO STOOOOPID!"

    Tongue firmly in cheek, of course.

    Anyway, now that we've got that out of our systems, the point about component programming is valid. The text tools are designed to be simple and flexible, but the GUI is a relatively new add-on and is in some ways more primitive than Windows 3.1. I've complained about the lack of a decent print subsystem myself. And GUI apps tend to try to do more than the simple text-based ones. I think many people view X as nothing more than a way to keep 10 or 15 text terminals in view at once.

    Thing is, this is all going to get fixed. Several companies are working on the printing problem. Once they all screw it up and present 15 different conflicting standards, some group of free programmers will get pissed off enough to write one from scratch. X could go away as well. Much of the new software is GTK based, and porting GTK should be as easy as porting GDK and a bit of other stuff. ORBit doesn't rely on X, and most of the Gnome stuff builds on GTK.

    UNIX may suck, but unlike the competition, UNIX is going to get better.

  • UNIX is a late '70s time-sharing system buried under layers of cruft. Some of the basic design decisions are obsolete or didn't scale, and yet nobody can fix them.
    • Text-file based system administration has to go. A good idea at the time; a lousy idea today. Early UNIX had only /etc/passwd. Today, there are dozens to hundreds of text configuration files. Converting them to XML isn't a fix. The OS needs a database: fast-read, slow-write, and very reliable. NT added the Registry, which was a good idea misused. This area still needs a major rethink. It's not glamorous, but it's the cause of most UNIX trouble.
    • A program is not just a file. The MacOS has the most coherent idea of what an application is: it's an executable file with attached resources placed anywhere, plus a set of preferences in a document in the system preferences directory. Deleting the preferences document is always permissible, and returns the application to its newly-installed state. This model gets rid of most of the need for installers, uninstallers, and similar crap. (The MacOS has other conflict problems, but they stem from the fact that down at the bottom, the MacOS is basically built like DOS.) This is much better than the UNIX or Windows models, which involve pathnames, registry variables, little text files, and various other crap.
    • UNIX interprocess communication sucks. In the beginning, there were pipes and signals. Trying to do anything complicated with just those, and getting the error cases right, was next to impossible. A few generations of hacks later, it's possible, but still hard. The basic problem is that what you want is a subroutine call, but what you get is an I/O operation. Trying to build CORBA, COM, OpenRPC, etc. on top of UNIX pipes or sockets is slow. Interprocess subroutine calls need to be designed in as a basic primitive. Take a look at L4 or QNX.
    • Naming is basic. Operating systems have lots of named objects, but they aren't always in the same name system. This is the current struggle in OS design, with Microsoft pushing various "active directory" concepts. Think of directories as being independent of the file system; UNIX lets you put devices in directories. Arbitrary objects (CORBA, sockets, maybe even URLs) should be placeable in the name system. And security should be built around this.
    • Somebody has to be responsible for security. And it ought to be the OS. UNIX has far too much trusted code. (So does MS Windows. The MacOS doesn't even have a security model.) Go look at Multics to see it done right.
    If you get these things fixed, the upper layers can be much less messy. But so much has to be changed to fix this that it's a big job.

    On the other hand, the open source community has the advantage that any one group can lay its hands on all the source. So if some group undertook to clean up Linux and produce "Linux 2", they could do so.

  • This is probably the most intelligent comment I've read in this whole discussion. As far as the window manager running as root, that wouldn't even be necessary-- the X server already has to run as root, why can't it just chown /dev/gui to the user that started it, then let the user processes like the window manager create and delete things in there as needed? Then allowing other users access to given windows is as simple as a chmod on /dev/gui/win42 or what have you. Subwindows could be dealt with by having /dev/gui/win42 actually be a directory with as many named pipes as needed, and subwindows being a window directory inside that one. Borrow the NeXT philosophy where a directory can be seen as an object in and of itself, but still also seen as a directory when needed.
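
    To make that concrete - /dev/gui and everything under it is hypothetical here, a sketch of the proposal rather than a real API - the permission games would just be ordinary file operations:

    chown alice /dev/gui           # X server hands the tree to the login user
    chmod g+rw /dev/gui/win42      # share one window with a group
    ls /dev/gui/win42/             # subwindows show up as entries inside it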

    Anyone? Bueller?

    ---
  • Most of us use linux not because of technical or moral reasons but because we like the interface better.

    This is simply not true. Linux interfaces have been awful, oh so unimaginably awful, until recently. Now they've moved up to mediocre :) I don't mean this in a flaming way at all; I just didn't think I'd ever see someone stand up and proclaim Linux to be wonderful based on the user interfaces available for it.

    Windows standardized their interface and thus restricts the user.

    This is a myth, or at least we have yet to see any evidence to the contrary. Linux provides a broad, empty canvas for interface designers, yet we haven't seen anything innovative or especially slick from a usability standpoint.

    I am very afraid that this flexibility which linux possesses will be destroyed by gnome/KDE. As these projects progress, more and more programs which could have been implemented on the command line are implemented in gtk. Soon I won't be able to access my settings except by dialog boxes and I will once again be trapped in windows hell.

    This is a classic overreaction. If you have access to a terminal window--heck, even OS X on the Mac will have this, though it won't be advertised to the general public--then you can do whatever you want.

    I agree, though, in that KDE and Gnome are generally poor interfaces to standardize upon. It's not at all clear why one would choose a Linux/X/Gnome combination over, say, Windows 2000 or NT. The Linux kernel is cleaner and more stable, yes, but that doesn't matter when you put millions of lines of code on top of it. Instead of the kernel crashing, part of Gnome crashes. Better? Yes, but not something to get excited about. Perhaps what is needed here is a simpler GUI that would be a better base standard than X and a minimalist window manager.

    A standardized interface means several things. It means no competition which stagnates development.

    Linux GUI development is already stagnant. Years and years are spent in an effort to get the same functionality as crusty old Windows. I wouldn't be at all surprised if Microsoft, who can certainly afford to pay respected experts what they deserve, manage to move GUIs to the next level before any open source project does.
  • "The major point is the lack of reusable code between major applications..."

    I agree that this can be a problem if you are writing a "major application" under Unix. You constantly come across problems where you think "surely this has been solved before".

    And GNOME is doing an admirable job of solving some of these problems in libraries - unfortunately, they are still global solutions. You have to buy into GNOME to an unacceptable degree to get the solutions to work. For instance, you have to use GTK, CORBA, etc, etc, etc.
    --
  • Quite often newer things are better.

    Consider all the fuss people here make about the latest GHz chip and the cool-looking mac range of toys.

    Older != Better != Newer
  • Simple. If you just have to dig a pond *once*, then of course the shovel is easier. But if you dig ponds every single day, then the bulldozer is easier. Remember the formula T(l) + nT(d): the one-time cost of learning the tool, plus n repetitions of doing the task with it. Do you perform the task a lot? If so, then the time required to learn a better tool is paid for by the time saved using it - if the bulldozer takes ten hours to learn but saves an hour per pond, you're ahead from the tenth pond on. Seems simple to me.

    The problem with Windows is that everything is either easy, or *impossible* (or at least extremely difficult). I've worked for years with both, and I've spent far more time beating my head against a wall trying to get Windows to do even trivial tasks, if no Microsoft engineer thought of the task before I did. The joy of Unix is that I can easily combine tools to perform just about any specialized task I can think of.

    Yeah, I'm a power user. I'm an experienced programmer. But AS A POWER USER, I consider Windows to be downright user-hostile. At this point, I would not take a job that required me to use Windows rather than Unix/Linux as my primary interface. I'm far, far more productive at my Linux box.
    --
  • No. Unix does suck in this area.

    Your understanding of components is limited. "Stringing together programs" is not components.

    Why? Well, a lot of reasons. One: There is no error handling. Piping text is one-way. There is no way to pass error messages "back" down the pipe. Have you ever tried to debug a complicated shell script that uses a bunch of pipes? (See the little demo further down.)

    Two: Pipes can't really pass structured data. How do you push a linked list through a pipe? How about a hash table?

    Three: All of those "components" are really programs, and their communication is inefficient. Running your example would start at least three full-blown programs, and these programs have to communicate through text files - so data gets copied repeatedly from one address space to another. This is inefficient. If it was a real program using real components, the data would be loaded into memory once, and each object could access it. Much faster!

    Yes, on Unix, everything is a file. Probably a text file. So Unix is really good at text files. Did I mention that it's also really good at text files? Yup, if you want to process text files, Unix is great!

    X Windows is only the most OBVIOUS problem, because when you are working with graphics, piping files just doesn't cut it anymore. But if Unix had been built on a real component model, and not just the idea of piping text files around, then everything would be better, even at the command prompt. And, XWindows could have used a more powerful system, and then it would have had real cut and paste, real printing capability, real font handling, etc.

    So if you agree with Miguel about X, you should also agree with him about Unix in general. The problem with Unix is that the unidirectional piping of unstructured files is not a powerful enough communications model.
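
    A quick demo of the error-handling point, in a plain Bourne shell - a pipeline's exit status is that of the last command, so upstream failures simply vanish:

    grep pattern /no/such/file | sort
    echo $?     # prints 0: sort succeeded, grep's failure is silently lost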

    This is why Miguel likes Windows: despite all the problems, Windows at least has real components. For TEN YEARS NOW, since Windows 3.0, you have been able to paste a complex object like a drawing into another complex object, like a spreadsheet.

    Unix has only begun to pick up this capability in the last year with KDE and Gnome! Ten years late! (More if you consider the Macintosh!)

    Torrey Hoffman (Azog)
  • by codemonkey_uk ( 105775 ) on Thursday July 20, 2000 @04:29AM (#918692) Homepage
    It's open source. Do something about it. If you don't like it, change it. If it's broken, fix it. It's the open source mantra.

    Actually, Miguel is one of the few people who is in a position where doing something about it is actually feasible. Whatever happened to that KDE & GNOME common component architecture? That would have been a step in the right direction.

    I do believe that there is too much ego flying about for a lot of good things to get done. It takes a big man to climb down and say: okay, let's merge. Let's reuse. You can do it better than me. And with OS development kudos is currency, and to lose ego is to lose currency.

    Hmmm... does Miguel have the courage to take a step towards consolidation?

    Thad

  • by 11223 ( 201561 ) on Thursday July 20, 2000 @04:30AM (#918695)
    The problem with *NIX (and he really doesn't mean *NIX - there's quite enough shared code in a console-only system, the problem is with the X apps he named) is that X windows is just an overgrown framebuffer, not an actual graphics and development kit. If you look at something like BeOS, it provides a whole bunch of "servers" (a microkernel design) that handle different functions, but X windows was an overgrown framebuffer stuck on top of a command line to provide a clock, a load monitor, and a terminal.

    The terminal does just fine with the components it has. There are quite a few shared libraries, and for (for instance) printing, everything uses lpr - plain and simple. But a drawing model like X does not an application kit make.
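
    And that one shared path really is simple to use - e.g. (file names hypothetical):

    lpr report.ps              # queue a file on the default printer
    groff -ms notes.ms | lpr   # or pipe a formatter straight into the spooler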

    Personally, I think that the best approach for an application development framework is a server-based model like BeOS. In Windows, programs duplicate functionality that's handled by one server in BeOS. Linux (and UNIX) is a great command-line environment, and provides a rich environment on top of that. Just don't use X for anything more than xterm, xclock, and xload.

  • Code reuse and encapsulation/componentization are in direct opposition to monolithic interdependencies. If they are actually doing this, then I suggest they are not designing correctly. Encapsulation should *avoid* interdependency, not increase it. This is all partly due to the awful X, which is agnostic about GUI semantics. I say stop attempting to build good stuff on mushy cruft. Rip out the cruft and start with a solid foundation. That GNOME and KDE are *another* level on top of widget toolkits, window managers, and X is just too much. Some truly common infrastructure should be built, not just happily chugging along on divergent paths, then building weird bridges to be "compatible".
  • by WebSerf ( 91322 ) on Thursday July 20, 2000 @07:59AM (#918703)
    I think Miguel is ignoring the fact that when Unix was developed there was no concept of "one huge app". The whole philosophy was based on small utilities that you chained together using pipes. This constraint was enforced by the limited hardware available at the time. In that sense the "atom" of reusability is the utility program itself and so Unix really did have good reusability for what mattered then. Add to that the standard system libraries and you had a ready code base for the creation of new utilities.

    Today, people want to build GUI apps, and he is right to say that UNIX lags behind Windows in reusability in that regard. But this is clearly not a "design flaw", just the lack of a widely used toolkit of common objects.

    Miguel is also ignoring the fact that a closed, tightly controlled platform like Windows will always have a higher level of uniformity (and reusability) than an open platform, which must rely on de facto standards rather than the "king's edict", so to speak. In that sense, then, openness is a design flaw. No, I don't buy it either... Gnome is on track to provide the kind of high-level reusable objects he wants. He should stop whining and write code.

    --


  • >"Matthew Gream is a goat fucker."
    >-Richard M. Stallman, 1996

    You should try it sometime, it may relieve that anger that you have built up.

  • Although I agree that in terms of ease of use, reusing code in the Microsoft tradition is a good thing, I'm somewhat worried about having too much of it. Maybe I'm wrong, but increased standardization and code re-use eventually just leads to a more homogenized user experience. If Miguel got his wish, wouldn't we (non-programmers, that is) be more tied down to the standard framework? Isn't it dangerous to put all your eggs in one basket like that?

    Sorry if this makes no sense; I just got to work, and it's early. ;-)

  • I don't agree with your opinion, and on one of your points in particular I have to speak out. You state "text-file based system administration has to go" but personally I'd rather have that than some kind of opaque registry. I don't mind if somebody builds a nice easy GUI interface to those files, and I may even use it if it makes my life easier, but once something breaks I want those files to be readable and FIXABLE with a text editor and the mark 1 eyeball - so that when the system is flailing around in agony and crashing about my ears I can get it into single user mode, grab a tool that I can count on to work even when everything else is pretty much broken, and at least get my system to a point where it will boot normally. I'm sure you can come back to me and point out that simple command line tools could be built to do that with any file format, but it misses one big advantage of plain old text - the humble comment. If all my config files are pretty much self-documenting (which they should be if I'm doing my job right!) then I can do things like


    # yes I know it aint standard, but dont 'fix' this
    # it breaks xyz if you do!

    and be a little more confident that I or a colleague won't forget that little wrinkle and step into the same gotcha later.
    # human firmware exploit
    # Word will insert into your optic buffer
    # without bounds checking
  • by Carey ( 2195 ) on Thursday July 20, 2000 @04:31AM (#918715)
    The statement that UNIX has been wrong from the beginning is not what he said.

    What he said is that there is no innovation going on in UNIX, and that a number of its fundamental features, while attractive to our community, are preventing the whole world from using the operating system.

    He cited Apple's work on MacOS X as an example of a team that changed some of the fundamental kernel designs on behalf of "end-users".

    Miguel's big point is that there isn't a component model and code reuse simply doesn't happen. He is right on the money with that.

    However, I don't know about the solution of just copying COM/ActiveX/OLE, especially when Microsoft is now dumping COM in favour of its .NET architecture.

    I suspect Java is in the Linux desktop future whether people want to admit it or not. The Java2 integration on MacOS X that was demonstrated at JavaOne shows how obsolete Microsoft's component model for applications is.

    In the rest of his keynote he talked about innovation in specific applications such as mail and the whole INBOX/foldering problem. I hope GNOME (and now SUN and StarOffice/OpenOffice) can address some of the design problems with Microsoft Office.

    He did say UNIX sucks, and he is correct - many things about it do - but there is suckage on every platform. His point was that we have to fix the things that suck on UNIX; he is not advocating re-doing it from scratch.
  • by sillysally ( 193936 ) on Thursday July 20, 2000 @04:31AM (#918719)
    If we believe Miguel's opinion of Linux vs. Windows development, Linux is going to lose. In fact, his argument is so strong that, by it, Linux couldn't even be where it is today, because five years ago Windows was even farther ahead with more reusable code.

    More evidence of Miguel's genius can be seen in his critique of Unix in general. Unix is not a platform of innovation. Take the biggest development in all software markets in the last five years: the internet. Unix could never have produced the innovation of the internet...

    Miguel's a little confused.

    It drives me nuts when people who are a little bit smarter, like Miguel, start to think they are really smart, because while he can see problems, he is still not smart enough to see solutions. Allowing for many many window managers is not a mistake, it's the trend: think about skins. No, the problem is that the developers who are writing all the window managers keep starting from scratch, or pay little attention to the other window managers. For example, I like the focus to follow the mouse. I'd like to set that one time in one place, then experiment with different window managers to see which I like (today... :) But you see? That's a simple solution to a problem. There's no need to throw the baby out with the bathwater, which is what Microsoft did. Microsoft was a unix systems house back when they produced DOS, and many features of DOS were modelled on Unix. It took them years and years to reintroduce simple things like memory management and multitasking, and then they set off to create NT, an OS that nobody even wants to clone.

    Yep, it's true that some areas of Unix are very weak, like printer drivers, but that's more a reflection of the culture: Unix isn't used on office desktops much. Windows has equally glaring deficiencies: think of how much Windows code gets "reused" every day by hackers exploiting the security holes :)

    Nope, Miguel, you are not onto anything big, just another Dvorak in a different suit.

  • by DG ( 989 ) on Thursday July 20, 2000 @04:33AM (#918720) Homepage Journal
    While there are some nice features about components, like anything else it's possible to have too much of a good thing.

    Taken to extremes - like our good friends in Redmond - you wind up with many, many applications depending on a large number of common components, with (here's the kicker) at times incompatible APIs. Need BeltchWord 5.0 and FlatuanceDraw 6.2? Can't do that if they each want different versions of the same component.

    And then you get situations where an application upgrades a component that the OS/Window Manager depends on.... version control lunacy.

    I believe this is called "DLL Hell" in Windows circles.

    No thanks Miguel. I like and use GNOME, and I look forward to useful things like a common GNOME printing model, but I also very much indeed like the current UNIX way of doing things with regards to the window manager, X, and the kernel.

    Some may see 20 years of development as "stagnant", but I see 20 years of continuous evolution. Cockroaches haven't changed much in 20 million years, because they don't have to - they're pretty damned efficient as shipped.

  • The problem is, companies (and shareware authors!) don't want to deal with install problems, so they say "I KNOW my software works fine if X, Y, Z versions of X, Y, & Z .dlls are installed. So I'll put those in my package and set 'always overwrite.'" Even worse, they'll up the dates and version numbers so later installers that DO version and date check fail to overwrite, because they'll say "oh, that's a newer version!". I've personally verified a program (commercial!) that wrote a data access .dll with a fraudulent date and version on it, so I can tell you it does happen. Having spent three weeks dealing with installer problems for a 200-seat rollout of a program that only took two weeks to write, I can sympathize. As I said, I don't think Win2k's approach is perfect, but it's better than relying on the ethics and knowledge of every joe schmoe who uses the P&D wizard in VS and uploads that freeware program somewhere! I know Win2k is a LOT more stable, and I attribute that to the system .dll protection.
    ---
  • I think the most important of your points is that UNIX IPC sucks. You are right that you want a procedure call, but what you get is an I/O operation. And the reason for that is that unsafe languages like C/C++ require hardware memory protection to keep them from crashing each other. So, to move any data, you have to go through the kernel or fiddle with shared memory.

    Not necessarily. What you really need is some specialized hardware support for IPC. Pentiums and above have some of what's needed; look into "call gates" and "task gates". SPARC V9 and above have more of it, as a spinoff of the Spring OS project at Sun. In time, I think we'll see this, called "COM/CORBA acceleration". It's about time for a CPU enhancement that benefits somebody besides gamers.

    All those COM/CORBA/OpenRPC/JRI mechanisms are, at bottom, subroutine calls. There's a slow setup/object creation/directory mechanism that has to operate before you make the subroutine call, but once that's out of the way, everything looks like an ordinary subroutine call. What's needed is a very efficient mechanism for those calls once all the setup is in place. L4, QNX, EROS, and Spring all achieved this in under 100 instructions without special hardware, so it's possible. What's needed is to get the cost down to the level of a DLL call.

    Once you have this, writing component software looks much more attractive. Right now, there's a big performance penalty for breaking an app into components like this. If that went away...

  • by jetson123 ( 13128 ) on Thursday July 20, 2000 @08:11AM (#918738)
    Yes, C/UNIX has ceased to be a platform for innovation. In fact, arguably it never was a platform for innovation.

    But if Miguel wanted to help improve the situation, why did he go off developing such a huge software project in C on UNIX? It is C that makes component based development such a pain. C lacks even minimal support for component based development (e.g., no dynamic typing, no reflection), and it is impossible to make large, component based systems in C both robust and efficient: there is no fault isolation--a bad pointer in one component will crash other components unless you put them in separate processes.

    The answers to these problems are well known. Systems like Smalltalk-80 and the Lisp machine were fully integrated, component based environments where everything talked to each other. And almost any language other than C and C++ is better for component-based development and provides reuse.

    Microsoft does not have the answer. Microsoft's component model, COM, has very serious problems. It's complex because the languages it is based on don't have any support for component-based development. And despite its complexity, it is still dumbed down, because anything else would be unmanageable in C/C++. And it has no fault isolation, meaning that if you load a bunch of COM components and your program dies, you have no idea what went wrong.

    In fact, UNIX had an excellent, reusable component model: command line programs that interchange data with files. That's no good for building a graphical desktop, but it was excellent for the UNIX user community--people manipulating lots of data. And that model has been extended to graphical desktops and networked systems in Plan 9 and Inferno, which also address many of the other problems with C/C++ through Alef and Limbo. Or, alternatively, Objective-C and OpenStep managed to build something that support powerful reuse and component based development on top of UNIX. And Java is excellent at supporting both component-based programming, reuse, and fault isolation.

    If Miguel genuinely wants to improve the situation, why isn't he using the tools that will let him do so? Why isn't he learning from the long history of component-based development that preceded both him and Microsoft? Why is he copying Microsoft's mistakes and mediocrity? Why isn't he supporting tools that genuinely make a difference rather than encouraging the use of tools (C/C++) that were never intended for this kind of work?

    People say about democracy that "it is the worst form of government, until you have tried the others". I think the same is true about UNIX. Gnome and GTK help improve the usability of a flawed tool. As such they are really welcome. But by not addressing the root causes of the problems, we'll probably be here discussing the very same problems again in another 15 years, because everything people complain about in UNIX was known 15 years ago, nobody fixed it, and it (and its clone--Windows) still became immensely popular.

  • by (void*) ( 113680 ) on Thursday July 20, 2000 @07:13AM (#918739)
    I see, and I suppose that MS has a Steganographic File System [cam.ac.uk]?

    All this innovation for the sake of innovation is stupid. Innovations must solve problems. Go ask Ross Anderson how he designed the system. Did he slap code together and say "there, I call it the StegFS", or did he pose a problem about an issue that encryption does not address, and then propose a solution?

    OTOH, MS coming out with "focus" control technology is just that - a hammer in search of a nail. MS, with their backwards, marketing-directed software development, is causing the software industry to go in circles - going nowhere.

  • by delld ( 29350 ) on Thursday July 20, 2000 @08:21AM (#918771)
    Rather than having a nice set of reusable stock libraries, such as the win32 api, or gtk, we need a standard broad interface definition. I want to take program Z and replace its file browser dialog. I want to have a standard OS-wide spell checking utility. I want to be able to change window managers. I want to be able to drive every single program through scripts. I want to change my UI. I want to change all of this on a whim - some parts should be changeable dynamically, others might need a recompile. I do not want to be stuck with some guy's "good idea" of a usable interface.

    My point is that most people here are saying we need a complete set of standard libraries. I am saying we need a complete document describing how standard libraries should interact. Then we build hundreds of libraries to this standard. The Unix way, where everything is a file, is a very basic implementation of this - say I do something like: someprog | sed | cut | awk > file (flags stripped) for some task, but find that sed|cut|awk is not powerful enough. An hour later I can do someprog | perl > file. No changes to someprog, no changes to the file, no changes to the mythical peon who depends on post-processing the output of someprog.
    I want options. I do not want some idiotic stock library designed by some fool.
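
    A concrete (made-up) instance of that swap - list the users who are running processes; the producer and the consumer never notice which middle you picked:

    ps aux | awk '{print $1}' | sort -u           # the cut/awk style
    ps aux | perl -lane 'print $F[0]' | sort -u   # an hour later, the perl style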
  • by grahamsz ( 150076 ) on Thursday July 20, 2000 @04:38AM (#918773) Homepage Journal
    A few observations:

    unix wasn't the first operating system in the world

    unix will not be the last either

    as time goes by, better ways of implementing things are discovered. Whilst windoze might not have the best underlying operating system, I feel that it does have a far better user interface than any linux/unix variant. Sure gnome looks pretty, but that's just your aforementioned flash.

    To be fair, windows does make it possible for end users to set up and work a PC without the amount of technical knowledge required to install linux.

    Let's face it, most people do find dragging files into a bin easier than remembering to use "rm -r foldername". Personally I like command line stuff, but that's just me.

    If windows is so bad then why do more people use it than linux?
  • by shippo ( 166521 ) on Thursday July 20, 2000 @04:39AM (#918779)
    I've felt the same way for a long time.

    Most Unix applications share little or nothing with each other, save for the C library and X libraries. Everything else appears to be an attempt to re-invent the wheel, sometimes coming up with an eccentric triangle instead.

    The main advantage is that if a security hole or bug is discovered in a library, a replacement library will resolve the majority of problems. A certain $oftware company does this a lot. The other advantage is that it saves memory.

    Gnome appears to be doing more than KDE in this field. Run ldd against a typical Gnome application, and a whole host of component libraries will be linked in - Imlib and others for image rendering, GtkXmHTML for HTML, Gtk and libgnome of course, and so on.

    Gnome is standardising on which libraries to use. Unix libraries have become fragmented, with many features duplicated between competing libraries. The present situation elsewhere is a mess, due to it not being controlled.

    The only other environment I can see that does something very similar is Perl, with standard modules available on CPAN. Python may do the same, but I haven't looked at Python closely enough.
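
    Try it yourself - gnumeric here is only an example, and the output is trimmed and illustrative, since the exact list varies from system to system:

    $ ldd /usr/bin/gnumeric
        libgnome.so.32 => /usr/lib/libgnome.so.32
        libgtk-1.2.so.0 => /usr/lib/libgtk-1.2.so.0
        ...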

  • by codemonkey_uk ( 105775 ) on Thursday July 20, 2000 @04:39AM (#918782) Homepage
    So what does that have to do with OOP?

    Although you don't explicitly state it, you seem to be implying that OOP encourages more reuse than other programming paradigms. Now, while OOP does encourage more reusable code to be written, it has not been shown that this actually generates more reuse in practice.

    Thad

  • If UNIX was built wrong from the ground up, why has it survived for the past, oh, 30 years?!?
    My maxim about antiques is: "It's not good because it's old, It's old because it's good."

    UNIX did a lot of things right. If you look at what Miguel had to say, he's looking for more code reusability. Unix did it at the program level; now he's asking for it to be done at the functionality (sub-application) level. He's actually asking for an extension/deepening of a core UNIX principle, to where we could/should have taken it a long time ago.

    It just got a bit stagnated because of the closed-sourcing of UNIX back in the '80s.

  • by raptor21 ( 47540 ) on Thursday July 20, 2000 @08:30AM (#918793)
    What he is saying is.....

    I have a crappy graphics card, so my whole computer is a piece of crap!!!

    Just because UNIX lacks some reusable code in its graphical shell, it sucks? What about the fact that I can do almost everything I need to maintain a system over a serial port?

    Unix needs a lot of changes in order to become a desktop OS. UNIX was designed for mainframes three decades ago. X and desktops came into existence decades afterwards. Miguel's analogy is like saying a 1960 automobile sucks because it does not have airbags. But the basic engine and chassis design is the same; today's cars just have improvements.

    Reusable code.... Just count the number of OSes out there that were built using a UNIX kernel. UNIX must have done something right.

    I wouldn't say X sucks; I would say X is too old for today's standards. Just like a PDP-11 is old by today's standards.

    What the *nix world needs is a newer graphical shell that defines a standard API that people can utilize. You can write all the Window Managers you want as long as you conform to that API.

    The API should include:
    1) A unified standard printing architecture.
    2) Reusable components for the primary functions of applications.
    3) A standard for user interface (menu options etc.), like Edit->Preferences, and not Tools->Options and File Properties and every other place.
    4) A standard method for software installation: src goes here, binaries go here, and so on. An API to make installation easy, such that icons get put in the menu and links get created automatically on the desktop.

    All this and many more standardizations are key to Unix's entry onto the desktop. Standardization does not mean one window manager, but that the basic UI should remain consistent.
    The only reason people like windows (yes, seven out of ten people I talk to think windows is great) is that it functions the same everywhere it runs. Most people don't want to learn every option on every application and every platform. Trust me, I have experience with computer novices. They want consistency.

    Till we realise this and look at it from a consumer's point of view, I don't see unix or linux on every desktop in the world.

  • by Shotgun ( 30919 ) on Thursday July 20, 2000 @04:41AM (#918796)
    You need the features provided by ls combined with the features of grep....

    ls ???? | grep ?????

    You want to combine a whole bunch of components? You can use a shell script or even perl.

    We've got reusable code running out our ears.
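    For example, here is one concrete way those blanks get filled in, composing the two existing tools with no new code at all:

        ls -l | grep '^d'    # reuse ls and grep together: list only the directories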

  • >X has more or less followed the Unix paradigm where 'everything is a file' - which really means that everything is a flat, unstructured lump of text

    That's not really what "everything is a file" (EiaF) means. EiaF is really a pretty low-level thing, meaning that all sorts of objects - files, devices, fifos - live in a common namespace and are accessed via a common set of syscalls - open, close, read, write, ioctl. This was actually an advance over earlier operating systems, which often required that you use different syscalls to get different kinds of descriptors for each kind of entity, and which had multiple namespaces as well. Ew. You can see the power of EiaF not so much in UNIX itself, which contains many deviations from the principle, as in Plan 9, which was the "next act" for the UNIX principals.
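    A quick shell illustration of the point (the paths are just examples): the same tools, built on the same open/read calls, handle three very different kinds of objects:

        cat /etc/hostname                 # a regular file
        head -c 8 /dev/urandom | od -x    # a device node, read like any file
        mkfifo /tmp/p                     # a named pipe in the same namespace
        echo hi > /tmp/p &                # writer blocks until a reader appears
        cat /tmp/p                        # the very same cat as above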

    There are a couple of other principles that you seem to be confusing with EiaF, and I think it's worth discussing them too. One is the idea that files should be unstructured. Again, this is a low-level idea, this time referring only to the "physical" layout of files and to the filesystem interfaces. As a filesystem designer and implementor, I can say this principle is very important. Filesystems have quite enough to do without having to worry about different record types and keyed access and so on - as many other OSes (most notably VMS) did. Man, was that a pain. What gets built in user-space, on top of that very simple kernel-space foundation, is up to you. More complex structures have been built on top of flat files since the very first days of UNIX (e.g. dbm files).

    Another related principle is that data should be stored as text whenever possible. This is an idea that's gaining new life with the widespread adoption of XML to represent structured data, and again it's a good one. Doing things this way makes it much easier to write filters and editors and search tools and viewers and such (or to perform many of these tasks manually) than if the data is all binary. It makes reverse engineering of file formats easier, which is a mixed blessing, but it also makes manual recovery and repair easier. Converting to/from text also tends to avoid some of the problems - endianness, word size - that occur with binary data. Obviously there are many cases - e.g. multimedia files - where conversion to/from text is so grossly inefficient that it's not really feasible, but in very many other cases it's just a pain in the ass for the next guy when some lazy programmer decided to dump raw internal data structures in binary form instead of doing it as text.
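    To make that concrete (the file names are illustrative): because the data is text, the stock tools apply with no special parser, and byte order never enters into it:

        grep '^Port' /etc/ssh/sshd_config    # search a text config directly
        diff -u notes.txt notes.old          # diff, patch, and editors all just work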

    In conclusion, I'd say that by all means people should try to retain the structure of data. Even better would be if the means for manipulating data could be provided and linked to the data itself in some standard way, like OLE/COM does in the MS world. At the very least, even without a common framework, it would be nice if more programmers would provide libraries with which to manipulate their data files. But please, let's do all this on top of text wherever possible, and let's do that in turn on top of a flat-file kernel abstraction within a single namespace. These are some of the more important principles that led to UNIX being such a success.

  • by DJerman ( 12424 ) <djerman@pobox.com> on Thursday July 20, 2000 @08:39AM (#918810)
    A *nix version of Visio, for instance, could spew all the information you need about the diagram as a text stream and, as long as the format and structure of that stream are documented, you would have all the functionality that OLE provides.

    Close, but not quite.

    What's really needed is a component model (like Bonobo) and a standard URI-type reference that defines the component in terms of the content to be displayed, like OLE uses. So, if you cut-and-paste from the diagramming tool, you should get a snippet of XML that identifies the Bonobo component that is needed to display and/or edit the diagram, and the description of the diagram data. That way your componentware program can display the diagram exactly the way the diagramming tool can.

    In addition, Windows permits various rendered versions of the data to be included in the clipboard structure, so in the hypothetical Linux example your XML snippet would probably define:

    A text representation (required)

    A Bonobo reference with data (optional)

    A PNG or other graphic (encouraged)

    A space for both standardized and application-defined extensions (SVG, MPEG, binary data structure, URL, etc).

    That would be pretty much analogous to the Clipboard. Ideally, a negotiation could take place to prevent clipboard-overloading (just the Bonobo invocation interface and the minimal definition of the clipping bounds is passed to start, and the request is resolved between apps without the framework in the way), but that would require sharing the clipboard-access code :-)
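    A rough sketch of what such a clipping might look like; every element and attribute name here is invented for illustration (this is not a real Bonobo or clipboard format), and the payloads are elided:

        cat <<'EOF' > /tmp/clipping.xml    # hypothetical payload, shown for shape only
        <clipping>
          <text>network diagram: router, switch, hosts</text>        <!-- required -->
          <component ref="diagrammer" optional="yes">...</component> <!-- Bonobo reference plus data -->
          <preview type="image/png">...</preview>                    <!-- encouraged graphic -->
          <extension type="image/svg+xml">...</extension>            <!-- app-defined extras -->
        </clipping>
        EOF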

    Miguel and the rest of you are, of course, free to attend to the small matter of implementation :-

  • by Black Parrot ( 19622 ) on Thursday July 20, 2000 @04:44AM (#918833)
    Unix's problems come from its longstanding approach of not deciding policy. The kernel does not decide policy; neither does the C library, the X libraries, or the window system in general. The people who decided that X users could pick their own window manager created a situation where there were many, many window managers to choose from; "they were smoking crack."
    So what's he offering to do? Start "deciding policy" for us? Is this a thinly veiled excuse for heavy-handed GNOMification of existing apps like xscreensaver, rather than the more sensible solution of letting them be visible through GNOME?

    But the real problem, according to Miguel, is that the Unix approach does not lead to any sort of significant code reuse. A list of modern applications was presented; it included names like Netscape, Acrobat, FrameMaker, StarOffice, and others. The amount of code sharing between those applications was pointed out as being almost zero.
    Well, duh. Did he expect independent commercial software shops to share their code with each other?

    Miguel is an open admirer of how Microsoft does software development.
    Someone please tell me this is a belated April Fools joke!

    He goes on to make reasonably valid points about how "reusable components" are available under Windows. What he misses is that this puts other software shops completely at the mercy of the components' owner, Microsoft. Is he proposing a Unix where everyone is similarly dependent on GNOME's components?

    OK, GTK+ and Qt provide some nice reusable components. The advantages are obvious. I use them myself. So why is he dredging up all this irrelevant/clueless/scary stuff?

    I am a GNOME user, and often defend it when it is unfairly maligned, but I don't think I like the way this is headed. No, not at all. Hopefully he's just talking out his ass rather than presenting a carefully thought-out position.

    --
  • by CMiYC ( 6473 ) on Thursday July 20, 2000 @05:01AM (#918844) Homepage
    We've got reusable code running out our ears.

    Did you even read the article before posting this? When he said we are lacking reusable code, he mentioned APPLICATIONS like Acrobat, StarOffice, and Netscape. Aside from the C libraries, there is *NO* re-used code between any of those applications. His example was printing. Each application has its OWN printing system, configuration, and method of working. The sad thing is, they all pretty much do the same thing... generate a Postscript file.

    He isn't talking about ls, grep, cat, cut, paste, and UTILITIES like that. He's talking about full-blown applications. You know, applications... the things that people have to have to USE their computers.
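    For contrast, the classic shared path those applications bypass: anything that can emit Postscript can hand it to lpr, so printing is one reusable layer (the file and printer names are illustrative):

        a2ps -o - report.txt | lpr -P office    # text -> Postscript -> the shared print queue

    The complaint is that each GUI application rebuilds that pipeline privately, with its own dialogs and configuration.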

    ---
  • by Phaid ( 938 ) on Thursday July 20, 2000 @04:54AM (#918874) Homepage
    One of the main reasons Unix is so fragmented and inconsistent, which really is what he's complaining about, is that the whole system (kernel, libs, user interface) has never been under a single entity's control. Someone above cited MacOS X as a great example of what Unix can become if it's done right. This is true -- it's easy to be consistent when a single entity controls every aspect of the platform. The problem is, that's not what most Linux users want.

    Where he is completely wrong is his claim that Unix is no longer a platform for innovation. He's got that completely backwards -- indeed, the whole reason for the inconsistency of user interfaces is the very openness and relative simplicity of Unix. Each layer is separate from the next, so it's easy to write a new GUI system on top of the OS without changing any of the underlying layers. And people have done just that, which has led to several generations of X and other apps lying around (Xaw, Motif, OffiX, etc) -- people see a problem with the existing GUI and they reinvent the wheel, leading to a proliferation of incompatible interfaces.

    Hmm, just like KDE and Gnome.

    The upshot is, because it's open, we have a choice. And choice can lead to inconsistency. So if he wants to work on a platform where everything will always be consistent, he can go work for Apple or Microsoft. Otherwise, he'll just have to make Gnome so good that no one will want to use anything else, because there isn't any way to shove things down people's throats in the *nix world.

    And that's a Good Thing (tm).
  • by CountZer0 ( 60549 ) on Thursday July 20, 2000 @05:06AM (#918875) Homepage

    As a developer I refuse to link my applications with GNOME because it has taken a few good concepts and gone WAY overboard. GNOME initially seemed to be a set of developer guidelines to promote a common look-and-feel. A few "meta-widgets" were created on top of Gtk+ to promote this. (gnome-stock-this and gnome-stock-that)

    This was good. Then someone decided to go even further. More widgets were added. Many of these widgets should have been added at a low level (read: Gtk+) but instead were added at the GNOME level. Now you have widgets that depend on gnome-libs, and a fairly incestuous circle is starting to emerge where GNOME depends on GNOME, and it's getting so complicated that no developers I know are willing to shackle their projects to the great beast that GNOME has become.

    Miguel and Co. can't see the forest for the trees. I recently ripped the GNOME out of GtkHTML and created CscHTML (http://www.cscmail.net/cschtml [cscmail.net]). Miguel and several of the other GNOME developers couldn't comprehend why anyone would do such a thing. They couldn't understand the need for a non-GNOME-dependent HTML widget. They couldn't agree that a "Gtk Widget" (GtkHTML) shouldn't depend on GNOME. Circular dependencies are a bad thing. GNOME depends on Gtk. GtkHTML depends on GNOME. Chicken, egg?

    Code re-use is a good thing in moderation. Not every hunk of code needs to be a re-usable object, and interdependencies can be bad if they get out of hand (which they clearly have in the case of GNOME). Miguel has stated many times that the dependencies in GNOME will only GROW as time goes on. He sees interdependency as a wonderful thing, and is so hell-bent on code re-use that he is turning GNOME into a huge monster of code that no one wants to link to, because no one wants to depend on 20 or 30 different libs. GNOME needs to be split: some of its libs more appropriately belong in lower-level widget sets (such as Gtk+), and some of its items should be stand-alone utilities. Trim the fat from GNOME and maybe developers would start to use it again.

  • by RickyRay ( 73033 ) on Thursday July 20, 2000 @05:12AM (#918881)
    Probably the _only_ reason M$ has done as well financially as it has is reusable code. While I love Linux, I find it hard to write commercially for it since nothing you write ever seems to be compatible. Regardless of the toolkit you use (KDE, Gnome, etc.), it's always the wrong one for somebody. Compatibility = show me the money!

    There actually was a Unix derivative that did it right, but didn't catch on: NextStep. BSD kernel, with incredible development tools and standard libraries. With it you could throw together a professional application in hours/weeks instead of months/years since it handed you all of the primitive elements you could ever need in a consistent way. Much of Java was actually inspired by Next's tools (to be honest, Objective C is actually superior in some aspects to Java, and yes, it's OS-independent). Whether they admit it or not, all of the modern development tools (KDE, Gnome, M$ Visual Studio, etc.) are using more and more of the ideas inspired/stolen from NextStep.

    It will be interesting to see how Apple's move to a NextStep derivative works out. Because they're working to maintain backward compatibility, MacOS X is probably an inferior design to the original NextStep, but it's certainly an improvement over existing MacOS versions.
  • by Carnage4Life ( 106069 ) on Thursday July 20, 2000 @04:56AM (#918903) Homepage Journal
    Wow, it seems Miguel was more taken by Microsoft and COM/COM+/DCOM than was obvious from the last time he mentioned components on slashdot [slashdot.org]. Miguel is right that Unix would benefit from a component model but he needs to put things in historical context.

    COM is descended from Object Linking and Embedding, which was a way to have objects created in one application be reusable by another. Basically, MSFT's entire component revolution can be traced back to the "drag and drop an Excel spreadsheet into a Word document" problem. Everything that has occurred since then, COM+ (reusable components independent of language), DCOM (distributed reusable components), and now .NET (cross-language inheritance of objects/components), can all be traced back to trying to solve that problem and variations thereof. The early implementations of COM were not some grand engineering effort to create a modular, componentized system, but sophisticated hacks to solve the drag-n-drop problem. This is not to say that MSFT's COM has not come a long way; after all, it has enabled them to create what has been described as the largest software engineering feat of all time [inet-one.com]. 35 million lines of code and counting.

    Now, on the other hand, Unix applications until very recently did not have the cross-communication problem that Windows apps had. Everything is a file; if I want to communicate between applications, I simply use a pipe. All files I could possibly want to edit can be viewed in Emacs. To put it simply, there was no need for a reusable component model just to share data between applications.
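    For instance, a one-liner in which three unrelated programs cooperate with no component model at all (real commands, arbitrary example):

        who | awk '{print $1}' | sort -u    # logged-in users, one per line, deduplicated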

    Now, decades after Unix was invented (and it predates Windows and COM by over a decade), maybe the time has come for that paradigm to shift.

  • by angelo ( 21182 ) on Thursday July 20, 2000 @04:58AM (#918914) Homepage

    And a mantra is all it is.

    There is a very small core taking care of the software. A lot of us are users who, at the end of a day working with computers, perhaps just want a free OS to check mail and surf the web. For the most part we don't want to put in another 8 hours debugging un-mapped, un-documented, and un-planned code. For the most part we run the most "stable" version of a program, collected by a package tool.

    Every once in a while we may compile something. But for the most part, we have neither the time nor the inclination to code. This may explain the popularity of Netscape 4.x, AND the lack of programmers for Mozilla. The lack of eyeballs is due both to the "works good enough" mentality from years of commercial OS use, and to the above-mentioned apathy.

    If you complain, then fix it. If you can't fix it, find someone who can, or email the primary author. If they give a nasty response, then use another program. This is certainly possible with 10 ICQ programs, 5 Napster clones, 3 Gnutella clients, and 15+ browsers. There is your freedom.

  • by Chops ( 168851 ) on Thursday July 20, 2000 @05:13AM (#918916)
    In my view, a lot of the things Miguel is talking about stem from people abandoning Unix's traditional lots-of-little-pieces philosophy just because what they're building is a GUI application. We need to go back to the shell-scripts-and-files analogy, instead of copying Windows' APIs-and-shared-libraries model.

    Think about it -- it's silly that GUI programs are calling something that looks "internal" to them to pop up a dialog box. They should be issuing a shell command, like 'dlgmsg "Your repartitioning is complete." -b OK'. Or 'dlgmsg "Do you want to purge your deleted messages?" -b Yes -b No'. /proc is massively useful; why don't we have /dev/gui/? It seems to me that the whole Window Manager Bloating Wars came about because we chose to ignore the features of Unix that would have made it easy. Why do we have window handles instead of files (i.e. named pipes created by kde)? Why is changing a window's menus any more complex than 'menubar /dev/gui/win46 -m 0 File -mi 0 0 Open...'? Why is listening for window events harder than parsing /dev/gui/win46?

    I know it's a hell of a lot more complicated than that, of course, and I can see a lot of flaws and complications in the above... but hell, maybe the window manager should have to run as root anyway (sarcasm). Does anyone know of a project that tried to do something like this?
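    For what it's worth, something close to that dlgmsg already exists: xmessage (a real, if primitive, X client) reports the chosen button through its exit status, so a shell script really can branch on a dialog:

        if xmessage -buttons Yes:0,No:1 "Do you want to purge your deleted messages?"
        then
            echo "purging..."    # Yes was mapped to exit status 0
        fi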

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...