GNU is Not Unix

Preinstalled Hurd Now Available 324

Roger_Wilco writes "The GNU Web site is announcing that Spacetime Systems will now install GNU/Hurd as well as GNU/Linux. Hurd is Object Oriented, unlike Linux, so it may be a superior system in the long run."
  • It's been on the gnu.org page for a week or two.
  • How much would it take to make Linux's kernel object oriented? Are there any specific reasons why it isn't already? What's kept it from moving into the wonderful world of OO?
  • I have yet to have anyone convince me that there will be any substantial practical advantage to the HURD over Linux in the long term. And we've been waiting for it to arrive for /at least/ 10 years.

    I'm open to being convinced - but will need to be convinced.

  • by Rombuu ( 22914 ) on Tuesday February 01, 2000 @01:54PM (#1313795)
    Hurd is Object Oriented, unlike Linux, so it may be a superior system in the long run.

    Huh? Why would you say something like that? I mean, I can see arguing microkernel vs. monolithic kernel architectures, but saying an OS is better than another due to its programming methodology?

    Besides in the words of John Maynard Keynes, in the long run, we're all dead.
  • uhh..
    1. Linux isn't even 10 years old.
    2. GNU isn't even 10 years old
    3. HURD hasn't even been in development for half that time.

    Get your facts straight before you flame.
  • by Gokmop ( 147245 ) on Tuesday February 01, 2000 @01:57PM (#1313800) Homepage
    This is very, very cool. The HURD is awesome, and I for one am going to be following it very closely as it develops.

    For all of the people who are now going to bash the HURD as completely useless and still in development, keep in mind that linux was once at the point the HURD is at now. It's in development; there are probably more things still to be implemented than things that have already been implemented (any developers, feel free to jump in and comment at length about the status relative to the long-term goals of the project), but the project is VERY cool.

    It doesn't *compete* with linux in the way that you might think - it's sort of a UNIX, but it's based off of the Mach microkernel, which is a very different way of going about operating system design than your typical UNIX.

    Let's face it, of course linux rocks, but in terms of operating system design concepts, linux really didn't change much, it just took ideas that were already out there and created an excellent implementation of said ideas. The HURD on the other hand is based on a different worldview of operating systems and has a lot of promise.

    Besides, how bad can it be to have another choice for an operating system? You don't have to use Linux, BSD, Win95, or HURD, but it's good to know that the choice is out there. Personally, I don't know if I'm ready to join the HURD, but I'm watching the code, and I plan to jump in and contribute where I can.

    And to all you people who say that the term GNU/Linux is a total travesty of fairness on the part of the FSF, the HURD is pretty much the last component of the GNU system that is needed. Whether they choose to call that GNU/HURD or just HURD is up to them, but if you look at the HURD in terms of the framework of GNU's work, it explains a lot about why Stallman wants to call Linux GNU/Linux. (And I agree with him)

  • by BaptistDeathRay ( 126948 ) on Tuesday February 01, 2000 @01:57PM (#1313801) Homepage
    that as long as a software project is intelligently planned and developed, it really doesn't NEED to be OO. OO programming has some advantages to development (especially when it comes to designing a UI) but its main advantage is that if you use OO programming you MUST be more careful in your design (in order to really get any benefits OO programming gives you).

    As far as I understand, the Linux kernel is very consistently thought out and new additions are considered very carefully before being implemented. So is there any really significant gain to an OO kernel?



  • 2000 - 1984 [gnu.org] = 16, and 16 > 10 according to my calculations.

    --
  • by Ledge Kindred ( 82988 ) on Tuesday February 01, 2000 @02:01PM (#1313803)
    "...Hurd is Object Oriented, unlike Linux, so it may be a superior system in the long run."

    I'm sorry, usually I try to stay away from straight-flamage type comments, but I can't help myself this time.

    What the hell is that statement supposed to mean? What makes an "Object Oriented" OS "better" than another that's not "Object Oriented"? What do you mean by saying the OS is "Object Oriented" anyway? By extrapolation, does this mean we can now definitively say that since C++ is "Object Oriented," it may be a superior language to C in the long run? (And if so, is Hurd written in obviously superior C++ or obviously inferior C?)

    Hurd goes about supplying services to the system processes and end-user in a different way than Linux does. They're different. That's it. End of story. If you think "the way Hurd does it" is "better" then fine, that's your opinion. Better people than you have had similar opinions; go search the 'net and find that legendary exchange between Torvalds and Tanenbaum regarding why a microkernel is superior and why Linux is doomed to failure because it's monolithic.

    This is one of the most incredibly content-free, flame-inviting statements I've seen in the main body of an article on /. for a long time.

    -=-=-=-=-

  • Is this a portent of anti-Hurd FUD to come?

    Come on folks.... Having another free choice of OS is a GREAT THING!

    Let's not start fighting over this. That is something MS would instigate to split the Open Source community.

    Let's, at least, not do it to ourselves.

  • by Anonymous Coward on Tuesday February 01, 2000 @02:05PM (#1313806)
    Let's see...
    PC vs. Macintosh
    VHS vs. Beta
    Windows vs. OS/2
    Linux vs. Hurd

    Gee, if Linux has now become the inferior technology... it's gonna DOMINATE!!!!!!!!! YEEEEEEEESSSS!!!!!!!!! WORLD DOMINION!!!!!!!
  • History of the Hurd [gnu.org]:
    "RMS explains the relationship between the Hurd and Linux in The Hurd and Linux, where he mentions that the FSF started developing the Hurd in 1990. As of [Gnusletter, Nov. 1991], the Hurd (running on Mach) is GNU's official kernel. "

  • I don't think he was flaming. Isn't the concept of an OO-OS about ten (or more) years old?

    Actually, your post sounded more like flame than his did. Then again, maybe I misinterpreted BOTH of you!



  • I'm no Kernel Hacker, (#include ) but I reckon that it would be more trouble than it would be worth.

    Sure, OOP is nice if you design it that way from the get-go; to port something would require massive rewrites. (Well, if you wanted it to be truly OOP and not just a C to C++ hack.)

    And C remains the leader for coders currently. (Well, there might well be loads more old Fortran source in use, or a lot more people scripting in VB, but that's not OS material.) And thus it will most likely remain for a few more years.

    It might be interesting when the next generation of languages comes / becomes common. Stuff that's made for parallel processing, with better OOP features. (C++ is pretty nasty compared with, for instance, Java.)

    I agree with you though, OOP is very nice once you've got the hang of it. Personally I wouldn't want to develop in C, it just feels /dirty/. ;-)
  • I can't speak to the author of this article's particular intentions when they talked about how it would be better because it was OO (if it even IS), but...

    It used to be that structured programming in the imperative paradigm was going to save us all and allow us to write the killer apps and progress in computer science. Then it was the functional/logic languages (like LISP from the functional camp and PROLOG from the logic camp), and these days object-oriented programming is the flavor of the month. Look at the popularity of C++ on the MS platform, and the long hard push for Java acceptance. Jeez, and your favorite languages like Python and Perl, well, if they didn't start off with OO capabilities, those were certainly hastily added.

    It's the flavor of the month, man. In 10-15 years, I'd expect the object-oriented paradigm to be as popular as other past computer science fads. Not that it won't be useful; it's just that eventually people will stop thinking of it as the silver bullet to slay any programming problem.

  • Sir, I believe you are mistaken:

    1. I'll give you 1 out of 3. Linux 0.01 is Copyright 1991 Linus Torvalds.

    2. The GNU Project was launched in 1984 to develop a complete Unix-like operating system which is free software: the GNU system.

    3. From the History of the HURD [gnu.org] page:

    RMS explains the relationship between the Hurd and Linux in The Hurd and Linux [gnu.org], where he mentions that the FSF started developing the Hurd in 1990. As of [Gnusletter, Nov. 1991], the Hurd (running on Mach) is GNU's official kernel.

    So HURD predates Linux, but Linux got usable much more quickly.
  • by BJH ( 11355 ) on Tuesday February 01, 2000 @02:10PM (#1313815)

    1: A post on the same topic was made quite a while ago (search the /. archives if you're really interested.)

    2: WTF does he mean by "Object-Oriented"?! That phrase has absolutely zero meaning when applied to the HURD.

    The real difference between the HURD and Linux is that the HURD uses a full microkernel architecture, which allows you to do all sorts of cool stuff with "servers" that sit between your basic kernel and the rest of userspace.
  • by deusx ( 8442 ) on Tuesday February 01, 2000 @02:11PM (#1313816) Homepage
    I'm not sure that "Object Oriented" is the correct term to apply to the Hurd's microkernel architecture. I may be wrong.

    As for why Linux is not like Hurd, read The Torvalds / Tanenbaum debates [dartmouth.edu] or do a random search on "Linus," "Tanenbaum", and "Microkernel" [google.com]. Linus details all of the reasons why Linux is monolithic versus being broken up into micro modules. Very historic, in Linux terms.



  • PC vs. Macintosh
    VHS vs. Beta
    Windows vs. OS/2
    Linux vs. Hurd

    Gee, if Linux has now become the inferior technology... it's gonna DOMINATE!!!!!!!!!


    Boy, there is an asinine statement... the winners in all those battles won because they were the superior product.

    PCs -- cheaper and more open
    VHS -- cheaper and more open
    Windows -- actually had working device drivers
    Linux -- we'll see, but the word momentum comes to mind.
  • by image ( 13487 ) on Tuesday February 01, 2000 @02:12PM (#1313818) Homepage
    I've noticed a few posts asking what the advantage of rewriting the kernel in a language like C++ would be. I don't know the answer, but in the linux kernel mailing list faq [tux.org], question 1.4 states:

    Why don't we rewrite the Linux kernel in C++?


    (ADB [Andrew D. Balsa]) Again, this has to do with practical and theoretical reasons. On the practical side, when Linux got started gcc didn't have an efficient C++ implementation, and some people would argue that even today it doesn't. Also there are many more C programmers than C++ programmers around. On theoretical grounds, examples of OS's implemented in Object Oriented languages are rare (Java-OS and Oberon System 3 come to mind), and the advantages of this approach are not quite clear cut (for OS design, that is; for GUI implementation KDE is a good example that C++ beats plain C any day).


    and


    (REW [Roger E. Wolff]) In the dark old days, in the time that most of you hadn't even heard of the word "Linux", the kernel was once modified to be compiled under g++. That lasted for a few revisions. People complained about the performance drop. It turned out that compiling a piece of C code with g++ would give you worse code. It shouldn't have made a difference, but it did. Been there, done that.



    And question 1.5:

    Why is the Linux kernel monolithic? Why don't we rewrite it as a microkernel?


    (ADB) No opinions here, just a few pointers. Linux has been implemented as a "personality" on top of a modified version of the Mach3 microkernel. This is a fully functional piece of code, known as MkLinux. The project was in part funded by Apple, and as such it was running at first on PowerPC Macs. But an x86 version is available, with fully open source code. Similarly, the Hurd (the GNU kernel) is being implemented on top of Mach3.

    There is a historical Usenet thread related to this subject, dating back from 1992, with posts from Linus, Andrew Tanenbaum, Roger Wolff, Theodore Y T'so, David Miller and others. Nice reading on a rainy afternoon. It's fascinating to see how some predictions (which seemed rather reasonable at the time) have proved wrong over
    the years (for example, that we would all be using RISC chips by 1998).


  • FYI Mach has been used *with* Unices for some time - i.e., MkLinux. Could be wrong, but isn't HP-UX also microkernel based? And Minix. So using a microkernel like Mach isn't the different part. In fact, writing an OS from the microkernel up (instead of modifying an existing OS à la Unix to run on top of it) has also been done before (QNX). So, cool, perhaps. Revolutionary, no.
  • When he went to Xerox PARC in, what, '76 or '77, Steve Jobs said they showed him three things:

    1 - the Windowing interface
    2 - Peer to Peer networking
    3 - Object Oriented programming

    So, that's like 23 or 24 years ago that a working model of it was definitely laid out. It'll probably be 30 years old by the end of this /. discussion.
  • I think that "may" be the keyword here. Many intelligent people believe that OO concepts can help build systems (OS or otherwise) that are easier to maintainable and scale.

    I think that justifies the use of "may".

    "This is one of the most incredibly content-free, flame-inviting statements I've seen in the main body of an article on /. for a long time."

    Thank goodness you stepped up to the plate, Ghost Rider!
  • RMS explains the relationship between the Hurd and Linux in The Hurd and Linux, where he mentions that the FSF started developing the Hurd in 1990. As of [Gnusletter, Nov. 1991], the Hurd (running on Mach) is GNU's official kernel. "

    Well, that's at least 10 years. :)

    This Open Source thing, the artist formerly known as Free Software (yeah I know, there's a bigger difference than that), is a lot older than 1999.

  • by hey! ( 33014 ) on Tuesday February 01, 2000 @02:15PM (#1313825) Homepage Journal
    if you use OO programming you MUST be more careful in your design (in order to really get any benefits OO programming gives you).

    Object oriented design gives you more dimensions in which to design -- thus more opportunities to screw up, as well as more ways to simplify projects.

    I'm curious exactly what it means for an operating system to be object oriented. Unless you put a very precise definition on it, anything can be called object oriented. It used to be people tried to say that you need three different things to qualify: encapsulation, inheritance and polymorphism.

    Encapsulation is a snap for any OS worthy of the name. For files, you can work at the file descriptor level or the file handle level. Very few applications work by setting device registers these days. Unix does a nice job of stretching the file abstraction to fit over things like serial ports.

    Inheritance is a tricky one. I can see some interesting things such as abstracting out different kinds of files. Unix basically provides only a couple of different primitive file types, but it would be interesting to be able to create subclasses of randomly addressable files for indexed files, balanced B-trees, etc. At this stage it kind of blurs the line between the operating system, utility libraries and applications.

    Is this a good thing? It beats the hell out of me. It might be pretty cool. I think, for example, that ReiserFS is interesting because it allows you to efficiently create data structures that would normally require specialized file structures, using standard filesystem operations. This extends the range of simple scripting type applications. On the other hand, once you start subclassing well understood objects such as files and directories, some of the simplicity of using a well understood model such as files/directories goes out the window.
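
    To make the encapsulation point concrete, here is a minimal sketch (purely illustrative, not taken from any kernel or from this thread): the exact same open()/read() calls work whether the path names a regular file, a FIFO, or a serial port, because the kernel dispatches to the right driver behind the descriptor.

    (C)
    /* Illustrative: the file-descriptor abstraction is polymorphic from
       userland's point of view. Pass any path -- /etc/hostname, a FIFO,
       /dev/ttyS0 -- and the same calls work. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        const char *path = (argc > 1) ? argv[1] : "/etc/hostname";
        char buf[256];

        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        ssize_t n = read(fd, buf, sizeof buf);  /* same call, any device */
        if (n >= 0)
            printf("read %zd bytes from %s\n", n, path);

        close(fd);
        return 0;
    }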
  • Just because the Linux kernel is written in C doesn't mean it's not OO. OO is a state of mind, a philosophy; you can write the OO way in assembly if you want. The Linux kernel is OO. Structures where all the members are pointers to functions look a lot like C++ or Java classes; they define strict interfaces to different components. These interfaces provide encapsulation (OO) of the actual data. The only difference is that construction and destruction are done manually. Other OO techniques may be harder to achieve in C, but still possible.
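
    A minimal sketch of that pattern (illustrative only, loosely in the spirit of the kernel's operations tables, not copied from them):

    (C)
    /* An "interface" as a struct of function pointers; each driver
       supplies its own implementation and callers dispatch through it. */
    #include <stddef.h>
    #include <stdio.h>

    struct block_ops {
        int (*read)(void *dev, char *buf, size_t len);
        int (*write)(void *dev, const char *buf, size_t len);
    };

    /* one concrete "driver" */
    static int ram_read(void *dev, char *buf, size_t len)
    {
        (void)dev;
        for (size_t i = 0; i < len; i++)
            buf[i] = 'A';
        return (int)len;
    }

    static int ram_write(void *dev, const char *buf, size_t len)
    {
        (void)dev; (void)buf;
        return (int)len;               /* pretend the data was stored */
    }

    static const struct block_ops ram_ops = { ram_read, ram_write };

    /* callers see only the interface, never the implementation */
    static int do_read(const struct block_ops *ops, void *dev,
                       char *buf, size_t len)
    {
        return ops->read(dev, buf, len);
    }

    int main(void)
    {
        char buf[8];
        int n = do_read(&ram_ops, NULL, buf, sizeof buf);
        printf("read %d bytes, first byte '%c'\n", n, buf[0]);
        return 0;
    }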
  • Thank god you wrote that. I read that story, and I immediately clicked to check if someone else had written something, or I would have written myself.

    What a bonehead statement. It reminds me of that Dilbert where the PHB tells Dilbert that they need a new SQL server, and, to test his boss out, Dilbert asks him, "What color should we get?"

    PHB replies "I think mauve has the most RAM."

    Not too far from the present situation.
  • It was the device drivers that didn't come with OS/2 that were the problem...



  • a convenient install for the HURD. Take a look at the install instructions on the Debian GNU/Hurd website; they're inconvenient at best.
  • Yeh, so moderate me down :-)

    IMHO Andover has palmed off some turkeys on slashdot in the guise of helping with the load. Roblimo is the prime example. I have gotten to the point of almost skipping his posts for their inflammability. It's as if Andover is trying to increase the circulation by hiring a headline writer away from the Weekly Weird News -- "World War Two Bomber Found on Moon!".

    Others of his posts have also thrown inflammatory commentary into the pot. It smells of insecurity, as if he's afraid no one will want to read the article unless there's some exciting nonsense to get the bile up.

    --
  • So, a quick look on Altavista [altavista.com] led me to this [www.desy.de] page, where I found out the (this is low-level... only because I have no real idea what OOP really means) gist of OOP is that the objects have control over starting and killing themselves, and they communicate with each other. So, say, would that mean that a new device would load its driver only when you called on that device... and kill it after a period of inaction? (Instead of, say, the kernel loading the module at the user's/root's command?)

    If so, and that's the "only" advantage to HURD/the idea that HURD is OO programmed... how is that better? Is it a more efficient way of using processor time/system resources/memory?

    Or.. what's the dealio?

    (Other than that, though, I'd have to say I agree with you when you say:


    "This is one of the most incredibly content-free, flame-inviting statements I've seen in the main body of an article on /. for a long time.")

  • I can sum up my opinion on this in one statement: Who cares? Seriously, there are enough users out there that both can survive, and compete together. There are already substantial user bases for Linux, FreeBSD, BeOS, etc. One more operating system out there won't change things substantially in the short run. However, hopefully in the long run all the different choices out there will establish their own strengths. These strengths can influence the other operating systems as well, causing them to clone and improve all the good features of the other OSes. We have seen this in the past with the creation of GNOME and KDE, which as far as I know were designed to bring the usability of Windows and Mac to Linux.

    --

  • (C++ is pretty nasty compared with for instance Java.)

    Why are you saying this? Java is getting to be more and more like C++. For example, templates will probably be added to the next version. And the reason these new features are being added is because they're actually missed by C++ programmers who write Java. I had to write Java once, and I would not go back. It's buggy, it's slow and it's not PORTABLE. Write once, run everywhere, right! Write once, DEBUG everywhere is more like it.
  • by Weezul ( 52464 ) on Tuesday February 01, 2000 @02:23PM (#1313837)
    I have yet to have anyone convince me that there will be any substantial practical advantage to the HURD over Linux in the long term. And we've been waiting for it to arrive for /at least/ 10 years.

    The best Linux-using lay person argument for the Hurd is a Windows vs. Linux analogy, i.e. the Hurd just lets the user screw with more stuff. The counter argument is basically "sure, eating (abstraction) is important, but eating too much will make you fat and slow."

    Personally, I suspect the Hurd (or some other microkernel) will ultimately depose Linux by addicting people to the additional power and flexibility (i.e. past experience will be the only convincing argument for the people who eventually switch). This does not necessarily mean that the current incarnation of the Hurd has the right stuff.

    Analogy continued: Abstraction is like eating in that you must do it tomorrow too, i.e. what is a reasonable compromise between usability (abstraction) and efficiency today may be starving tomorrow (because the abstraction is necessary to process the increased quantity of information).

    Also, the Hurd has some interesting ideas, but I am concerned that it *may* be too much the "bastard child of Unix" to really provide the abstract interface people will need in the future, i.e. translators are a really neat idea, but I am unconvinced that they are the best we could be doing. I have this feeling that the real revolution would somehow involve the scripting languages in a more fundamental way. Who knows.

    It is worth mentioning that abstract and structured does not always imply slow, but we currently do not depend much on our compilers for optimisation since we are increasing chip speed so fast. Eventually, we will hit a limit in chip speed and need more structured languages which allow more automatic optimisations by the compiler.

    Example: It is possible to do global analysis of functional code that you would never dream of doing to C code.

    Example: some of the fastest OSes out there are microkernels which could be written in a portable high level language, but no portable high level language has the balls to perform the optimisations (higher order functions, i.e. functions which write functions, or structured self-modifying code).

    Jeff
  • by MisterE ( 147118 ) on Tuesday February 01, 2000 @02:23PM (#1313838)
    This is starting to remind me of the vi/emacs religious wars.

    The fact is that OO is a design paradigm. It isn't any "better" or "worse" than other design paradigms. In fact, it owes much to the structured software design movement from whence it was derived. It should all compile to the same code... it's a question of which one fits the particular mind set you have and the problem you are trying to solve.

  • The "winning" product must have had something going for it in order to dominate. But I think the original implication was that inferior technology kept winning.

    PC's were cheaper, but not more open. Somebody reverse engineered IBM's BIOS. I don't think anything's stopping anyone from doing a clean-room reverse-engineering of Apple ROM's. Technically speaking, PC's sucked.

    VHS may have been cheaper or had more widespread support. But ask the professional production studios and TV stations why they still use Beta...

    BTW, get a clue, the more you speak the dumber you look if you haven't got your facts straight. If you've seriously used OS/2 at all, you'll know that when Warp 3 came out, the most common devices and good quality brands were quite well supported. In fact, it's only been recently that WinNT itself has overtaken OS/2 in terms of device driver availability.

    Besides, I don't see device driver availability a drawback to Linux getting this far, OS/2 still has better support in that regard.
  • by Anonymous Coward
    The kernel was once written in C++ (from 0.99p10 to 0.99p11), but that didn't last long. You can see it in this 1.0 changelog [kernel.org]. It never really became OO or used any C++ features, because it caused too many problems.
  • ...but the 'inflammatory' comment was actually the story submitter's, not roblimo's. This is what italics mean in a story header.

    (Not that I disagree with you about roblimo's foot-in-the-mouth way of presenting stories, but this just doesn't happen to be an example of that.)

    I.
  • by Anonymous Coward
    Hey, if you like OOD, just ask Alan Cox why he won't work on an OOD kernel. I figure he is more qualified than any of the C++, QT, gotta-abstract-everything persons that most programmers are. My vote (not that I am Alan) is that a C++ rewrite would be a tragedy. A good system is in its overall design, not in the handicaps we have to take care of so the general IDE programmers can work with it. Applications don't need gung-ho kernel programming extensions anywho.
  • by Eythain ( 120617 ) on Tuesday February 01, 2000 @02:32PM (#1313847)
    And to all you people who say that the term GNU/Linux is a total travesty of fairness on the part of the FSF, the HURD is pretty much the last component of the GNU system that is needed. Whether they choose to call that GNU/HURD or just HURD is up to them.

    Wouldn't the name be GNU? The GNU project (GNU's Not Unix) was designed to make a complete Unix replacement (forgive me for saying so, but a good thing Linus came along, or we'd still be ten years away from the revolution (kidding!)).

    The Hurd would be the long-awaited final component of the FSF's GNU project, and RMS for one has certainly stressed that the kernel is merely a small part of the OS, so calling it GNU/HURD doesn't make sense from that perspective. Using the name of the kernel for the OS is the Linux way.

    -- Eythain

  • "Boy, there is an asinine statement... the winners in all those battles won because they were the superior product."

    Prove it.

    "PCs -- cheaper and more open
    VHS -- cheaper and more open
    Windows -- actually had working device drivers
    Linux -- we'll see, but the word momentum comes to mind.
    "

    Yes, folks, a man just tried to say that Windows was a technologically superior product to OS/2. Feel free to start laughing until you can't breathe now.

    I've used Windows, and I've used OS/2 (among other OSes). OS/2 was stabler, and had a better interface. Where it lost out was developer backing -- this is because Microsoft pushed hard with cheap VC licences early on, and because OS/2 did Win 3.1 better than Win 3.1, and was used mainly as a way to keep legacy technology around. Developers for OS/2 were few and far between, as were the few native OS/2 apps. Windows did not win because it was superior, any more than Homo sapiens are superior to the dinosaurs. OS/2 was already being killed by IBM before Microsoft decided Windows really was going to become 32-bit. This is because IBM is a hardware and services company, not a software company.

    You've also conveniently forgotten about some other key points in your gross oversimplification. Linux may have momentum, but so does Windows -- a great deal more, if we look at raw usage numbers. Linux vs Hurd will be a moot point, as you can run Linux under Hurd, and because Linux is good for things Hurd is not (embedded), and vice versa.

    VHS vs Beta. Hmm... Beta had stereo and such sooner, but their format (the actual cassette format) was limited to 1 hour of SP quality, vs. VHS's 2 hours. I'm sure there was marketing involved, too ;-)

    And PCs vs. the Macintosh. I think Apple has done a good job of almost killing the platform in various misguided attempts to sell more of their hardware. Just like IBM and Commodore, Apple is about selling hardware -- not software. This was causing a lot of NIH syndrome, and general wrongheadedness which Steve Jobs seems to be curing at a fast rate. I wouldn't call PCs the "clear" winner, because computing is all about variety and choice -- not stagnation and single platform dominance (look at how Linux, et al., arose).
    ---
  • PC's were cheaper, but not more open.

    Surely you are not arguing that PCs were less open than c 84 - 86 Macs?

    But ask the professional production studios and TV stations why they still use Beta...

    Great, so Sony has the huge production studio and TV station market, instead of that small, insignificant home market.

    If you've seriously used OS/2 at all, you'll know that when Warp 3 came out, the most common devices and good quality brands were quite well supported. In fact, it's only been recently that WinNT itself has overtaken OS/2 in terms of device driver availability.

    Great, by the time Warp 3 came out, the game was over.... most of the ISVs had abandoned development for the platform. Warp could have been the greatest thing since sliced bread and it wouldn't have mattered at that point.
    Oh, and I never ran OS/2, because it wouldn't run on my hardware. (Ah, the irony!)

    Besides, I don't see device driver availability a drawback to Linux getting this far

    Yeah, all the way to, what, 5% of the market... even OS/2 did better than that in its day...
  • by redled ( 10595 ) on Tuesday February 01, 2000 @02:36PM (#1313853)
    I found this amusing. From Debian's HURD Page [gnu.org], an informative site, by the way, we can learn:

    "According to Thomas Bushnell, BSG, the primary architect of the Hurd, ```Hurd' stands for `Hird of Unix-Replacing Daemons'. And, then, `Hird' stands for `Hurd of Interfaces Representing Depth'."

    We also find some more information on the page, like it uses the "GNU C library," not C++ as other comments suggest, and its main strong points seem to be:

    "Unlike other popular kernel software, the Hurd has an object-oriented structure that allows it to evolve without compromising its design. This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten."

    "The Hurd interfaces are designed to allow transparent network clusters (collectives), although this feature has not yet been implemented."

    "It is possible to develop and test new Hurd kernel components without rebooting the machine (not even accidentally). Running your own kernel components doesn't interfere with other users, and so no special system privileges are required."



    --

  • Yes, folks, a man just tried to say that Windows was a technologically superior product to OS/2. Feel free to start laughing until you can't breathe now.

    I didn't see the word technical in my post anywhere. Try reading things before you post, moron.

    Besides, no matter how well OS/2 was designed, the fact that in its first several iterations it didn't recognize 80% of the hardware out there ensured the market's righteous rejection of the product.

    Windows did not win because it was superior, any more than Homo sapiens are superior to the dinosaurs

    Huh? I'd bet on Homo sapiens over the dinosaur any day of the week.

    Linux may have momentum, but so does Windows -- a great deal more, if we look at raw usage numbers.

    Do you know what the word momentum means? Are you seriously saying that the rate of growth of Linux is less than that of windows over, say, the past 12 months? If so, you are both foolish and wrong.




  • by spaceorb ( 125782 ) on Tuesday February 01, 2000 @02:47PM (#1313866)
    So far we have Gnulix for GNU/Linux. What's next, Gnurd for GNU/Hurd? I hereby decree that everyone seeking to flame GNU/Hurd users refer to them as GNURDS.

  • Then it was the functional/logic languages (like LISP from the functional camp and PROLOG from the logic camp)

    I wouldn't want to deceive you, but LISP is far older than structured programming; in fact there is only one programming language (and programming style) that is older than Lisp, and that is FORTRAN (and pure imperative style). Some may argue that assembly language was here before, and they would be right, but the difference is that assembly isn't a language per se; it isn't compiled/interpreted like other languages but is translated.

    Prolog OTOH is from the eighties.
  • by be-fan ( 61476 ) on Tuesday February 01, 2000 @02:49PM (#1313871)
    Object oriented means that the system uses objects to communicate with servers. And it is FAR superior to non OO systems for many reasons.
    1. It naturally fits the server/client model. The app essentially "logs on" to the object, which serves as a client to the server. For example, BeOS (which is totally OO) uses several graphics objects to manage the connection to the graphics server. This connection is buffered, and several functions have to have access to the buffer. This is possible in C, but so much more elegant in C++.
    2. It allows the system API to be consolidated. Draw() called inside a window can mean different things than Draw() called in a bitmap. (Ideally, this kind of system needs the kind of object services in Visual C++, since remembering the parameters becomes harder.) Not only is this more elegant, it is easier to learn.
    3. It allows system APIs to evolve more easily over time. Ever program Windows and call the DirectDrawCreateEx or CreateWindowEx functions? They are a waste of code space. You don't need those kinds of hacks in an OO system. (The functions in IDirectDraw7 and IDirectDraw work differently, but have the same name.) I also think it may reduce code bloat over time, since it is so easy to extend an old object by inheriting it in a new one. This way you shave off a lot of overlapping code.
    4. It is very conducive to multithreading. Use BeOS for 5 minutes and wonder at the way it can play 12 MP3s in reverse while copying a large file while effortlessly moving through the desktop. The system never feels like it is working hard at all. Part of it is that the OS is very clean and efficient, but it's also the fact that these different jobs use different resources. When copying a file, very little processor is required. But if the OS is not heavily multithreaded, the copying will slow down the rest of the machine becase the copy thread is just waiting for the harddrive.

  • One of the (few) ways the Linux kernel has innovated*** since the Tanenbaum/Torvalds argument is the introduction of loadable modules.

    Isn't it kinda ironic that one of the few ways Linux innovated was by getting closer to the concept of a microkernel and departing from the concept of monolithic kernel ;)
  • by DerMarlboro ( 64469 ) on Tuesday February 01, 2000 @02:52PM (#1313875)
    Kaboom! I expected a couple of know-it-alls to sneer at the suggestion that object-oriented design is better than a procedural design, but I never expected such a violent response.

    How can an OS be object-oriented?! What the hell is that supposed to mean?!

    Settle down. The poster (I'm sure) wasn't trying to say that the design methodology or language used to create a piece of software will, alone, decide its usefulness. A lot of us who write both C and C++ on a regular basis get used to the ease of conceptualizing object-oriented code, and are elated when new projects use a framework we favor. I would prefer to hack an OS written in C++. I think the code would probably be much easier to get my head around. Naturally, I would rather USE whichever OS was fast, reliable, supported, etc.

    Of course, that which gets hacked most frequently tends to become fast, reliable, supported, etc, etc.

    The bottom line is this: OO vs. Anti-OO is a holy war, and the crusades will rage on. If you don't like OO, fine. Write C, or Fortran, or Cobol, or assembler for all I care. Object-oriented languages were created for a reason. Some people find advantages in them. And to those people, all other factors held constant, an OS written in C++ would clearly be an improvement.
  • IIRC, the Hurd is coded in C. Perhaps by OOD they are referring to the fact that the Mach microkernel is an object, as are all of the message servers that work with it as opposed to the monolithic design of the Linux kernel. OOD means just that... object-oriented DESIGN. This may or may not have anything at all to do with OOP, which is object-oriented PROGRAMMING (such as Java or C++).
  • Please, keep in mind that this is /not/ meant to be flamish in nature. If it so appears, I shall strive for greater subtlety in future.

    Well beyond the point that this is oldish news, and has appeared on /. already, this item is further proof that /. really, really needs some kind of sub-editor to catch these things. After all, the banner says "News for Nerds", and any organ which disseminates news has to take care with such matters to ensure that it can continue to be taken seriously.

    A phrase like "Hurd is Object Oriented, unlike Linux, so it may be a superior system in the long run." is so manifestly ambiguous and potentially contentious that it's a crying shame to see it writ upon my screen so. Time for a rethink that doesn't involve just hacking Slash, chaps.

    On-topic: HURD does indeed look très groovy, and I'm looking forward to giving it a try. I wonder if it runs on VMware? I guess there's only one way to find out ...
  • by elegant7x ( 142766 ) on Tuesday February 01, 2000 @02:58PM (#1313878)
    How much would it take to make Linux's kernel object oriented?

    Well, it would take as much work as making an entirely new kernel. And, if you did it, it wouldn't be Linux anymore. I suppose you could call it Linux, and it could certainly be able to do all the same things as Linux. Maybe you could call it ObjectLinux or something like that. (Of course, given the stupid names that the Open Source community sometimes comes up with, I wouldn't be surprised if they called it ooplix ;)

    Anyway, OO doesn't make programs faster, or give better performance, what it does, is make things easier to program, and a lot easier to update and maintain. This would probably be a good thing for an open source project, and a clear, well designed object hierarchy would make it much easier for people to start hacking around.

    On the other hand, Linux has no real lack of development support, and kernel hacking isn't really something that the timid are going to be diving into anyway. Anything that can be done in OO can be done in a non-object-oriented language (as long as it's Turing-complete :). And I don't think a kernel is really an area that needs to be OO (other things like the GUI, etc. should probably be done that way). They still use assembly in parts of the kernel.

    Amber Yuan (--ell7)
  • OK, so you're running a Linux kernel. Which is released under the GNU General Public License. If all you are running is a kernel, then I'm impressed. Did you post your comment by whistling down your phone line? No. I bet you had to run a shell. Which is also almost certainly released under the GPL. Maybe you used cat to compose your post, but I doubt it. You probably used a text editor like emacs (originated by RMS himself) to write it. Then you probably connected to your ISP with pppd. Guess which license?

    And every single bit of your Gnu/Linux setup (if you use a decent distribution) could have been (and certainly was) built with gcc and the rest of the bin-utils package. Guess who owns the copyleft on that?

    I'm getting bored now, but please learn your history. RMS wanted to create a free OS. He chose to implement it as a UNIX rather than anything else because of the cultural heritage of the UNIX community. He would have chosen a better architecture if he could, but he made a pragmatic decision.

    Then, rather than start with a kernel, he started with the tools that make an OS usable, like a text editor (alright emacs is just a little bit more than an editor), a compiler (try writing one yourself if you're bored for a few decades), text processing utilities etc.

    Then the GNU project was almost complete, and it was good... but before they got a chance to start serious work on the gorgeous, architecturally clean and fiendish-to-debug Hurd kernel that was their dream, a bright lad from Finland turned up with an incomplete, but functional, kernel. And they saw that it was good, if a bit hairy and flaky. But they persuaded the great Finn to release it under their licence. And GNU/Linux was born. That was in 1991. Please remember, that was a *very* long time ago. FreeBSD didn't exist. X was still closed. And the WWW probably had fewer pages than the Great Library of Alexandria. Ancient times indeed.

    Linux would have been a rather neat toy that had once run on a single 8086 if it weren't for the GNU project.

    Linus gave us a kernel, because he had the courage to share it.

    RMS gave us Gnu/Linux, because he is a (very hairy) visionary. What he really gave us were two critical things. Forget emacs. Forget gcc. He gave us a vision, and he gave us the GPL.

    Both the vision and the GPL have flaws, but until you can come up with any single thing as good as one of these, please remember that it IS GNU/Linux.

    I apologise for the rant.

    Ceci n'est pas un sig
  • by Anonymous Coward on Tuesday February 01, 2000 @03:04PM (#1313893)
    "But if the OS is not heavily multithreaded, the copying will slow down the rest of the machine becase [sic] the copy thread is just waiting for the harddrive."

    You're obviously not a programmer, just a user who was a little too taken in by Be's marketing department, so I'll ignore the artificial distinction between "multithreaded" and "heavily multithreaded" that you made (presumably prompted by Be marketing literature). Instead, I'll merely point out how linux handles the situation you gave. The copying process (or thread) is waiting on the disk, so it is no longer on the runqueue, and is not considered for scheduling until the disk is finished. Since it is not on the runqueue, it does not affect the processes that are.

    In layman's terms, Linux and Windows NT both behave this way. Be's (exaggerated by many) snappiness is a result of being well written, not simply because it is in C++, or has a lot of threads all blocked doing nothing.
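
    A quick way to see this for yourself (a hypothetical demo, not from any kernel source or from this thread): a process blocked on I/O sits off the run queue and consumes no CPU, so it doesn't slow a busy sibling down at all.

    (C)
    /* Child blocks on an empty pipe (sleeps in the kernel, uses no CPU)
       while the parent does CPU-bound work; the blocked child costs the
       parent nothing. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                   /* child: blocks on read() */
            char c;
            close(fds[1]);
            read(fds[0], &c, 1);          /* off the run queue until data arrives */
            _exit(0);
        }

        clock_t start = clock();          /* parent: CPU-bound busy work */
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 200000000UL; i++)
            sum += i;
        printf("busy loop used %.2f s of CPU\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        write(fds[1], "x", 1);            /* wake the child so it can exit */
        wait(NULL);
        return 0;
    }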

    Just because Be forces programmers to use a lot of threads doesn't make it any easier for Be programs to break up large tasks into parallel threads. Linux and NT programs can do it just as easily. The problem is, concurrent programming is difficult. My dual CPU box running Linux has just as many processes ready to run as Be, since 99% of those threads in your "heavily threaded" apps are doing absolutely nothing, while 1 or 2 threads do 99% of the work. (Percents guesstimated; feel free to test this yourself.)

    BeOS is neat, but there is no reason to let rabid advocacy cloud your ability to think rationally about what people tell you. Do you think that Be's marketing department might have some vested interests, and might not tell the whole story? No... of course not, everything they say must be the whole, un-spun truth.
  • No one "reversed engineered" the PC BIOS. It was under an open patent. IBM did not hand them the design, but the legal lisence the BIOS was under was open.

    Not so. Compaq did a clean-room implementation of the PC BIOS, instead of using the published code. Several cloners who used the published code were sued by IBM (anyone remember Victor?) and faded away, and Compaq withstood its suit and thrived.


    (Disclaimer: Yes, I work for Compaq, but that all happened long, long before I got here.)
    --

  • by dsplat ( 73054 ) on Tuesday February 01, 2000 @03:09PM (#1313902)
    Linux, HURD and FreeBSD share quite a bit. A huge number of tools port between them with little effort. We welcome each other at conferences and users' groups. Etc., etc. In the end, the best open source OS will win. The best open source desktop OS will run on desktops. The other best open source desktop OS will run on the other desktops. The best open source web server OS .... It's about choice. It's about comparing them objectively for different uses. It is about learning from each one's strengths and bringing that knowledge from one to another.

    Every open source OS is stronger because they are all a training ground for open source programmers. You don't have to use a single book or a single kernel's source code to learn the One True Way (tm). I'm using Linux (sorry Richard, GNU/Linux) at home right now. I may switch to FreeBSD or HURD, or dual boot. And my data and applications will come with me. Standards are common data formats and common protocols, not a specific version of a specific program or OS.

  • The reason is clear: Hemos realizes that including an inflammatory editorial at the end of a post about an operating system being superior to Linux will increase comments by at least 100%. More page views = More Money!

    In the S-1 filing, you'll see that Andover.net has structured the terms of Slashdot's acquisition agreement so that there are 5 to 20 million incentive dollars riding on the ability of Slashdot's crew to double the number of page views during the next two years. For information see: Andover.net's S1 filing [edgar-online.com]
  • Yes, this news has been on the GNU site for at least two weeks. And believe it or not, the original Slashdot article [slashdot.org] even predates that. What's kind of funny is that Roblimo himself posted the original story :)

    Not to flame anyone (like Roblimo), but I just thought it was kind of amusing. As soon as I saw this story I thought "hey, another one! Two companies providing HURD, that's pretty good. Oh, no, wait a second..."

    Oh well :)
  • I guess this'll qualify me for a Fields Medal, then...


    (Haskell)
    factors :: Integer -> [Integer]
    factors largePrime = [largePrime, 1]

    (Scheme)
    (define (factors large-prime) (values large-prime 1))

    (Perl)
    sub factors { return (shift(), 1); }
  • by Tim Behrendsen ( 89573 ) on Tuesday February 01, 2000 @03:16PM (#1313907)

    Is Hurd fully Linux compatible? Device drivers, XFree86, desktops, the whole shebang?

    Particularly device drivers. If everyone has to rewrite device drivers for Hurd, then they might as well close up shop.


    --

  • I would just like to put in my BeOS side of things here. You people talk about HURD as if it were something new and revolutionary. BeOS predates HURD by one year. Be was formed in 1990, and GNU hadn't even decided on the Mach kernel until well into 1991. By 1997, BeOS had reached R3 and had been released on Intel, at which time the OS was production quality.

    BeOS has many of the things the HURD has. It is fully microkernel, it has incredible interapp communication, it is not only OO in system design but in API design, and it is designed to be extended without crufting up the system. In addition, it has a fully journaled filesystem with database attributes. (Not a buzzword. BeOS never fscks, and when you use the metadata feature for files and the regular expression searching you'll never want to go back.) It has a very fast messaging system (which I believe is faster than Mach's) and it has extensive API support for media, etc.

    Of course, BeOS did not come up with all of these. The OO idea was pioneered by NeXT and microkernels have been around a long time. So give credit where it's due. BeOS has done most of what HURD does (except some security and abstraction stuff like coding the OS while it's running) and is here now. And BeOS too does not deserve all the credit for it. All these ideas have been pioneered and in fact successfully implemented before.
  • by Anonymous Coward
    It isn't coded in C++. It is coded in C. I know cuz I run it... It is object-oriented in that the pieces act like objects and every piece of the system is extensible. Before you judge the HURD, have a look at it at least.
  • What do you mean by saying the OS is "Object Oriented" anyway?

    Well, it is. I haven't delved deeply into the code, but basically the Hurd operates by providing interfaces and allowing programs to implement these interfaces (similar to an abstract class/implementation relationship in C++). The reason this is a Good Thing[tm] is that it allows new filesystems to be easily developed and distributed independently of the Hurd itself.

    Where the previous poster went wrong was in implying that this is a new thing -- even Unix files are polymorphic, and there's stuff in the Linux kernel that resembles an object system. The real benefit in the Hurd is that it's all userland, so you can, for example, write, compile, and mount a new filesystem in your home directory without needing a single special privilege.

    Daniel
  • by Anonymous Coward on Tuesday February 01, 2000 @04:00PM (#1313937)

    Why don't we rewrite the Linux kernel in C++?

    Because C++ sucks and it is evil.

    Why is the Linux kernel monolithic? Why don't we rewrite it as a microkernel?

    Because then Linus would have to admit to Tanenbaum that he was wrong.

    Hey, put that flamebait marker down... Hey, I said put it down! No!

  • Do you know what the word momentum means?

    I know! I know!

    The momentum is the partial derivative of the Lagrangian with respect to the time derivative of a coordinate!

    Sorry, I'm feeling a bit silly today.

  • by Wyvern13 ( 95556 ) on Tuesday February 01, 2000 @04:13PM (#1313941)
    I think all of you who are dismissing the effect of OOP on the performance of an OS are being very one-dimensional. Sure, it all ends up as machine code one way or another; the difference is in the human factor, the psychological element of programming. Programming is, after all, just a metaphor for computer functionality. The GNU website says it best...

    It's built to survive: Unlike other popular kernel software, the Hurd has an object-oriented structure that allows it to evolve without compromising its design. This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten.

    In the initial version the methodology doesn't matter much, but as the OS matures, methodology begins to play a larger role. It's a lot easier to revise an OOP system, maintaining elegance and performance, than it is to revise a conventional structured program. Remember, the article says "better in the long run", which it is. In programming, you must always remember the human element; some might say it's the most important of all.

  • Woah... How is the Linux architecture out of date? It works... what else matters? So what if it's not OO. I don't care, I want something that works.

    Furthermore, what do you mean Linux isn't OO? What makes an OS OO? Object oriented has nothing to do with the language something is written in, rather how it is written. The linux kernel is quite well laid out.

    As for MS kicking ass, that has nothing to do with the TCP/IP stacks being OO, but with how the stacks are written. An OO environment will almost always slow an app down. The reason for this is that for an environment to support the basic tenets of an OO language, many decisions have to be delayed until runtime. C++ doesn't really take this all that far (hence people often dispute whether it's a real OO language), but take a look at Objective-C or Smalltalk to see what this really means. The answer is decreased performance (due to the complex runtime environment) but increased maintainability and extensibility.

    Abandon C? HA! C++ has its uses, mainly in that it's OO enough to make GUI app development easier. But you still can't reach the performance of straight C with C++. I'd suggest, however, that you one day check out an OpenStep environment and see what a real OO language can do. Check out Interface Builder on Mac OS X or NeXTStep; that could not have been written in as crappy a language as C++. C++ was created as a better C, and I think it's best left at that.

    Last thing, I don't want to combat MS. I could care less. The "Us vs Microsoft" attitude is really getting old.
  • Basically, you load the Mach microkernel, then you load the OSF/1 unix server

    If there's an "OSF/1 unix server" on Digital UNIX, it appears to be a chunk of kernel-mode code (and I'm not even certain that it does put that stuff into a separate server process), so DU/Tru64U doesn't appear to be all that microkernelish.

  • Any GNOME super-hero can tell me how GNOME and HURD relate to each other?

    I'm not a "GNOME super-hero", but the way GNOME and the Hurd relate to one another is that:

    • the Hurd contains the part of the implementation of a UNIX API that needs to run in some form of "privileged" mode (kernel mode, or privileged servers running in user mode);
    • GNU libc (and perhaps some other libraries) contain(s) the rest of the "core UNIX" API;
    • the X libraries run atop that "core API" just as they do on other OSes providing that API (Linux distributions, BSDs, Solaris, Irix, Digital UNIX, HP-UX, AIX, blah blah blah);
    • GTK+ and GLib run atop the X library and the "core API", just as on other OSes providing that API;
    • GNOME runs atop all that stuff, just as on other OSes providing that API.

    I.e., GNOME relates to the Hurd the same way it relates to the kernel of other UNIX-flavored systems; with the possible exception of the small amount of stuff that needs to worry about which particular UNIX-flavored OS it's using, GNOME neither knows nor cares that the Hurd is running down at the bottom.

  • Mountain View, CA., September 22 - HURDOne, Inc., a leading-edge developer of HURD software, products and services, filed to raise $24 Million in an initial public offering, according to a Securities and Exchange Commission filing.

    The Mountain View, California-based Company offers online products, tools, news, and services for the HURD operating system and other "open source" communities, at its website http://www.hurdone.net/.

    Under its open source model, anyone may contribute to the software coding.

    The Company's principal product, HURDOne OS, provides a wide variety of server functions, including setting up a web, e-mail, file or print server, as well as using the computer as a general purpose desktop workstation to perform virtually any computer function. HURDOne OS will be available in English, Chinese, Japanese, German, Spanish and French.

    HURDOne plans to sell 3 million shares in the IPO and will have approximately 9.2 million shares outstanding once the sale is completed, according to the filing.

    The company was founded and is run by its President, Dr. One L. Inux, Jr., who has worked in senior engineering and technology positions at Hughes Aircraft Co., Teledyne Systems, Co., and California Institute of Technology, Jet Propulsion Laboratory. Dr. Inux was also Chief of Artificial Intelligence Branch, NASA Ames Research Center and organized Lockheed's Artificial Intelligence Center. He was the former founder and CEO of Alantic Macroelectronics and WebCIS.

    HURDOne applied to sell its shares on NASDAQ under the symbol "HURD".

    The company will be in its "Quiet Period" during SEC review of its filing, which is available on EDGAR.


    About HURDOne

    Our company provides world-class quality HURD software targeted to the server, workstation and home environments. It is distinguished by the unchallenged availability of applications and platform support, ease of installation and use, and technical support. The software is characterized by stability, security and usability. HURDOne expects to become the highest rated supplier of HURD solutions based on packaging, support, and capability worldwide.
  • Perhaps he meant that the Hurd is "object oriented" like NT is "object oriented." I.e., a componentized, layered system that uses standard interfaces and hides implementation details as much as possible.

    Of course, he could have stated it better.
  • I believe that the term GNU/Linux _IS_ "a total travesty of fairness" on the part of the Free Software Foundation. Linus pioneered a bazaar style development model that the FSF could still learn a thing or two from -- GNU projects are notorious for their difficulty of developer entrance (witness XEmacs vs. GNU Emacs, or the fact that Hurd is barely to 0.3 after ten years of development).

    Aside from that, the argument that Linux uses many GNU utilities and therefore should be called GNU/Linux just doesn't hold water. Perhaps I should say my Linux workstation is OpenGroup/Linux because I use X, or BSD/Linux because I use some BSD-derived system utilities. If RMS wants GNU/Linux, what argument does he have against names like GNU/BSD/OpenGroup/Linux? I think this pretty much reduces his arguments to absurdity.

    Don't get me wrong -- I greatly respect RMS's coding abilities and his vision for free software. It's just that his ideas don't _always_ match up with reality... :)

  • by Tim Behrendsen ( 89573 ) on Tuesday February 01, 2000 @04:59PM (#1313968)

    People DO say that about Linux, and it keeps many of them from it. Compatibility was what killed OS/2 (not marketing, by the way).

    A new platform rises when it solves problems that the old platform does not, and it does not make the transition too painful. Linux has risen on the server side because it provides solutions better than NT in a lot of areas. You'll note that Linux so far is a miserable failure for the client-side desktop, because of the lack of applications (read: compatibility). OS/2 was far and away technically superior to Win 3.1, but IBM couldn't give it away (they sold machines with both, and people deleted it in favor of 3.1). Again, compatibility rules.

    Hurd might have some advantages, but if they aren't huge advantages, not many are going to spend the effort to port applications, device drivers, etc. away from Linux when Linux works well enough. Or at the very least, Hurd becomes a second-class citizen waiting for a vendor to take pity (sound familiar, OS/2 and Mac users?).

    In fact, this is a good lesson for many Linux advocates. On the one hand, advocates decry people choosing the "technically inferior" Windows platform, yet we see the same processes at work that keep people on the "technically inferior" Linux rather than switching to Hurd.

    It's all about the applications and getting work done. The OS is just not that big a factor in how people choose platforms.

    If the Hurd guys are smart, they will put in a Linux compatibility layer everywhere it's required and make porting a simple matter of copying binaries.


    --

  • Will software need to be recompiled to run on the HURD, or is it binary compatible with Linux?

    I think it's a bit premature to say the HURD is 'theoretically better than Linux.' I mean, shit, you could say that Windows NT is 'theoretically better than Linux,' but I know which OS I'd rather run.

  • And let the pissing contest begin...

    C++ templates are Generic Programming.

    C++ templates are expanded at compile time to produce classes (see the toy sketch at the end of this comment).

    C++ templates, as I understand them, can be used for ANYTHING, not just the types of data held in containers. GJ (Generic Java) only applies to types.

    GJ isn't "expanded;" it's reduced. Once the type check has been done by the compiler, a cast is inserted in place of the generic type. This removes the explosive expansion which results from templates. One of the reasons this works is that everything in Java is a subtype of Object. You can't do that in C++, since it isn't a singly rooted hierarchy.

    If you'd like to do some actual research to go with your opinion, look at http://www.cs.bell-labs.com/who/wadler/pizza/gj/index.html

    Ok, once again, how big are the programs you are writing?

    Well, I've built a messaging/app-server in pure Java, and a GUI that sat on top of it (very cool; each user had multiple "rooms" with active objects whose code resided on the server. You could go into other rooms, if you had permission, and interact with the objects in there. All of the objects could communicate via a simple message-passing API. It was great.). Total size? Couldn't tell you off the top of my head (it's been 18 months since I last worked on it), but probably in the 200KLOC range for the server and the client.

    I just finished work as part of a team doing EJB development. There's probably about 100KLOC in that project. I've built client-side tools to work with the EJB server. That's probably another 10KLOC of Java.

    I also write small java projects on the side for my own personal happiness. If you're a fantasy baseball fan, I've got just the toy for you...

    As a grad school research project, I hacked the 1.0.2 JVM on Solaris to modify how it downloads class files to applets. I used to be able to recite the class file format and the size of the opcodes in my sleep.

    If you were to clearly explain to me why C++ is NOT becoming Java, I could probably explain to you why they are the same.

    You probably meant why Java is NOT becoming C++. That's the question I'll answer.

    The short answer is: someone thought about Java before they released it (AWT doesn't count; it was a hack written in a weekend when Java was retargeted to icky applets). Java has garbage collection. Java has a singly rooted hierarchy and single inheritance. Interfaces rock for separating implementation from, well, interface. Java doesn't have operator overloading, and never will. Java has built-in threading support. Java has built-in weak reference support. Java has excellent dynamic object code loading support. Since the language was built around these concepts (weak references were hidden in the original JDK releases, but they were there), they fit together nearly seamlessly.

    The Java language spec has added ONE new feature since its original release: inner classes. They are great. Anonymous inner classes are lambda expressions, something I missed dearly from Lisp.

    If generic programming comes to Java, it is going to work right, and not have nine billion different implementors each with their own incompatible ideas.

    For anything that you code in Java, I can reproduce it easily in C++ using the same semantics.

    Well, duh. They're both Turing-complete languages. Anything that can be coded in Java can (in theory) be done on a Commodore 64 in CBM BASIC or any other Turing Machine equivalent. Doesn't mean I'd want to do that.

    -jon
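    P.S. To make the "expanded at compile time" point concrete, here's a toy sketch (my own illustration; "Box" is a made-up class, not from any library): one class template in the source, and the compiler stamps out a separate class for each type it's instantiated at. GJ, by contrast, would type-check the generic code once and then erase the type parameter down to casts on Object.

    // toy_templates.cc -- hypothetical sketch of compile-time template expansion
    #include <iostream>

    // One source-level definition...
    template <typename T>
    class Box {
    public:
        explicit Box(T v) : value_(v) {}
        T value() const { return value_; }
    private:
        T value_;
    };

    int main()
    {
        // ...but the compiler generates two distinct classes below:
        // Box<int> and Box<double>, each with its own code and layout.
        Box<int>    i(42);
        Box<double> d(2.5);
        std::cout << i.value() << " " << d.value() << "\n";   // prints "42 2.5"
        return 0;
    }

    That expansion is exactly where the "explosive expansion" worry comes from: every instantiation is its own copy.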

  • "I didn't see the word technical in my post anywhere."

    Ahh, but you did say, "the winners in all those battles won because they were the superior product." If Windows wasn't technologically superior (the word is different from "technically"), how was it superior? The mad Gates-fu action? The strong-armed OEM deals? The product was in no way superior. You didn't even back up your claim. I just picked one way in which it wasn't superior.

    (hurt feelings mode on)
    "Try reading things before you post moron."

    I don't post moron, I post messages. Perhaps before you enter a sig like "Speak friend and enter" and say things like this, you should research a little grammatical rule -- commas around names used in direct address. And that's besides the fact that you are attacking me for pointing out genuine flaws in an article rather than addressing my points. Some friend you are. (hurt feelings mode off)

    "Besides no matter how well OS/2 was designed, the fact that it its first serveral iterations it didn't recognize 80% of the hardware out there ensured the markets rightous rejection of the product."

    OS/2 Warp 3 and 4 both recognised everything I had on my machine, and gave me the same amount of "driver support" as Windows 95 did back when I ran it (this was all way back in 1995/6, though, before my Linux days). You might've had some bad experiences, but don't let that stop you from looking at things objectively. The BeOS has the same "driver share" as OS/2 did in its day, comparatively.

    "Huh? I'd bet on the homospiens over the dinosaur any day of the week."

    Maybe. But I said Windows didn't win because it was superior. Did Homo sapiens win because they were superior? No, the dinosaurs were already dead, just like OS/2 was already sinking into its grave well before Windows became the "standard" for OEMs, etc.

    "Do you know what the word momentum means? Are you seriously saying that the rate of growth of Linux is less than that of windows over, say, the past 12 months? If so, you are both foolish and wrong."

    As others have noted, you have confused momentum with acceleration. Linux does not have the weight of numbers behind it right now, but it is growing. Linux, unlike Windows, does offer superior technology and flexibility. Once we have the momentum of hundreds of millions of users, Linux will have truly reached its potential.

    And don't be so rude :-)
    ---
  • Loyalty is a HUGE topic in the GNU/Linux world. Honestly, how many people that use this system aren't loyalists and purists? But should loyalty bring closed-mindedness (I'm not sure if that's a word)? I think not.

    Before I get into why HURD is a Good Thing(tm), you first must ask yourself this: what is it about Linux that you are most loyal to? Personally, I'd have to say that freedom is the first thing, followed by all my GNU tools (which most unices have, but GNU's stuff is more polished and featureful IMHO), then GNOME and E (not to say that any other desktop combo is better or worse; I'm just a sucker for eye candy), and my Linux apps. I'm sure that most people would agree here.

    Now, for all of you HURD bashers, how could HURD possibly take these things away from you? It is licensed the same way as Linux (GPL, duh), so freedom is still there. Obviously, gcc will work with it, because they are both part of GNU, making all of your apps available. We still haven't seen any disadvantages yet. HURD will merely add another choice of kernel, and survival of the fittest will eventually favor HURD or Linux -- maybe both? Though I highly doubt that. If HURD turns out to be better than Linux, all the Linux kernel hackers will more or less start hacking HURD and all will be well. This can only BENEFIT us as a community. Linux is dominant now, HURD may be later; this leads to quality computing, choice, freedom, and evolution. Where's the downfall?

    -- "We've got to get up, we've got to go, we've got to be one voice!" ~Pennywise

    -Bob
  • It's not OO because Linus Torvalds is one of the relative few who hasn't been suckered into creating bloated

    C++ code need not be much more bloated than C code. It may have a little more overhead, but current C++ compilers are getting close to the efficiency of C compilers. It really isn't enough to make a difference.

    inefficient
    How so? If you can do it in C, you can do it in C++ just as well. OO != inefficient. (See the toy sketch at the end of this comment.)

    buggy
    ???
    C++ is much more organized than C, making bugs far less likely. It's not like C++ is some brand new language that hasn't had time to be tested. It's been around for quite some time.

    hard-to-maintain
    Please at least read a book on C++/OOP before knocking it. You've convinced me with this one that you really don't know much about C++ or OOP in general. C++ is far easier to maintain than C. Its whole structure is designed for this very purpose, and it succeeds at this quite well.

    (Can you tell I feel strongly about this?)
    Yes, I just wish you'd research a subject before forming an opinion on it. OOP is not the answer to everything, but it does not suffer from the things you claim it does. Java might, but that's for other reasons entirely. If you worked with C++ for any length of time on any large-scale project, you might learn to appreciate it.
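    P.S. A toy sketch of the "OO != inefficient" point (my own made-up example, not a benchmark): a plain class with non-virtual, inlinable member functions gives the compiler essentially the same thing to chew on as the equivalent C struct plus functions.

    // counter.cc -- hypothetical sketch; the C++ class carries no hidden
    // per-object overhead compared to the C-style version next to it.
    #include <cstdio>

    // C style: a struct plus a free function.
    struct counter_c { int value; };
    void counter_c_bump(counter_c *c) { c->value++; }

    // C++ style: the same data and code, just grouped behind an interface.
    class Counter {
    public:
        Counter() : value_(0) {}
        void bump() { ++value_; }            // non-virtual, trivially inlined
        int value() const { return value_; }
    private:
        int value_;                          // sizeof(Counter) == sizeof(int) here
    };

    int main()
    {
        counter_c c = { 0 };
        counter_c_bump(&c);

        Counter cpp;
        cpp.bump();

        std::printf("%d %d\n", c.value, cpp.value());   // prints "1 1"
        return 0;
    }

    You only start paying for features like virtual dispatch when you actually ask for them.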
  • 'sides, they probably port relatively easily from Linux

    In fact, that's exactly how Hurd (or rather, GNUMach) got a lot of its device drivers.

    Daniel
  • In theory, software that doesn't do direct syscalls should work correctly when run on the Hurd -- in fact, you should get exactly the same results from compiling it on the Hurd as from compiling it on Linux. The reason is that most system-dependent stuff is actually accessed via a dynamically linked libc, which can provide the same interface on Linux and on the Hurd (in fact, on the Hurd, a lot of traditionally "kernely" things such as signals are done entirely in libc). Non-free software like Netscape and Quake wouldn't work, but by the time the Hurd is working, I suspect that Mozilla, Mnemonic, and Crystal Space will all have finally released a usable version (although it'll probably be a close shave for one or two of those :) ). (There's a toy sketch of the "only goes through libc" idea at the end of this comment.)

    In practice -- well, the current Debian GNU/Hurd snapshots use a slightly different libc than Debian GNU/Linux, and all in all I don't think anyone's bothered to try it yet. If it could be done, it could (in theory) save a lot of compiling and disk space... but compiling on the Hurd is also useful to test it (compilation is pretty intensive) and to verify that programs really are compilable on the Hurd; even if Debian doesn't precompile things for it, a user might want to.

    One other note: you could probably also write a syscall emulator. I'm not sure which level it'd go at, though (i.e., gnumach, a Hurd server, ...?)

    Daniel
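    P.S. Here's a contrived sketch of what "doesn't do direct syscalls" means (a made-up example; the file name is invented): everything below goes through libc -- uname(), getcwd(), printf() -- so the same source should build and behave the same on GNU/Linux and GNU/Hurd, where (as I understand it) libc turns those calls into messages to the Hurd servers instead of Linux syscalls.

    /* hello_portable.cc -- hypothetical sketch: only libc calls, no
       hand-rolled syscalls, so the source neither knows nor cares which
       kernel (or set of Hurd servers) sits underneath libc. */
    #include <cstdio>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main()
    {
        struct utsname u;
        if (uname(&u) == 0)                  // a libc call, not a raw syscall
            std::printf("Running on: %s %s\n", u.sysname, u.release);

        char cwd[4096];
        if (getcwd(cwd, sizeof(cwd)) != NULL)
            std::printf("Current directory: %s\n", cwd);

        return 0;
    }

    On GNU/Linux the same calls end up as Linux syscalls; either way, the source doesn't change.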
  • Sorry, I have to take issue with one thing..

    Anonymous inner classes are lambda expressions, something I missed dearly from Lisp.

    This is true, but they're lambda expressions the same way that you can make a real lambda expression in Python (I don't mean the builtin lambda but something that actually can create closures), i.e., a huge nuisance to type and a distraction from the control flow. Personally, I find it to be almost as bad as having to type the class outside the function (but not quite).

    Daniel
  • How many people...are not loyalists and purists?
    Me. That is, I can be a purist from time to time but I am certainly not a loyalist of /Linux/.

    While I agree with your point, I think that loyalty to an operating system is an immature position; the only things worth being loyal to are ideas.

    Daniel
  • You might not call it that, but you'd almost certainly call it (say) a wood and metal birdhouse.

    Personally, I don't mind when people say "Linux", since "GNU/Linux" is a mouthful for the same reason that "wood and metal birdhouse" is, but the full name *is* GNU/Linux, or even more properly the distribution: Debian GNU/Linux.

    (ok, not a perfect analogy, but you get the point I hope? If not, please rm -f /lib/libc.* /bin/{bash,sh,ls,mv,cp,ln,sed} /usr/bin/{tar,gzip} and then tell me how well your system is running ;-) )

    Daniel
  • by werdna ( 39029 ) on Tuesday February 01, 2000 @07:18PM (#1314020) Journal
    that as long as a software project is intelligently planned and developed, it really doesn't NEED to be OO. OO programming has some advantages to development (especially when it comes to designing a UI) but its main advantage is that if you use OO programming you MUST be more careful in your design (in order to really get any benefits OO programming gives you).

    That's what I used to think as well. However, a substantial contingent of the OO community argues violently against extended design at the outset of a project, relying instead upon an OO programming technique, refactoring, to "adjust" OO designs over time to facilitate change and reuse for additional functionality not contemplated in the prototype.

    The theory is to build VERY EARLY ON "the simplest thing that could possibly work," exploiting refactoring and aggressive (most tests written before most code) regression testing to evolve the prototype to a superior design over time. Fowler's book, "Refactoring," and Kent Beck's "Extreme Programming" really opened my eyes to a new way of thinking about code.

    Having experimented with XP techniques on recent projects, albeit in the small, I have discovered that there is much more truth than hype in Beck's writing. While I haven't "gone to the dark side" completely yet, I now recognize that there are in fact some truly fundamental, and exciting, differences between OOPing and traditional hacking.

    On one point I will agree, however: good OOPing requires discipline to attain its benefits, perhaps more discipline than can be imposed on a large decentralized project such as an Open Source OS. You can write grotty code, of course, in any programming language.
  • Maybe so, but if everyone thought that way, there wouldn't be any new operating systems and we'd be stuck with backwards-compatible pieces of garbage.

    Exactly correct: How did Windows get to be the most popular OS in the world? DOS compatibility, baby. They gave people an upgrade path, and even now, people want their DOS apps to work. Heck, I even run a DOS app every now and then (the "Sherlock" game rules! The newer Windows version blows). In fact, my father-in-law still uses a DOS program to store his golf games.

    Say what you want about Microsoft, but that's the one thing they have understood better than anyone else: "he who is most compatible wins." Does it prevent a lot of progress from taking place? You bet; that's probably the #1 reason Windows is as unstable as it is. They can't implement strong memory protection a la NT without breaking a lot of applications. But guess what? Consumers would rather have their application investment continue to work than not have a "backwards-compatible piece of garbage."


    --

  • As for drivers, well, I've never cared all that much about multimedia stuff. As long as I have a semi-adequate video card (my s3_virge is fine for me) and a semi-adequate sound card, I'm happy enough. I suspect that most of the people using HURD won't be terribly interested in this sort of thing either.

    Come on. How about network cards? Wanna run HURD on your laptop? (Those are some seriously wacky drivers.) PCMCIA support? How about wireless comms? How about USB support? Scanners? Cameras? ISDN? Tape drives? Good god, how about 3D card support for a decent game of Quake? Or, of lesser importance, CAD?

    Don't make the mistake of judging the needs of others by your own (rather boring) needs (I mean, a Virge? Yuck).


    --

  • What, what? A mention on /. of a programming language that's not descended from von Neumann ideas? Who knew. =)

    Out of curiosity, have any projects of reasonable size been implemented in a modern FP such as Haskell? (I ask since I'm taking CS492 right now which is a seminar class on Haskell, and it's a total blast.)

    For those who are unfamiliar with Haskell and other functional programming languages, FPs constitute a different way of thinking about programming and system architecture; Haskell features strong typing, higher-order functions (functions can be passed to and returned from other functions), rank-2 polymorphism, lazy evaluation, curried functions (pass an insufficient number of args and get back a function that works with those args preset and takes the remaining missing args) and other such COOL STUFF (tm). And as it's built on functions, there is no internal state, no variables, pointers, etc., which is hard to get used to but eliminates most of the nasty crash-prone parts of C/C++/etc.

    Here's something nasty to twist your mind around for C programmers: quicksort in four lines of code that works on any object supporting inequality operators.

    qsort [] = []
    qsort (x:xs) = (qsort ls) ++ [x] ++ (qsort rs) where
        ls = [y | y <- xs, y < x]
        rs = [y | y <- xs, y >= x]

    This reads: quicksorting an empty list returns an empty list. Otherwise, take everything after the first element (xs), create two lists of everything above and below the first, and concatenate them together with the first element in between. This also features another neato part of Haskell: pattern matching on function parameters to determine behavior (for both speed enhancements and for design reasons).
  • by pb ( 1020 ) on Tuesday February 01, 2000 @10:12PM (#1314048)
    Your post was wonderful and fair until you mentioned the incredibly stupid "GNU/[blah blah blah]" flamewar.

    First, some "facts" about Linux. Linus didn't even want to call it Linux, or GNU/Linux, or Bob, or anything. In his mind, it was probably originally called "386 protected mode assembly tutorial," and it eventually grew into an OS kernel. He did mention that he wanted to call it Freax, or something silly like that. The guy running the FTP archive said, "Nah, that's a dumb name. His name is Linus; I'll call it Linux." This is all paraphrasing what I remember about the subject -- feel free to post more detailed accounts of this story if you wish.

    Second, once development really got going in C, and Linus managed to get gcc running under Linux, he was grateful enough that he GPL'ed the Linux kernel. Linux is essentially an excellent GPL'ed, Unix-looking OS kernel, which can be used to fulfill the final bit of the GNU project. Calling Linux GNU/Linux makes about as much sense as calling GNU GNU/Linux.

    Linux can also be used with many other free and commercial packages, but is not dependent on them, as it is an OS kernel. If you wanted to, you could probably run iBCS, and use FreeBSD or SCO or Solaris's system tools. Most people would rather just compile the GNU ones, but this is a distribution issue, not a kernel issue. Even so, we don't name the kernel or the distribution by the name of the packages within. Otherwise, the full and accurate name of my modified Redhat 6.0 distribution would consist of about 494 separate names, not counting anything I compiled myself. That's a long name, and unless you're writing the new Sumerian Unix epic poem, I don't recommend doing so.

    Finally, if you're stupid or arrogant enough to call the OS kernel GNU/Linux, or the distribution "a GNU/Linux system", why stop there? How about "GNU/RedHat 6.1", even though the GNU project has no real corporate association with RedHat? (they didn't merge or anything, guys)

    The GPL cuts both ways. We can use your software, and we'll give you your source, but the GPL doesn't include any "advertising clause". Is this what you want, RMS? The good old BSD license provisions to protect you?

    How about a new license, the JPL, for "Jealous Public License", requiring any program or collection of programs to clearly state all the programs or projects involved in its name, regardless of how stupid, inane, or non-marketable the resulting name sounds? If using the GPL for your software isn't enough for you, does that sound inane enough for you, RMS? (*please* don't take this seriously. I *beg* of you.)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • If you want portability, stick to JDK 1.1.8. It has very few bugs and interoperability issues left and is available on all major platforms in a workable form. Of course, if you want to use the 1.2 API (and why wouldn't you?), you'll find that Linux support is a bit flaky right now.

    I can't believe that you're still talking about JDK 1.0.2. Where have you been the past five years? When people discuss Linux, they get flamed if they do not take the latest unstable kernel build into account, and when we discuss Java, the oldest available version suddenly becomes the norm. Sounds like a double standard to me.

    BTW, have you seen the Jazilla project? Remarkably stable for a 0.2 version, and remarkably functional considering they have not yet started to redevelop the renderer. Also remarkable progress considering there are only 10 registered developers on the project page.

  • They can't implement strong memory protection a la NT without breaking a lot of applications


    Sorry? NT has just as strong memory protection as Unix.
    What you're saying is just as 'ignorant' as when Linus said that a stray pointer in Windows would crash Windows. Windows NT is NOT MacOS/AmigaOS, etc.
    You can provide full memory protection and still have DOS compatibility (e.g., VMware). NT does this in a similar way (DOS is supported under emulation).
  • Yes, but I was defending NT's memory protection :).
    I got the impression that it was thought NT didn't have very good memory protection so as to keep DOS compatibility.

    There's also nothing stopping Microsoft from emulating standard hardware and marshalling it to the HAL (like VMware does), except that it's not considered as important. I don't think Microsoft does the DOS emulation for NT themselves, though; they got the guys who did SoftPC to do it.
  • Didn't we already have this discussion a while ago? Doesn't this page [slashdot.org] ring a bell? (I even won quite a few karma points at the time. Maybe I should cut'n'paste my comments of the time to gain some more; but I won't 'coz I'm too honest.)

    Anyway, this explains why we're suddenly seeing an unusual amount of traffic on the help-hurd mailing list.

  • Thanks pseudonu; this clears up some confusion for me.
  • How many of those page views are people reloading trying to get 'First Post'?
  • ...GNU/Linux...

    ...Hurd is Object Oriented, unlike Linux, so it may be a superior system in the long run...

    (Score: -1, Flamebait, Troll)

  • "True" microkernel OSes have a problem that's generally swept under the rug of theory: task switches cost performance. Dozens, if not hundreds, of clock cycles per switch, and that's not counting the cost of TLB and cache thrashing. The only way to offset this is by doing fewer task switches -- i.e., by "monolithizing" the kernel into larger and larger chunks. An example of this happening is NT4 bringing the graphics primitives into the kernel for performance reasons. In extremis you get the AmigaOS, which was a "true" microkernel design and got good performance, but at the cost of having no memory protection at all; task switches became little more expensive than procedure calls.

    "True" monolithic kernels also have problems -- mainly lack of configurability, the need to recompile the entire kernel to add or remove a driver or reconfigure anything, etc. To overcome these limitations, such kernels need to undergo "microkernelization" -- an example of this happening is Linux getting loadable modules.

    So the best operating systems -- those that strike the best balance between configurability and performance -- are the "mixed breed" OSes: either monolithic kernels with microkernel-like features, or microkernels with monolithic-like features. At this point, the difference between "microkernels" and "monolithic kernels" is the difference between Coke and Pepsi.

    Brian
  • Linus originally called his operating system "Freax". He made it because DOS and Windows were not powerful or stable enough for him and UNIX was too expensive.

    Chris Hagar
  • Re. Haskell: Well, kinda. Haskell uses a static typing system, wherein you can declare symbol types as well as definitions. The first line can be read as "'factors' is of type 'mapping (function) from Integer to list of Integers'". The second line defines the lambda expression to which the symbol 'factors' is bound; in fact, it's syntactic sugar for "factors = \large-prime -> [large-prime, 1]" (read "\" as "lambda"). It's interesting to note that Haskell implements lists as monads, and that "[large-prime, 1]" is itself syntactic sugar for "large-prime:(1:[])" (wherein ":" is the cons operator and "[]" is the empty list).

    Re. Scheme: 'values' is just a special form for returning multiple values. No biggie.

    Re. Perl: when a Perl subroutine is called, its arguments are pushed into the argument stack, which is accessible through the array '@_' (array variables in Perl start with '@'). shift is a function that treats an array as a FIFO queue, shifting the first element off and returning its value. Perl has many "shortcut" versions of functions, and this is one; when called without any arguments, shift is applied to the array '@_'.

    Happy hacking!
  • No, of course you wouldn't call it GNU/NT. It's not the fact that linux uses a lot of bundled GNU utilities or that GNU is all over the place in linux. That's why I added the comment in my post, "If you look at linux in terms of the GNU framework".

    The GNU project was about creating an entire UNIX-like system, which consists of tools, a compiler, and all the rest of that jazz. The kernel was the last piece they needed, and when Linux came along as a GPL'd kernel -- sure, it was an independent project from GNU, but the people at GNU looked at it as the last piece of the puzzle sliding into place, fitting perfectly with all the other pieces.

    So why not GNU/NT? Because NT isn't GPL'd and isn't UNIX-like; you could distribute it with GPL'd utilities and use it that way, but that was never the point of NT, and NT is about corporate profits, not about freedom. That's why you wouldn't call it GNU/NT, no matter how many GPL'd utilities you used with it.
