Linux Software

The New Linux Myth Dispeller 155

TillmanJ writes: "Just a quick note to let everyone know that the New Linux Myth Dispeller is online at http://www.eruditum.org/linux/myths/myth-dispeller.html It is not, however, ready for prime time, but is usable. If anyone has anything to add/correct/bitch about, send me some email. In particular, I would like to work with some non-English-speaking folks to translate it into whatever languages we can" Useful for clearing up the misconceptions of PHBs and other folks.
This discussion has been archived. No new comments can be posted.

The NEW Linux Myth Dispeller

Comments Filter:
  • How does 'shipped with 36 thousand known bugs' = rock solid?

    Just curious...
    NecroPuppy
  • by Junks Jerzey ( 54586 ) on Saturday August 19, 2000 @09:12AM (#842777)
I think this is trying too hard. The answers sound good in principle, but they don't tell the whole story. In fact, I think the answer to many of the questions should be "Well, that's actually kinda true," followed by an explanation of why it is so and how to better deal with that issue in the future.
  • I don't know. The whole thing was not written very well. It could easily be interpreted in different ways.
  • Why not see if the average person can get past a Linux installation?

    Well, because that's not what's hard about Linux. If they can't install Linux, I suspect they can't cut-and-paste in a word processor either. The initial install with distributions like Red Hat, Mandrake, or Caldera is a piece of cake, and yes, anyone can do it. It's no harder than installing other OSes, because it's automated.

    In my experience, the hair loss begins right after the initial install, when the user starts installing additional packages that didn't come with the distribution. RPMs, which are allegedly easier than source tarballs, are a major pain in the ass at this point. Damn those dependencies and version conflicts!

    That is what I want to see Joe Schmoe do, and my bet is that, currently, Joe will have a lot of trouble. I know I did, and I'm a computer dude. I couldn't even install GLX in order to play a game until I gave up on RPMs and fell back to ./configure, make, make install.


  • I wasn't going to write more, but I will.. this document is appalling!

    It claims that a GPF (i.e. a segfault) can only be caused by hardware failure. A GPF means a segfault in the kernel. This means the kernel had bad code. Capiche?

    Now, it calls the linux kernel "small".
    375,056  KERNEL32.DLL
    715,260  /boot/vmlinuz-2.0.36  (Aug 1 23:54)

    Nearly twice the size of the NT kernel. Yay for small.

    Windows is bloated because a *full development environment* is five times the disk space of a *text editor*? Give me a break.

    It lauds Linux as being POSIX compliant, but Windows 2000 is POSIX compliant too. (And Win NT with Interix installed.)

    The other points this FAQ makes (re security, history, Y2K compliance, etc.) are all obvious and only worth bothering with if you are the sort who HAS TO refute a lamer's argument, rather than just call them lame and ignore them.

  • Shalom, Mr. Anonymous Coward. I've installed Windows, OS/2, and Linux (several distros). BTW: Has anyone else noticed how these white supremacists are starting to look just a bit inbred?
  • There is NO WAY your average user could get past a Windows installation. I've installed both Windows and Red Hat on PCs, and Red Hat is marginally easier. Neither is ideal for novice users.

    Installation is not the acid test for the desktop, and Linux can be pre-installed these days.
  • Well at over eighty bucks a pop retail there's certainly money to be made from the Red Hat boxed distro.

    For bundling purposes, though, how much of a discount does Dell give you for Linux being loaded? None. So which box is more profitable, the Linux system or the Windows one? I don't know; the support contract may cost Dell something, and Windows is peanuts for Dell, but there's at least more to the economics than you're suggesting.
  • Windows 95 is NOT built on top of DOS. It does contain a lot of cut-and-paste Win16 code, but it does not run on top of DOS.

    However, how many Win16 apps does one run? Secondly, you don't seem to quite understand how Windows multitasking works. Win16 apps are run inside a virtual machine, which is itself a Win32 application. The system preemptively multitasks all Win32 applications. The Win16 VM then cooperatively multitasks all Win16 applications. Thus the illusion doesn't just "vanish" when you run a 16-bit application. All your 32-bit applications continue to be preemptively multitasked; it's just that your 16-bit applications are cooperatively multitasked (against each other). Thus, a Win32 application cannot hog the processor, and if a Win16 application does, it won't hog the machine, just the virtual machine. Since the virtual machine is a 32-bit application, it can be preempted, so the result is that a Win16 application can only hog the proc from OTHER Win16 applications, not Win32 applications.
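    The two-level scheduling described above can be sketched as a toy tick-based simulation (purely illustrative: the task names and tick counts are invented, and this is not how Windows actually implements it):

```python
# Tick-based sketch of the scheduling model described above (a toy model,
# not real Windows internals): Win32 tasks are preempted every tick, while
# Win16 tasks inside the VM only switch when they yield.

def simulate(ticks):
    win32_tasks = ["word32", "excel32"]   # preemptively scheduled
    win16_tasks = ["oldapp", "hog16"]     # cooperatively scheduled in the VM
    runtime = {t: 0 for t in win32_tasks + win16_tasks}
    vm_current = 1  # index into win16_tasks; "hog16" never yields

    schedule = win32_tasks + ["win16vm"]  # the VM is just another Win32 task
    for tick in range(ticks):
        task = schedule[tick % len(schedule)]  # round-robin preemption
        if task == "win16vm":
            # Inside the VM: cooperative, so the non-yielding task keeps the slot.
            runtime[win16_tasks[vm_current]] += 1
        else:
            runtime[task] += 1
    return runtime

r = simulate(30)
# The Win32 tasks still get their full share; only "oldapp" is starved,
# because "hog16" hogs the VM, not the machine.
print(r["word32"], r["excel32"], r["hog16"], r["oldapp"])
```

    Under this model the hogging 16-bit task consumes every VM time slice, but the VM itself is still preempted like any other 32-bit task, which is exactly the claim above.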
  • No.

    Dispelling myths is not the same as casting FUD about another product. If you defend a product honestly or point out its merits, it ain't FUD.

    If, on the other hand, you were to spread half-truths about a competitive product and imply bad things will happen when you use it, that would be spreading FUD.
  • The average person probably can't install Windows. Luckily for them, the average person only needs to use it.
  • Let's get this straight: telling people Linux is good at some stuff and correcting some errors is not FUD.

    FUD is when you spread misrepresentations to cast doubt on a competing product. In general I don't think this is widely done by the Linux community.

    I've seen FUD spread by Microsoft in other areas; it ain't pretty, and it's obvious what's going on to the well-informed.

    If someone did this in the Linux community there would be a chorus of objections correcting the FUD. Heck, just look at what's happened in this thread. Honest comparisons are not FUD.
  • you have to have GNOME+KDE(both so you have full compatibility)+Mozilla+X+kernel. Not to mention the multiple versions of glibc and all the additional (often redundant) libraries all the apps use.

    I've never, ever had to have multiple glibc versions running on a machine. Also, you only need the base libraries for GNOME and KDE to actually run the apps - most of the stuff is only useful if you're using the full desktop.

    In terms of memory usage, Linux blows NT4 out of the water (a bad thing) and is quite close to Windows 2000's bloat.

    Not in my experience. Obviously, this kind of article is just going to lead to a flame fest all around, but I've run RH 6.1 on a 25 MHz 486 with 16 megs of RAM - WITH X+KDE. And it wasn't noticeably slower than the Win 3.1 it replaced. Try running NT4 on something like that - it ran poorly enough on my 350 MHz P-II with 64 megs.

    Linux DOESN'T take full advantage of hardware.

    Linux doesn't support DirectX, and thus automatically lacks support for a lot of hardware features that are in DirectX-compliant hardware. The main reason is that transparent usage of hardware was a major design consideration for DirectX. It is based on the concept of supporting many different hardware features, having all applications use them, and then emulating those not supported by hardware. When the hardware supports new features, all apps and the OS automatically take advantage of them. Also, X doesn't have as complete support for many graphics operations that are possible in DirectX.


    What the HELL are you talking about? DirectX is a development API. Vendors can also write drivers which allow DirectX to use the full abilities of their hardware - just as is done with every other graphics API, such as OpenGL or Glide, both of which run on Linux fine. My games run much faster when I use Glide than DirectX - if DirectX somehow magically makes hardware faster, how do you explain that?

  • So what hardware does RedHat support that isn't supported on any other distribution? What, none? The only reason for discrepancies in hardware support is the age of the distribution. RedHat puts out a new version like clockwork, and newer technologies make their way in faster. As an example, Debian releases much more slowly, so it may appear to support less hardware, but after installing newer versions of the shipping software it runs on everything RedHat will.

    As a little test I sat a user down with a machine, the RedHat install manual, and a CD, and asked them to install it. I came back the next day and found the system working completely.

    What really separates distributions are the tools that they ship for configuring the system. RedHat, Mandrake and Corel focus on adding configuration tools, while Slackware doesn't focus as heavily on this. Systems that don't focus on easy end-user tools aren't much of an issue, because most users aren't going to start with them. If they aren't satisfied using RedHat they'll try other distributions. If they are satisfied they won't switch.
    treke

  • I don't read Windows hype, I don't read BE OS's hype, and aside from /., I don't read Linux hype. I do read Mac OSX hype, however, :-), and I've gotta say the landscape is gonna be more than a tad different after January 2001, when OSX(ten) ships. At the very least, there will be an honest-to-god modern consumer alternative to Windows and the fractured-ness that is Linux that even a /.-er can love.
  • I notice nothing in the "dispeller" about gaming sucking on Linux. This is why I don't bother with it anymore: the games just aren't there.
  • Aside from some dated information and inaccuracies, the problem I've had with the LMD for quite some time is that it's couched as a double negative: "Linux isn't evil foo", where foo is some undesirable characteristic. The entire flavor of the document would change if language were changed from negative to neutral or even positive. Otherwise it has this "are you still beating your wife?" flavor. The LMD would answer that question with a topic "Linux is no longer beating its wife".

    Just as an example, the document would have an entirely different flavor if the system headings under "4. Systems Myths" were:

    1. Multitasking
    2. System stability
    3. OS footprint
    4. Ease of networking
    5. System security
    6. Unix "run-alike"
    7. Unix diversity
    8. Linux and innovation
    9. Linux and multiple development tracks
    10. Y2K compliance
    11. Data security

    Suggestion is a powerful tool. The document should suggest that Linux is the cat's pyjamas. People will tend to believe.

    What part of "Gestalt" don't you understand?

  • by sillysally ( 193936 ) on Saturday August 19, 2000 @09:15AM (#842793)
    misuse of PHB, I think, a TLA that is mostly misused. In Dilbert, the pointy haired boss is either clueless and impotent, or arbitrary and capricious, but in neither case would "the facts" about linux have any effect on the outcome of his decisions.

    "suits" or "powers that be", or even Grand Poobahs... PHB does not work.

  • by nd ( 20186 ) <nacase AT gmail DOT com> on Saturday August 19, 2000 @09:16AM (#842794) Homepage
    This document is a mixed bag, with some parts being better than others. Here are some issues I found with it:

    "After getting a Linux CD, you'll probably be up and running within an hour. In the olden days, this has been true, and Linux can be made hard to install."

    Not to be a grammar nazi here (we have someone at slashdot filling that position already), but "this" is unclear, and may confuse some people about the facts, as it implies that being up and running within an hour was only true in the olden days.

    "Linux is well over twice as fast as NT"

    Generalizations like this should have no place in a document of this type, especially considering you don't back this statement up with any data. Statements like these should be what your document is fighting against. Specifically, twice as fast at what? While it's probably true that Linux 2.4 is significantly better than NT at several tasks, there are definitely situations where NT beats Linux 2.2.

    Haven't finished reading the rest of it yet.

  • You can't expect a Linux zealot to speak negatively of the GNU/Linux OS any more than you can ask Bill Gates to speak negatively about Microsoft.

    Frankly, I think the best answer to "Linux is difficult to install" is not "Linux is easy to install" which is just another opinion. Things like the possibility of FTP/NFS installs, bootable CDs, and the fact that once something's installed you don't have to look at it ever again are things that can be mentioned. It doesn't have to make any anti-linux statements, but telling the full story is a very good idea.

    Hopefully when this site becomes ready for prime-time, its contents will be based more on facts than opinions, even if it stays pro-linux.

  • If you were going to buy soap, which is more important to your decision:
    -who it is recommended by
    -results of scientific testing of effectiveness

    The problem is a myth page so loaded with technical language that it goes over the heads of most of the people it is trying to reach - the people who don't want to spend time becoming educated consumers.
  • Why is it pro-linux FUD is always considered correct whereas anti-linux FUD is wrong? (I know the answer, this is /.).

    I think what we need is a non-biased version of a myths website. Something that will dispel anti-Linux and anti-Windows myths so people can make an informed decision about which operating system they want to use. If people were to see a site admitting that an OS has its problems, they'd probably find it easier to trust.

  • No it doesn't. Compaq bought DEC a long time ago, the proper term now is "Compaq Alpha".
  • If this document is meant to be a myth dispeller, it should also include sections that address Linux's weaknesses. The released version of the Linux kernel still has SMP scalability problems, for example, and boasting about 2.4.x when it's not released is no excuse. It's important to note where Linux's strengths and weaknesses are. Linux can beat an NT machine as a server with only 64MB of RAM, but once you start throwing gigabytes worth of RAM things start to go the other way, with NT beating out Linux instead.

    Another place where Linux is weak is on desktop systems, due largely to the fact that whereas Windows and Macintosh both have very stringent UI guidelines, Linux has virtually none. KDE and Gnome are trying to change this, although I feel the somewhat hostile environment between the two is doing more to make things worse than to improve them. As a developer, I should be able to choose either GTK+ or QT and have the resultant program perform similarly on either platform. And as a user, I should be able to expect that any program I install will try to act reasonably with the preferences I have set in my "preferred" desktop environment. For example, I'd love to see a unification of UI "themes", and a toolkit-independent preferences system where both KDE and Gnome settings are stored via their respective control centres, and shared between both where applicable.

    Portions of this document are no better than the FUD it attempts to refute, IMO, due to exaggerations on the behalf of the author. There's no harm in admitting weakness, in fact, it usually shows that you're trying to be honest and unbiased.

  • The threading system used under WinNT is not fully preemptive. Processes/Threads may prevent the operating system from giving control to another process by changing their priority. This reallocates their time quantum. Do it often enough and a program can effectively lock the system up. Here's the quote from www.sysinternals.com/tips.htm [sysinternals.com]
    In NT, as with most time-sharing operating systems, threads run in turns called quantums. Normally, a thread executes until its quantum runs out. The next time it is scheduled it starts with a full quantum. However, in NT a thread also gets its quantum refreshed every time its thread or process priority is set. This means that a thread can reset its quantum by calling SetThreadPriority (without changing its priority) before its turn runs out. If it continues to do this it will effectively have an infinite quantum. Why does NT do this? Its not clear, but it appears to be a bug.

    Hmmm...any ideas on how this could be abused?
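    One way to picture the abuse: a toy scheduler model of the quantum-refresh behaviour quoted above, where one thread "calls SetThreadPriority" just before its turn ends. The quantum length and tick counts are invented; only the starvation pattern matters.

```python
# Toy model of the quantum-refresh bug described in the sysinternals quote.
# Numbers are made up; the point is only the starvation pattern.

QUANTUM = 6  # ticks per turn

def run(refresh_cheats, ticks):
    runtime = {"cheater": 0, "victim": 0}
    current, remaining = "cheater", QUANTUM
    for _ in range(ticks):
        runtime[current] += 1
        remaining -= 1
        # The cheater resets its priority just before its quantum expires,
        # which (per the quote) refreshes the quantum.
        if refresh_cheats and current == "cheater" and remaining == 1:
            remaining = QUANTUM
        if remaining == 0:  # quantum expired: switch threads
            current = "victim" if current == "cheater" else "cheater"
            remaining = QUANTUM
    return runtime

fair = run(False, 60)    # normal scheduling: roughly even split
rigged = run(True, 60)   # with the refresh trick: the victim never runs
print(fair, rigged)
```

    With the refresh trick on, the cheating thread's quantum never reaches zero, so a same-priority peer is starved indefinitely: effectively an infinite quantum, as the quote says.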
  • Anon writes: "However, things like driver updates are (usually) just a double-click (and reboot) under Windows."

    This is an area Linux needs to improve in: more automation of tricky tasks, which means standardising on ways to do things. The much-reviled Windows registry is actually a great idea, and I quite like the rpm database. If all drivers came as RPMs with an install script that set everything up, installing would be a matter of clicking on Driver.rpm, which opens kpackage. Click on install, and when the script runs you might need to enter the root password into the su-wrapper dialog box. The only difference here between Win and Lin is the password thing (and who would want an unauthorised user installing a driver?). In most cases Linux would not even require a reboot.

    This example assumes kpackage or something similar is installed and set up as the default app for the .rpm extension; on most distros it is. I don't know about .deb, but I guess it would be similar. The advantage of the rpm database comes at uninstall time - no deleting of "DLL"s that other progs need.
  • Right now, with my comments cut off at somewhere between 3 and 5 points, I see five or six postings correcting misinformation in the document, a couple of very intelligent and insightful ones about threading, and the rest about how bad the "Myth Dispeller" is and how it is FUD. Constructive help is always better than mean-spirited criticism.
  • Unfortunately, more often than not, Compaq is associated with mass-produced consumer PCs à la the Presario.

    DEC is associated with things like the PDPs, VAXen, and VMS (Ok, maybe we wanna forget that one).

    Which would you rather associate with your 64bit CPU?
  • I don't honestly think that the average user would even care if Linux came with source code or not. Think about it here for a second. What good is the source code to someone who doesn't have a programming background? Sure, the code is neat to poke through, and may give you an inkling of an idea about how the system runs, but what good is it to the average user? We have to remember that not every Linux user is a programmer.

    And you have to remember that it doesn't matter whether every user is a programmer. Not every user needs to contribute for the system, as a whole, to work. I'm certain you don't believe that everybody who uses Linux has contributed code for the kernel. But has every user benefited from the stability that comes from code and bugfixes contributed by others because it was free? Of course. Likewise, you don't need to convince most end-users that free software is good - you just need to convince them that people are actively contributing to development. Point them towards the kernel development mailing list. Show them the Sun press releases. Tell them about the contributions IBM is making. Explain how this came about because anybody can contribute freely. They don't need to contribute at all.

  • 4.1 Linux multitasks only as well as Windows or Mac

    Microsoft and Apple would have you believe that their operating systems multitask (run more than one program at once). Using the term loosely, they do. Using the term strictly, they task-switch only. Although more than one program may be opened, you may notice that sometimes the system stops responding -- perhaps while mounting (detecting) a CD, or scanning a floppy drive.

    Um, every Windows since 3.1 has had preemptive multitasking.
  • "It's core is Unix BSD"? OS X's core is Mach. The BSD portion sits on top of that. NT's design is largely the same, where the Win32 subsystem sits on top of the NT kernel. A lot was borrowed from the Mach system.

    Not quite... NT3.51 had a clean, microkernel based design. For marketing based performance reasons, 4.0 and up have a bastardized design where a lot of higher level functions bypass the microkernel and operate straight on hardware.
  • by Zagato-sama ( 79044 ) on Saturday August 19, 2000 @08:38AM (#842807) Homepage
    It's interesting to note that all the Linux Myths listed on the page are negative in nature. This is essentially the reverse of Microsoft's "Linux Myths" website. How about a website which tells people the full story?
  • Would the author be able to call himself a Linux Zealot if he didn't read /.?
  • Disclaimer: there is no detail about the contact procedure for amending the Linux Myth Dispeller document, so I'm posting this here. The author's email address is included, but the implication is that this is a much more far-reaching project than one man's quick hack, so I figured I'd put it up here, both for discussion and information.

    In section 4.4, talking about the relative hard drive merits of the differing OSes, Visual C++ is quoted as taking somewhere around "100 Megs". Installing it off Microsoft Visual Studio 6 Enterprise Edition last week, it took around 330 megabytes. Yes, just for MSVC++. J++ had a CD all of its own; I didn't even dare go there.

    That may not even be the worst offender, though. Symantec Cafe for Java (Database Edition) occupies around 580M on my hard drive, and I think it was just a typical installation.

    Fross
  • Gnome certainly is (serious competition to the Mac or Windows)

    This is definitely a Linux myth!
  • Also, in a document like this, you probably shouldn't keep referring to yourself (e.g, "I like ...")

    The GUI/desktop myth section needs a lot of cleaning up.

    Just more constructive criticism :)
  • People don't need to know the minimum size of a kernel in bytes in order to appreciate an operating system. This isn't about technical terms, it's about opinions vs facts. If someone has a computer and is in a position to determine what operating system will go on it, chances are the person will have at least some knowledge of computers. Technical discussions aren't necessary here.

  • Okay, it's improved, but I still find most X desktops clunky and slow.

    Firstly, it must be pointed out that MS Office data formats cannot be treated as "standards" under any reasonable definition of the word "standard."

    Almost 100% of offices use it, and require it. It's a de facto standard, but still a standard.

    "8.1 Linux is PC exclusive"

    Just a nitpick. Looking at the debunking, this should say Linux is x86-exclusive.
  • 64-bit time holding will turn over at roughly the end of time, eh? Maybe we can use this to forecast the coming apocalypse, or the day when Windows doesn't bite anymore... perhaps we could even calculate the amount of time it will take me to get enough karma to moderate...
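    Back-of-the-envelope, assuming a signed 64-bit counter of seconds (as in a 64-bit time_t), the rollover really is comfortably past any apocalypse worth forecasting:

```python
# Rough estimate of when a signed 64-bit counter of seconds overflows,
# assuming it counts seconds like a 64-bit time_t.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year, close enough here
years = (2**63 - 1) / SECONDS_PER_YEAR  # roughly 2.9e11 years
print(f"about {years / 1e9:.0f} billion years")
```

    Around 292 billion years, i.e. about twenty times the current age of the universe.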
  • That site is far too pro-Linux.

    To be taken seriously, it will have to adequately address the shortcomings of Linux. I'm very pro-Linux, and even I stopped buying the PR on that site when I read about how Linux has more software than Windows.

    A more balanced view would be much better than that of a blind advocate.


  • No, I didn't perform the setup. As I said, a friend of mine performed this experiment using another friend of his. My friend did the initial setup, but that doesn't really matter, does it? I'm not talking about the installation of the OS here. I'm talking about desktop usability. And in this arena, regardless of what MS or Linus or anyone says, Linux is ready for the desktop.

    -Peter
  • by elendril ( 15418 ) on Saturday August 19, 2000 @09:27AM (#842817) Homepage
    • Apple claims the new OS will be partially preemptive

    Mac OS X uses fully preemptive multitasking. Its core is Unix BSD.

    • Linux can nicely support fairly sophisticated functions, including firewalling, IP masquerading, and, overall, the creation of network environments that will indeed be very complex.
      MacOS and Windows 9x simply don't support this.

    Mac OS Open Transport is one of the most advanced networking stacks, and supports almost everything you can do on Linux. It's based upon Mentat Portable Streams (used on Novell NetWare, Hewlett-Packard HP-UX, IBM AIX, Compaq Tru64 UNIX...)

    • All too often, on Windows or MacOS, if a program crashes, this can readily pull down the rest of the system around it, as programs often are tightly integrated into the GUI which is tightly integrated into the OS kernel.

    Most apps don't take Linux down too badly, but at least on the Linux boxes I used, X had a tendency to crash and take the whole system with it...

    Linux advocacy is nice. Impartiality would be nicer. At least it would help differentiate Linux from Apple/Microsoft, whose advocacy is often far from being objective...

    Note for the author: how long did it take you to set up a PPP connection with the Debian ppp-config utility? Not more than 3 minutes? Or about 5 minutes? At least be consistent.

  • an honest-to-god modern consumer alternative

    With a closed up tight hardware architecture. It really doesn't matter how spiffy Steve's new toy is. So long as Apple and crew continue to dictate what hardware will be used I will continue to wish for their downfall.
  • "Mac OS X uses fully preemptive multitasking. Its core is Unix BSD." Not exactly true. The microkernel is Mach, and BSD runs on top of that.
  • I've been using Linux since kernel version 0.99pl14...

    I love Linux, and yes Linux is a nightmare to install. Many things have improved, that's certainly true, but: I recently added an IDE CD-burner to my PC.

    I needed to: recompile the kernel, set up SCSI emulation, and find burner software... I spent half a day on this. I want Linux to improve - in most cases it's much better than Windows already. I think pretending that it is in all cases is counter-productive.

  • Three words: Magic Sysrq Key

  • That's how much space is on my harddrive after installing Windows 2000 Professional, Office 2000, Visual Studio 6.0, about a half dozen full installs of games off CD, etc.

    Disk space is cheap.
  • "Here is a few of my favorite programs that didn't come with my distribution", the man says. And the first two links don't work with the third pointing to a moved page. Tchk.
  • I recommend you go back to school and figure out the difference between the words "install" and "use".

    The "average" person cannot "install Linux". The average person cannot "install Windows". They can't install BeOS or Solaris, which are commercial products in a similar position to Linux. They can't install their DSS dish without the guy from the store doing the alignment. They can't install a toilet without asking a plumber to do it.

    This has nothing to do with whether they can use the product.

  • I find it hard to believe they wrote something as ludicrous as "2x as fast as NT". This paper is not going to help Linux at all. Let's see some reasonable arguments!

    Some tests I ran (network intensive reading and writing of files) did run more than twice as fast on Linux as on NT on identical hardware.

    Some other tests (compute intensive) ran as much as 10% slower, possibly due to poorer optimization of the gcc compiler.

  • If you ask this, then you should take some time to do some research on exactly what open vs closed source is, what Linux is, who makes it possible, etc.
  • I fail to see the difference between DirectX and OpenGL. If your card supports something that is not in DirectX, you are in exactly the same position as with OpenGL.

    Perhaps you may want to rephrase your argument as "DirectX supports a more powerful graphics API", which would make sense.

    Neither DirectX nor OpenGL nor X nor BeOS supports graphics features that have not been invented yet!

  • I've never, ever had to have multiple glibc versions running on a machine. Also, you only need the
    base libraries for GNOME and KDE to actually run the apps - most of the stuff is only useful if
    you're using the full desktop.
    >>>>>>>
    Are you running Netscape? On a glibc 2.1.3 machine, you need compatibility libraries. As for the libraries, Qt is 2.5 megs, kdesupport is 3.5 megs, kdelibs is 5 megs. That's 11 megs. Assuming the RPM format uses compression, you're talking around 15 megs of libraries once installed.

    Not in my experience. Obviously, this kind of article is just going to lead to a flame fest all around,
    but I've run RH 6.1 on a 25 Mhz 486 with 16 megs of RAM - WITH X+KDE. And it wasn't noticalbly
    slower than the Win 3.1 it replaced. Try running NT4 on something like that - it ran poorly enough
    on my 350 Mhz P-II with 64 megs.
    >>>>>>>>>>
    Obviously our experiences are different. However, Win3.1 was really bad in terms of performance, especially due to the real-mode filesystem. (NT's filesystem is about 5 TIMES faster.) Also, your comparison is uneven: KDE 1.2 + X lacks a LOT of the features of NT4. My Slackware system runs GNOME 1.2, KDE 2.0b3, X 4.0.1, and kernel 2.4-pre5. Then, load up the latest build of Mozilla, and you've got a system that is comparable to an NT machine, in that it can run all the available software for the platform, it has an object model and most of the features of NT's DE, and a browser (IE is always in memory if you've got Active Desktop on). My machine is very trimmed: 200 megs before X and GNOME and KDE. (BTW, my system partition for NT4 is only 500MB, and it is presently only half filled. Apps are a different matter.) That config takes up a good DEAL more RAM than NT. A lighter config, GNOME+Mozilla+X+kernel, still takes up more RAM than NT4. Linux + X + KDE 1.2 also takes up (though slightly) more than NT4. At that point, it isn't even a fair comparison, because NT's DE has so many more features than KDE 1.2.

    What the HELL are you talking about? DirectX is a development API. Vendors can also write
    drivers which allow DirectX to use the full abilities of their hardware - just as is done with every
    other graphics API, such as OpenGL or Glide, both of which run on Linux fine. My games run much
    faster when I use Glide than DirectX - if DirectX somehow magically makes hardware faster, how do
    you explain that?
    >>>>>>>>>
    You miss the point entirely. I said that DirectX apps take much fuller advantage of the hardware than Linux apps. Secondly, it is NOT possible to write OpenGL apps that automatically take full advantage of the hardware. Let me explain. Say DirectX supports rotating bitmaps, scaling them, and blurring them, and you've got a piece of hardware that supports the first two but not the third. The developer simply writes a driver that exposes those two features and leaves the third to DirectX. Thus a DirectX application can use all three features, though the third will be slow on that particular card. However, when the user upgrades to a card that supports all three, DirectX will automatically use the hardware version. All this happens transparently to the application; it will just notice that these operations perform faster. Now, my point is that hardware with DirectX support tends to have a lot of these features in hardware. However, most Linux APIs don't have nearly as many features as DirectX, so you sometimes end up with situations where there is support for a feature in the hardware, but not in the API. For example, on your Linux machine, your sound card's 3D sound hardware is going totally unutilized. By supporting a very broad range of features, and emulating those that aren't supported by hardware, DirectX makes sure that developers use those features, and when the user upgrades their card, apps automatically use the new acceleration features. As for OpenGL vs. DirectX, it isn't. DirectX is a whole lot more than just 3D; it is more like DirectX vs. OpenGL + ALSA + X (for overlays and input) + SVGAlib. (BTW, the second combination doesn't come close to competing with the first.) If you're talking about D3D vs. OpenGL, read my article on OSOpinion called "Is OpenGL In Trouble". In short, the method that D3D uses to support features is far superior to the method used by OpenGL. Think of it this way. Say I make a graphics card.
    It supports vertex tweening. Now, this feature isn't part of OpenGL, so I write an extension to OpenGL called MY_vertex_tweening_extension. What happens here is that apps can use the vertex tweening features of my card even though it isn't part of OpenGL. However, there is a problem: the extension is proprietary, meaning my vertex tweening extension isn't compatible with ATI's vertex tweening extension. Thus, a developer has to write code for both (often several) cases. Now, the ARB (the people who control OpenGL) has the option to make something a standard extension. Thus, there can be a standard ARB_vertex_tweening extension. That way, I can just write code for that extension, and all hardware that supports it will automatically accelerate it. However, OpenGL moves very slowly. The core API doesn't really change much, and extensions take a long time to come out. (For example, multi-texturing came out as an extension a lot later than it did as a feature of Direct3D.) Thus, OpenGL tends to have a lot fewer standard rendering features than Direct3D. However, extra features don't really take that much code to add. What DirectX does is support a very wide range of features. (BTW, it gets the list of features to put in by asking graphics card makers what they're going to put into their new cards, and asking software developers what features they want to use.) Thus, vertex tweening is already a part of Direct3D, and any app that uses it will automatically be accelerated on any hardware that supports it. Because extensions to OpenGL take so long to get standardized, developers (except huge names like id, and I doubt even they like coding for each extension) often just choose not to support those features, or write their own software version. Worse, a lot of developers may just code for the cards that exist now, and future cards that support that feature will be left out.
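The driver/emulation split described above can be sketched in miniature. To be clear, this is a toy model, not real DirectX code: the feature names and the `Driver`/`API` classes are invented purely for illustration (real DirectX uses caps bits and HAL/HEL layers).

```python
# Toy model of capability-based dispatch with software fallback.
# (Invented names; not an actual DirectX interface.)

class Driver:
    """A driver exposes only the features its card accelerates."""
    def __init__(self, accelerated_features):
        self.accelerated = set(accelerated_features)

class API:
    """The API promises a fixed feature set and emulates the rest."""
    FEATURES = {"rotate", "scale", "blur"}

    def __init__(self, driver):
        self.driver = driver

    def run(self, feature):
        if feature not in self.FEATURES:
            raise ValueError("unknown feature: " + feature)
        if feature in self.driver.accelerated:
            return "hardware"   # fast path the driver exposed
        return "software"       # transparent emulation in the API

# Same application code, different cards:
old_card = API(Driver({"rotate", "scale"}))
new_card = API(Driver({"rotate", "scale", "blur"}))
print(old_card.run("blur"))  # software
print(new_card.run("blur"))  # hardware
```

The application never checks the card: upgrading it silently moves "blur" from the slow path to the fast path, which is the transparency argument being made above.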
  • What the average person calls "Windows NT" is NOT "Posix Compliant". The so-called "Posix subsystem" is what is "Posix compliant". Among the many other fun facts about this subsystem: it cannot read or even see files that were written by normal NT programs!

    Running Linux under VMware on NT would be equivalent to this so-called "Posix compliance".

    The real shame is that Microsoft probably would not be exposed to the wrath of the CS community if they had done even rudimentary POSIX compliance correctly in NT. All they needed to do was: support raw-byte file names (i.e. case-sensitive file names), support raw-byte files (i.e. get rid of "text mode" and ^M^J newlines), use forward slashes (they do already, but fix the documentation!), add some hack so you don't need colons to name objects (like having /A/foo or //disk/A/foo mean the same as A:foo), make all of their NT "objects and services" accessible as named files so that at least access() works on them, support symbolic links, and give all processes a working stdin/out. All of this would have been trivial to do, and if they had done it I believe Linux would be nowhere today, since they would have produced a friendly programming environment rather than the horror they did.
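The drive-letter hack suggested above (/A/foo meaning A:foo) is simple enough to sketch. This is a hypothetical mapping written for illustration, not anything NT actually implements:

```python
import re

def posixify(path):
    """Map DOS-style paths onto a /DRIVE/... namespace (toy version).
    'A:foo' -> '/A/foo'; backslashes become forward slashes;
    paths without a drive letter pass through unchanged."""
    m = re.match(r"^([A-Za-z]):(.*)$", path)
    if not m:
        return path
    drive, rest = m.group(1).upper(), m.group(2).replace("\\", "/")
    return "/" + drive + "/" + rest.lstrip("/")

print(posixify("A:foo"))       # /A/foo
print(posixify("c:\\tmp\\x"))  # /C/tmp/x
print(posixify("/usr/bin"))    # /usr/bin (unchanged)
```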

  • Gnome being a serious contender to Windows is not a myth; it's a projected potential. As a user of both, I admit Windows is somewhat easier to use (both have advantages), but I have faith in Gnome gaining serious ground over the next year or two. Keep in mind, the look and basic feel of the Win9x GUI has changed very little since the Chicago days, '94 and before. What will become of Gnome now that IBM and the other big boys are behind it? And what about KDE (which I like better than Gnome in many ways)? How much influence will they have? Will the corporate influence cause Gnome to become unnecessarily bloated and "dummyfied"? And yes, there are very positive things that could happen with the big companies involved too. Lots of talent. The $64,000 question IS: are we chasing off a 400-pound gorilla (M$) with a 500-pound gorilla?
  • I know what Linux is, I just want to know why a driver for such a popular chipset could be so unfinished? I'm willing to bet that BeOS has a highly refined driver for this chip, considering that many people have a hobby system with this chip, and are trying out Be.
  • I would have preferred that some of the claims be backed up. e.g., "As you may know, TCP/IP (the Internet protocol family) is the best networking protocol, and is native to Unix.". Huh? Who decided that it was the best protocol? Am I to presume that it is the best protocol for every network? Likewise, I must not have heard that "Generally, Unix-like systems have a reputation of being insecure."

    Seems to me like FUD fighting FUD - if the facts are on your side, why not publish them?

  • by HomerJ ( 11142 ) on Saturday August 19, 2000 @09:33AM (#842833)
    anti-linux = FUD ?

    pro-linux = dispelling myths ?
  • I skimmed most of it, and it seemed pretty good, but there was a lot of opinion, and a few misleading parts. The one that really stuck out was the part about "Linux only runs Linux executables". They then went on to list a slew of emulators that run on Linux. This is misleading at the least. Linux does only run Linux binaries; everything else is run by an emulator. Windows only runs Windows (and DOS) binaries. You can run emulators to run most everything else, but that doesn't mean Windows runs those. In quite a few places the page seems almost like anti-FUD FUD, which is not good. Everyone knows Linux has some shortcomings, and to be taken seriously as telling the "truth", these should be pointed out as well.
  • Likewise, Windows is ready for the desktop when it's set up for you (like at the OEM). But I doubt many of the idiots out there would brave doing a Windows installation. Some will, but many won't dare.
  • I work at an application service provider that's whored itself to Sun/M$/Compaq. I've asked one of our senior R&D engineers why we haven't productized a series of services based on Linux, and he gave me his one-line answer: there is no demand for it among our customer base.

    We cater to mostly Fortune 500 companies and it seems they're not interested in using Linux. If that's the case, it doesn't matter that Linux is stable, low-cost, ghostable, yadda yadda yadda. He said unless customers start demanding it like barbarians at the gate, the company won't invest money and research in creating a secure, scalable managed platform. (Linux is still used mostly as infrastructure servers in these companies, right?)

    And just for the record he's a Winbloze engineer, but he's all for anything that'll bring in profits. Even if that means introducing Linux.

  • If all he finds are non-english speaking people. :-)
    I don't mean to nit-pick at other people's choice of words, but I couldn't resist this time. :-)
    I'm sure, despite the volunteer's superb mastery of the tongue-click language, it's gonna be a bit hard for him to translate the page if he can't understand the language it's already in... and I imagine it'd be a bit hard for him to communicate with the benevolent non-English-speaking people who wish to bestow their translatory (I know it's not a word, so what? ;-)) powers upon the document, no?

    Okay, okay, I'm being dumb; I just felt a compelling urge to submit an irreverent and vain attempt to be funny. So what if I'm not actually funny? I'll be good from now on... :-)
  • FUD is when you spread misrepresentations to cast doubt on a competing product. In general I don't think this is widely done by the Linux community.

    That's a joke, right? Slashdot is the home of anti-Microsoft FUD. Not that MS is a perfect company, or Windows is a perfect product, but if you listened to many Slashdotters, you'd think that it was impossible to get ANY work done because of the constant crashes, never mind that 50-100 million people use it every day.


    --

  • now that's insightful.
    where on earth did you get that slogan? [redhatisnotlinux.org]
    bring something new to the discussion, please. It was getting interesting before you stepped in.
  • What's the last operating system YOU developed?? :P
  • Hmm...someone's getting a little over zealous with posting messages :P
  • Knowing that /. is a Linux haven, I still disagree that some of these items are myths. Particularly in the "Installation" portion of the Linux myth dispeller.

    I contend that the Linux experts have forgotten how difficult Linux can be when you're new to it. In the case I present below, this was my very first fresh installation of a UNIX/Linux system, although I'm familiar with UNIX and have been a system admin for several years now (small network).

    I have set up a single Linux machine as my firewall/masquerade and mail server, and I can say the following:

    1. While installing Linux as a workstation may be possible for the average computer user (one who could install Windows), I do not believe it to be easy. It would certainly take more than the hour or so mentioned in the myth dispeller. And unless you have significant experience as a UNIX developer and/or system admin, I am absolutely sure that you will not be able to install Linux as a server in an hour or two.
    2. I had more difficulty installing and configuring the utilities than I did installing and configuring the OS. But without the utilities, the system was pretty much useless to me. By "utilities", I mean ipchains, qpopper (a POP3 server), mail, and ftpd. And, of course, once I got into this stuff I had to do kernel recompiles.
    3. Without the linuxconf utility (provided with Red Hat 6.2) I would have been completely lost. Even so, linuxconf contained an error (or a confusion) concerning the configuration of multiple NE2000-compatible network cards. Fortunately, I found on-line help (which I believe to be one of the TRUE advantages of Linux).
    4. Even Jerry Pournelle, computer god extraordinaire, had to call an expert (see this link [byte.com]). And even he admits that it's easier to configure WinNT as a server than Linux (see this link [byte.com]).
    5. The first thing the Red Hat installation program did was ask me for a driver disk. Huh? I had no idea what it wanted. After some experimentation, I realized I could just hit cancel and continue with the installation.
    To summarize, I disagree with a good portion of the installation myths. Now that I've done it a few times, sure, I can install Linux in an hour or so with only one kernel recompile. But that first time was tough, and the second wasn't super-easy either. And while I may not be a kernel programmer or a "guru" who knows the source inside and out, I am a UNIX systems administrator and I have been a C/Assembler developer for over 10 years.

    Here are my counters to the myths:

    1. Linux is difficult to install if you have not done it before, especially if you wish to use it as anything other than a workstation. It does, however, come with some utilities that make it easier to install than other UNIX flavors.
    2. If you are installing a Linux system for the first time, make sure you have access to a UNIX/Linux expert. It will save you a lot of time and frustration.
    3. Even though Linux is open-source, there is a higher likelihood of errors (or confusions) in the installation/configuration utilities (I ran into some, as I mentioned above). The reason for this is that the open source community doesn't put as much money and effort into testing (or into ease of use) as is done by a company like Microsoft or Apple. Also while the open source community may be a massive code review resource, I don't see the installation/configuration utilities getting as much scrutiny as they would at a large company.
    4. Installation/configuration program errors (or confusing instruction) have a smaller chance of being fixed than similar errors would on Windows/MacOS, because:
      1. Someone has to notice the errors, find out who to report them to, and actually report them.
      2. Some programmer, somewhere, has to decide that he/she wants to fix this in his/her spare time. And we all know how exciting it is to work on installation/configuration programs.
  • While Linux does come with games, some office-related software, however, those do leave something to be desired, but no more than Mac System or Windows. Because Linux is really a full Unix, it comes with everything you'd seen in a standard Unix build, too.

    The gaming options on Windows leave something to be desired when compared to Linux? Ummmmm..... FUD? I love linux, and use it daily, but lies like this aren't very likely to help growth.
  • Yes, there are some things which could be improved, but these are minor qualms.

    The first thing that struck me was whom it was intended for. My guess is that the actual readership would probably break up as follows:

    95% - linux enthusiasts & slashdot readers
    2 % - windows supporters
    2 % - people who are handed this doc by someone they know who's a linux enthusiast, and who actually read it.
    1 % - others (PHBs, ordinary users, curious onlookers).

    I can assure you no PHB is going to read a document which devotes an entire page to describing the doc, then goes on into copyright, structural layout of the document, etc. They lose interest after 3 lines (not kidding). What you would need is an executive summary at the top, followed by bullet points describing each myth and dispelling it, ordered with the most popular myths first.

    If the audience is to be your next-door neighbor or friend who hasn't tried Linux and has heard these myths, I can just imagine their nonplussed response. You see, people with a casual interest in something are not particularly interested in going through a minute analysis of propaganda battles. Imagine how interested you'd be if your bank handed you a brochure with a 20-page feature-by-feature comparison with its rival.

    Newsflash - everyday users are as interested in detailed FUD analysis of OSes as you are in the FUD analysis of banks, tax strategies, hotels, etc. You just want to use it without thinking about it too much. This document is preaching to the choir.

    If people shared our passion for debating windows vs. linux, they would already know all this stuff inside out. The whole thing is -1, redundant.

    w/m
  • The document appears to be licensed under the open content licence [opencontent.org]. If sufficient FUD can be generated about open source, someone can point to that licence as a reason not to read the document. Perhaps the creators can also release it with a proprietary licence.

    :)

  • by peterjm ( 1865 ) on Saturday August 19, 2000 @11:01AM (#842849)
    No.
    This is the second comment I've responded to about this. The question isn't whether or not some distro is ready to have Joe Schmoe install it... hell, my dad is on his third Windows box and he's never installed it! The question is whether or not Linux is ready for the desktop. Period.
    I'm talking about getting a box pre-installed! My dad wouldn't know how to get any of those things working! I bought him a new video card and a flight sim for Christmas, then I had to go and install everything because he didn't know how to do it.
    But windows is ready for the desktop, right? Why did I have to do that? Because, being "ready for the desktop" and being "ready for j. average user to add random hardware" are two different things.

    My comment boiled down to this:
    Everything is difficult when you're new. But if you don't have preconceived notions of what things should be, then you can get over the difficulty very quickly.
    end of story.

    -Peter
  • "You have just repeated *another* myth: Linux does not have good hardware support. Well I just installed on Red Hat 6.2 on my pretty recent system, and it configured everything automatically."

    Well, you've proved (in your case) that Red Hat has good hardware support, not Linux.

    Oh, and I have used Linux before as well. Sorry to burst your bubble :)

    --
  • There are many comments about Motif in this section that are just plain wrong, given the recent release of Open Motif!

    It would be very nice to see these comments corrected before this document is released. Please see www.motifzone.com
  • And Photoshop or Lightwave on an Altivec-equipped Mac will blow away any other computer running any other OS. :p
  • by tilly ( 7530 )
    As Andrew Schulman [sonic.net] conclusively proved in Unauthorized Windows 95 [informika.ru], the 32-bit system had to constantly make calls to the 16-bit system, and therefore a single rogue 16-bit application could very easily hold hostage both the OS and all 32-bit applications.

    Microsoft's lies to the contrary notwithstanding.

    Cheers,
    Ben
  • In fact, IP masq / firewalling / etc. can be more efficient under Mac OS (classic, not X!) than Linux. Using IPNetRouter [sustworks.com], you can route at near wire speeds on a 100Mbps network. From http://www.sustworks.com/site/prod_ipr_compare.html [sustworks.com]:

    The key to OT performance is to respond to a network interrupt and pass pointers to a STREAMS message up the stack and back down again in less time than it takes to send a 1500 byte Ethernet packet. Since routing occurs at interrupt time, it is not affected by other applications. ... And, its always nice to see the look on people's faces when you explain to them that your Mac is saving the planet by running rings around NT or Cisco routers and is way easier to configure than a linux machine!
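The "wire speed" budget in that quote is easy to put a number on: at 100 Mbps, a full 1500-byte Ethernet frame takes about 120 microseconds on the wire, so the router has roughly that long per packet. A quick back-of-the-envelope sketch:

```python
def frame_time_us(frame_bytes, link_bps):
    """Time for one frame to cross the link, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# 1500-byte frame on 100 Mbps Ethernet: roughly 120 microseconds.
print(frame_time_us(1500, 100e6))
```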

  • From www.eruditum.org/index.html:
    we don't do windows

    lynx --head http://www.eruditum.org | grep Server
    Server: Microsoft-IIS/4.0


    Liar.
  • I see a lot of complaints about the Linux myth dispeller. Remember, that's not really going to do a lot of good unless you let the author know. He is the one with the power to change the thing, not Slashdot.
  • Here's one that I'm getting rather worried about:

    Red Hat = Linux

    I'm not saying this as a troll; I'm saying this because Red Hat honestly is perceived as equalling Linux, and I don't see Red Hat doing anything to stop it (I'm not saying they're TRYING to steal Linux either. They are just conveniently remaining quiet about this).

    Please consider adding this to the myth dispeller document. It may end up being more important than many people think.
  • by VAXman ( 96870 ) on Saturday August 19, 2000 @10:15AM (#842869)
    Since about 1997, Linux has become extremely, extremely mainstream, and has had write-ups in all the major mass-cultural media outlets, including magazines sold on news stands in grocery stores, major metropolitan newspapers, and prime-time news programming on television. Every second of press it gets is overwhelmingly positive. Every single major computer corporation, including IBM, Intel, Sun, HP, and Compaq, has embraced Linux and sells primarily Linux products. Every PHB stepped on the bandwagon years and years ago, and practically every Windows and Unix installation has been ripped out and replaced by Linux. Linux IPOs have been extremely successful, and the market caps of Linux companies, as well as Linux entrepreneurs, rival those of any major multinational corporation. Practically every computer user is using Linux, including grandmothers, blue-collar workers, and third-world residents. Linux is THE buzzword of the last few years, and by far the trendiest and most fashionable new thing for the masses to latch on to.

    Yet, people think it's still oppressed, and feel the need to defend it further. What are you defending against? Linux has no detractors, except perhaps competition such as Microsoft, but only just, and that represents a tiny portion of the marketplace.

    The article reads like it was written in 1995. Windows only cooperatively multitasks? WTF? This was trendy to discuss in 1994, but since the release of Windows 95 (not to mention NT, and 2000!) the issue has long since been resolved. The author needs to pull his head out of the sand, fly back to earth, and check out everything that has happened in the last five years, which he has missed.
  • by Raunchola ( 129755 ) on Saturday August 19, 2000 @10:15AM (#842870)
    You're basing your claim that "Linux is ready for the desktop!" on the experience of one person? I'm sorry, but you're jumping to conclusions. If you wanted it to be a bit more effective, why not let your friend install Red Hat on there himself? Why not see if the average person can get past a Linux installation?

    The question isn't whether people are willing to learn. Yes, people can learn how to use Linux if they sit down and learn it. Now how about getting devices like your sound card, your scanner, your printer, your modem, or your digital camera to work efficiently under Linux? Now we hit the snag. Some people may find Linux easy to use, but what about device support? Yes, there is support for such devices under Linux, but it's still not as good as the device support you get under Windows. Not everything will work under Windows, but I'm willing to bet there's a lot more that won't work under Linux.

    Sure, Linux is supposedly easy to use for the average user to toy with. But it's still behind Windows when it comes to desktop readiness. Before you spout your FUD at me, is an OS that can be easily used (after learning it) yet doesn't have device support for everyday components ready for the desktop? I don't think so.

    --
  • by be-fan ( 61476 ) on Saturday August 19, 2000 @10:15AM (#842872)
    To be fair, the site is fairly objective. However, I have to bop them on a few points.

    1) Linux may not be a nightmare to install, but it is still a nightmare to configure. The main problem is not so much that configuration is very text oriented, but that it is not consistent. Some stuff is configured through user-space programs (hdparm and ifconfig), other stuff through text files, some through scripts (the old rc.modules style), and still other things via files like modules.conf. Often there is little feedback if you do something wrong. I still don't know what I'm doing wrong configuring ALSA.

    2) Linux multi-tasking.
    The site implies that Windows uses cooperative multi-tasking. That is simply not true. Windows 95 and Windows NT both use preemptive multi-tasking and in fact multi-task SMOOTHER than Linux. It is not so much a performance thing as a "feel" thing. The default quantum in NT is around 20 milliseconds on workstation, 50-100 on server. The default quantum on Linux is 50 milliseconds (newly lowered in kernel 2.4). So on Linux, each app gets a longer time slice. While this may be more efficient, it degrades interactive performance (i.e. the "feel" of the system).

    3) Linux IS too huge. In order to get the same experience as one does with Windows, you have to use KDE or GNOME. Otherwise you're comparing a product with more features against one with fewer features. Also, if you don't use GNOME or KDE, some of the other "FUD" becomes true. To get a Linux system comparable to a Windows NT system, you have to have GNOME + KDE (both, so you have full compatibility) + Mozilla + X + kernel. Not to mention the multiple versions of glibc and all the additional (often redundant) libraries the apps use. In terms of memory usage, Linux blows NT4 out of the water (a bad thing) and is quite close to Windows 2000's bloat.

    4) Linux IS playing catch-up. Most new kernel features (journaling FS, new automounter, LVM, etc.) have all been implemented on previous operating systems. Not to mention how much KDE and GNOME are playing catch-up.

    5) Other OS kernels do NOT load everything at the same time.
    I don't know where they got this. Most of Windows is built out of DLLs, which can be dynamically unloaded. Most Unixes had modular kernels long before Linux. Microkernel-based systems like BeOS can turn off entire subsystems if they are not needed.

    6) Linux DOESN'T take full advantage of hardware.
    Linux doesn't support DirectX, and thus automatically lacks support for a lot of hardware features that are in DirectX-compliant hardware. The main reason is that transparent usage of hardware was a major design consideration for DirectX. It is built on the concept of supporting many different hardware features, having all applications use them, and then emulating those not supported by hardware. When the hardware supports new features, all apps and the OS automatically take advantage of them. Also, X doesn't have as complete support for many graphics operations as is possible in DirectX.

    7) Linux threads aren't all they're cracked up to be. I have seen tests showing that NT's threads not only take less time to create, but switch significantly quicker. Also, the site makes excuses for Linux's lack of threaded applications.
    FACT: Multi-threaded apps are better. They may have slightly more overhead and are more complex to write, but it really pays off for those with SMP machines. It also pays off on today's systems because of the increasing number of processors in them: not only SMP CPUs, but the specialized chips systems use. Graphics cards can do operations independent of the CPU, so for most graphics apps it makes a lot of sense to have an independent display thread. Thus the main thread can do things while the graphics card is busy working. Same thing for 3D audio: instead of blocking the CPU waiting for the sound card to finish working, spawn another thread and have them process together. The trend is moving towards PCs with more and more independent chips, and there is no excuse for writing single-threaded applications.
    FACT: Threading on NT doesn't use cooperative multi-tasking. Where did they get that? Threads are preemptively scheduled, just like applications.
    FACT: Linux doesn't use threads nearly as often as it should. By having the kernel and libraries heavily threaded, and with fine-grained locking, performance really improves.
    However, BeOS hopelessly outclasses both in the threads department. The same tests that showed NT threads switch quicker also showed that BeOS threads switch 10x quicker (that is due to the different model BeOS uses for threads; I can't find the article at the moment, but I'll post it when I do). Also, the kernel, servers, kits, and apps are heavily multi-threaded, and the API encourages apps to be multi-threaded. If you've used BeOS on SMP machines, you know how important multi-threading is.

    8) Linux really isn't that fast, depending on what you do. For server tasks it is undoubtedly a speed demon, but for desktop tasks my NT4 machine (not to mention my BeOS machines) FEELS faster. Screens have less visible redraw, and apps switch quickly from one to the other. Not to mention that anything media-oriented does much better on Windows than on Linux. (This is partially due to the APIs: X is really not great for fast display, OSS isn't really great for complex sound, the X input system can't compare to DirectInput, there really aren't many MIDI APIs to speak of, at least none comparable to DirectMusic, and, as of now, 3D is STILL slower than on Windows.)

    9) The Linux desktop IS clunky. It's very attractive, but the Linux guys need to steal some ideas from the Mac instead of Windows.
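For what it's worth, the quantum argument in point 2 above is just round-robin arithmetic: a freshly ready task can wait up to one quantum for each other runnable task ahead of it. The sketch below uses the quanta quoted in that point; the count of five runnable tasks is an invented example:

```python
def worst_case_wait_ms(quantum_ms, runnable_tasks):
    """Worst-case dispatch latency under simple round-robin:
    every other runnable task runs one full quantum first."""
    return quantum_ms * (runnable_tasks - 1)

print(worst_case_wait_ms(20, 5))  # 20 ms quantum, 5 tasks: 80 ms
print(worst_case_wait_ms(50, 5))  # 50 ms quantum, 5 tasks: 200 ms
```

That factor of 2.5 in worst-case latency is one plausible mechanism behind the "feel" difference claimed above, though real schedulers use priority boosts that complicate the picture.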
  • by Tim ( 686 ) <timr AT alumni DOT washington DOT edu> on Saturday August 19, 2000 @11:28AM (#842873) Homepage
    "FACT: Linux doesn't use threads nearly as often as it should. By having the kernel and libraries heavily threaded, and with fine-grained locking, performance really improves."

    Not so fast...

    There is a ton of solid evidence that fine-grained locking can kill performance, not to mention making code more complicated, harder to write for, and harder to maintain.

    Granted, threading *can* improve performance, but the impact tends to differ depending on use and the type of machine you're running on. The kernel crew has taken a moderate stance on threading, not wishing to hurt low- and mid-end hardware performance to accommodate slightly higher performance on high-end hardware. IMO, this moderation is a good thing, since it keeps the kernel and associated libraries maintainable over the long run, and it allows Linux to run on things like embedded systems, as well as mainframes, with a minimum of performance penalties on any given platform type.

    BTW, Larry McVoy (one of Linus' right-hand-men) gave a great talk at the CLIQ in Denver about this very issue. here are the slides [bitmover.com] from that talk.
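The granularity trade-off Tim describes can be illustrated with a toy sharded counter, where each bucket has its own lock instead of one global lock. All names here are invented for illustration; the point is the extra locks you now have to acquire, order, and maintain, not a benchmark:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

NBUCKETS = 8

class ShardedCounter:
    """Fine-grained locking: one lock per bucket, so writers to
    different buckets never contend with each other."""
    def __init__(self):
        self.counts = [0] * NBUCKETS
        self.locks = [threading.Lock() for _ in range(NBUCKETS)]

    def incr(self, key):
        b = hash(key) % NBUCKETS
        with self.locks[b]:
            self.counts[b] += 1

    def total(self):
        # The cost of fine granularity: a consistent snapshot needs
        # every lock, taken in a fixed order to avoid deadlock.
        for lock in self.locks:
            lock.acquire()
        try:
            return sum(self.counts)
        finally:
            for lock in self.locks:
                lock.release()

counter = ShardedCounter()
with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(1000):
        pool.submit(counter.incr, i)
print(counter.total())  # 1000
```

Note how even this toy version needs a lock-ordering rule to stay deadlock-free, which is exactly the maintainability cost the comment above is weighing against the concurrency win.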
  • by Raunchola ( 129755 ) on Saturday August 19, 2000 @10:21AM (#842876)
    "There will be cases where Linux is not the answer. Be the first to recognize this and offer another solution." - Linux Advocacy HOWTO [linuxdoc.org]

    No matter what side it comes from, FUD is still FUD.

    --
  • Please stop the DirectX vs. OpenGL thing. It's Direct3D vs. OpenGL. If I get flamed every time I refer to the kernel as the OS, all the people mistaking DirectX for Direct3D should get flamed too. Back to the hardware features thing.

    Here is a difference between Direct3D and OpenGL. Say 5 new cards support a new feature (say cubic environment mapping). Now, the philosophy behind Direct3D is to get feedback from developers on what features they are putting in, and support as many of those as possible. Thus, cubic environment mapping is already supported in Direct3D. Since the card makers don't want to leave out Quake, they write 5 different extensions for cubic environment mapping. This is again due to the fact that OpenGL does everything by committee (and a slow one at that). So, in Direct3D's case, all cards that support a feature will accelerate any game that uses that feature. In OpenGL's case, many new features (new as in less than a year or two old) will go unused until a standard extension is made for them. It is not only a feature difference, but a difference in the way the API is built. And Direct3D does support features that haven't been IMPLEMENTED yet. This is another design goal of DirectX in general: MS puts features in the API that no cards support yet. Games can (usually) use those features because they will be emulated (it's actually slightly more complex than that, but I don't want to get into Direct3D programming). When card manufacturers DO implement these features in hardware, there will already be a number of games that take advantage of them. So, a card manufacturer can put in cubic environment mapping, a feature nobody else has, and games that use it will automatically be accelerated, even though no cards supported that feature when the game was written. This is another major difference between OpenGL and Direct3D: OpenGL doesn't get extensions for features that nobody has yet implemented. However, this is a very good thing, because if you spend your money on a card with a new feature, the games you already have will automatically use it.
  • I was talking about BeOS from a multi-threading point of view, not as an OS, but you asked for it.

    On my home PC, the thing just didn't cooperate at all. It didn't pick up the Network card, (RealTek
    809), there are no drivers for it, and it messed up the graphics quite badly. The mouse cursor looked
    like a multicoloured block. I have an NVIDIA TNT2.
    >>>>>>
    The graphics card shouldn't be a problem. I've got a TNT and it works fine. Your experience is a bit unusual, though. In my case I have a Riva TNT, an EtherFast 100TX and an EtherPCI II network card, and an AWE64 sound card. Everything was detected the first time around; I just had to put in the settings for the network, download BeNat from BeBits, and I was off. From install to network NAT server/desktop machine was about 20 minutes.

    And my sound card didn't work either. I guess
    that can be fixed, but I can't connect to the network to download the driver...beh.
    >>>>>>
    Again, your experience is a bit unusual. For any problems you do have, you can ask on the beusertalk mailing list.

    Anyway, when I tested the thing on my work PC, I didn't find it useful for the "tasks" that BeOS
    claims it should be good at, at all. Sure, I managed to play 10 MP3's at once, but contrary to popular
    belief, it DID slow the system down.
    >>>>>
    What kind of system do you have? I can play 10 MP3s without even hitting 100% on the processor; that's actually a little low, since 4.5 used to get up to 12 or 15 without pegging the processor. The trick is to not start them all at the same time, but one at a time. If you just highlight 10 MP3s and hit Open, SoundPlayer goes crazy trying to load them all. And watch this: start up 24 MP3s all at the same time. Under Windows or Linux, it's time to reboot. In BeOS, however, the system slows down but is still usable to the point where you can easily open up an application to close the MP3s. Right now, I'm running 10 MP3s, and I can still browse the 'net fine.

    When I tried to play a video, a 600mb MPG from a CD (which
    works under Windows), it didn't open it for some reason. It just refused to open large MPG files.
    >>>>>>
    Are they Sorenson encoded? Also, the built-in media player has trouble with MPEG files; look on BeBits for a new one that's much smoother. I have a CD with several large MPEGs (10-25 megs), and I can open 5 or 6 of them without hitting 100% on the processor.

    Another area where BeOS falls over is management. Sure it's got Telnet and SSH has been ported,
    but why the heck? I mean, you can't manage the thing remotely at all. The FTP servers and other
    servers I downloaded relied heavily on the GUI to operate. That's pretty useless.
    >>>>>>>
    The BeOS wasn't designed as a server OS. Still, I don't see what the management problem is. BeOS has Apache, and you can telnet in and go to /etc or /boot/home/config, where most of the settings files are in text format.

    Also, it doesn't seem to have a decent browser. NetPositive was fast, but couldn't do 80% of the pages
    on the Net properly. I downloaded Opera and when I when to a Java-enabled site, it crashed the
    system - yes, crashed it. The version was 3.6(I think). The system slowed down completely at first,
    btu I did manage to bring up a window and kill the process. However, even though the Opera
    processes were killed, the system was still too slow to use and I had to reboot it.
    >>>>>>>>
    Yes, NetPositive is weak. As for Opera, I think the version you used was 3.6-beta7. It's still a beta, and one that was abandoned. Still, I can use it fine without any problems, except Java sites, mainly because BeOS doesn't yet have a Java VM! I wouldn't be surprised if they didn't bother to protect against a crash when the user loads a Java site and the OS has no VM, since this was a beta version. However, Be is working on bringing Java2 and Opera 4.0 to BeOS. Promises? Maybe, just like widespread OpenGL on Linux?

    It seems like the best thing about BeOS is the GNU bash, and we all know that's from the FSF and
    can be found on many other OS's. BeOS fans like be-fan over here talk about great things, but deliver
    very little. Overall, and I'm not trying to put Be down - Linux or FreeBSD are better choices for the
    desktop.
    >>>>>>
    I have a triple-boot of Slackware 7.1, BeOS 5, and Windows NT 4. I use BeOS maybe 75% of the time. I reboot into Windows to use Visual Studio, program DirectX, and use my 3D and imaging applications. I reboot into Linux to fiddle around trying to install ALSA, recompile the kernel, and compile the latest build of KDE2 (since it is only available as RPMs and source). Right now, I'm waiting for a new version of kernel 2.4-test because 2.4-test6 seems to have broken the NVIDIA driver. Oh yeah, this is a GREAT desktop OS.

    I'm not just saying that as a user who's only used Windows, Linux and FreeBSD, but as one
    who's used BeOS as well.
    >>>>>>>>
    Well, that's your opinion. From my POV, I think Windows NT 4.0 is probably a better desktop OS than Linux, and it is certainly the best PC OS available if you're doing graphics. BeOS is a much nicer overall OS to use than either, and it probably also appeals to a broader range of people. If you're a hardcore CLI user, BeOS has bash, POSIX, Perl, Python, Apache, and all those wonderful text-mode programs you've come to know and loathe (not to mention application scripting through hey). If you're a Mac-type GUI user, BeOS is super-easy, and everything can be done from the GUI. If you're a Linux/X user, you'll be happy with how well the GUI and the CLI mesh together.

    BeOS has potential, but Be needs to sort out the instability, the lack of applications (including a good browser; I know you can get Mozilla for it, but you need to compile it yourself unless you want to use M7), and their management issues, which, for me, are the biggest issues.
    >>>>>>>>>>>
    BeOS is far from unstable; I just don't see your situation that often. Sure, you had a bad experience with BeOS, and I'm sure a lot of people have had the same with Linux. For me, and a lot of other BeOS users, the OS is fast, stable, innovative, and works great. It just seems that the "average BeOS experience" is closer to mine than to yours.
  • Okay, I am happy to see so much traffic concerning this document, and the only thing I will say to anyone who slammed it or me is that it is in no way done. It is version 0.09, meaning that I tinkered with it a bunch, but in no way intend anyone to take any of it seriously yet. However, I do appreciate all the ideas, criticisms, comments, flames, and offers of translation help. I expect to release several new versions in the coming weeks, and will probably announce them here if I can. Hopefully over the next few weeks/months I can get the most heinous FUD weeded out and smoothed down to the point that some serious work can be done on it. Anyone who would like to see this document evolve into something truly useful, please drop me a line and send me your contributions.

    ***********************************************
    Jon Tillman
    LINUX USER: #141163
    ICQ: 4015362
    http://www.eruditum.org
    jon@eruditum.org
    ***********************************************
    Help Jon build a network!
    Looking for giveaway computers & parts
    Current Need: Tape Drive & PI/PII processors
    Email me to find out how you can help
    ***********************************************

  • by KnightStalker ( 1929 ) <map_sort_map@yahoo.com> on Saturday August 19, 2000 @08:50AM (#842888) Homepage
    This "myth dispeller" isn't going to do anyone any good if it remains as misleading as it is.

    For example, you can't just say "Multitasking under Windows 95 is partially preemptive." True, 16 bit apps run in a shared memory space and the GDI isn't fully reentrant, but a statement like that is just flamebait.

    Also, the statement "Hardware is often ignored by other operating systems. On the other hand, Linux takes advantage of all the hardware it can." is ridiculous.
    --
  • Preinstalled Linux isn't exactly the same thing, because Linux is harder to configure: text files versus GUI checkboxes most of the time.

    It may be marginally easier to enter, for example, your ISP's DNS server IP address into a Win9x dialog box than it is to place it into a text file in /etc, but the real barrier is that it takes a hundred times more work on the user's part than either of those to learn what the phrase "DNS server IP address" means.

    As Peter keeps saying and saying, the average home-user - Hell, the average office user too - seems to be incapable of configuring any OS, Linux, Windows 9x or you name it, with either text files or GUI checkboxes. Better documentation wouldn't help, because home users refuse to read computer documentation - in fact, half the time, they throw it all away the very first thing. Seriously. I know people who have bought new computers just because their Win9x systems have come down, as they so often seem to do, with bit-rot or registry leprosy or whatever you want to call it, so they don't boot right any more. Users like that never will learn how to do any configuration, GUI- or text-based.

    For example, how hard is it to set up a modem and a "Dial-up Networking" icon in Windows9x, GUI and all? If you are the computer nerd in a small business, then you already know the answer: way too damn hard for the average home user! That's the reason that it was important to the point of federal lawsuit for AOL to have their AOL icon pre-installed on Win9x boxes; the presumption being that if users had to run the deep and abstruse AOL "setup.exe" program off the CD instead of having the icon already present front and center when the user turns on his new PC, AOL would probably lose half of their potential customers.

    So if configuring anything on a PC is so difficult for home users, then how does it ever get done? For example, assuming that an ISP connection is not already preinstalled by the PC vendor, how do home users ever set one up? Well, either a.) some patient cubicle-slave at a help desk at AOL, EarthLink, etc., a human being talking over a 1930s-technology telephone, walks the home user, step by step, through the process of clicking all those checkboxes, or b.) some friend who has some notion what an IP address is comes over to the house and does it for the home user, or c.) the home user brings his box into work and has the office nerd do it all for him. And when the office nerd tries to explain what he's done and how to change all those little clickboxes or whatever in case the home user needs to switch to a different ISP or the ISP changes its DNS address, the home user turns away with his eyes glazed right over. I know; until last month I was that nerd. I've had guys turn glassy and spacey and dial out on their cell phones right in the middle of my explanation, which is quite offensively rude, I think. I can't tell you how much hardware I have installed thanks to my esoteric knowledge of hi-tek procedures such as putting the floppy disc in and double-clicking a:\setup.exe. If you can do this, and if on top of that you are 1337 enough to sniff around on driver CDs for a README.TXT, then these people refer to you as a "guru."

    ...but i don't think it would be a good idea for any newbie unless they have someone to help them climb the curve

    The point to the above being that at least eighty percent of users don't ever "climb the curve" at all, any more than they ever set the time-of-day clocks on their VCRs and coffee makers. For them, if you preinstall Linux, it is pretty much the same as if you preinstall Windows9x; except, of course, if they call Mindspring for help connecting or HP for help plugging in their new printer, and they say they're using Linux, then the tech support guy is likely to say "We don't support that OS.".

    Yours WDK - WKiernan@concentric.net

  • I dunno, this is a little sketchy if you ask me. I do like the information that was given, and I would certainly recommend it to anyone coming into the Linux community for the first time.

    However, there are a lot of quips in there that make the pages read like a "hey, fuck you M$" style website. This is destined to make many write it off as anti-FUD FUD, so I would urge readers to take this one with a grain of salt. All in all, however, a very interesting read.


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • by orabidoo ( 9806 ) on Saturday August 19, 2000 @10:53AM (#842901) Homepage
    FACT: Multi-threaded apps are better.

    you forgot to add: multi-threaded apps are better for some things.

    There are two ways of multi-threading an app: with threads that share everything (VM, open fds, cwd) but the stack, or with threads that share nothing but an area of shared memory. Under most OSes, the second is much slower than the first, which is why there's been this push towards multi-threading the first way. Both Solaris and NT have this problem, and when MS's interests coincide with Sun's, people just get the impression that that is "the way to go".

    OTOH, if you look at Linux (and FreeBSD, and even Plan 9), both kinds of threading work well, with very fast context switches. Under Linux, processes switch almost as fast as threads. So you can choose between the two ways of multi-threading, not by "which is faster", but by "which is more appropriate to the app".

    Programming with several execution contexts sharing a single VM space is tricky. You need very careful locking, to prevent one thread's data updates from stomping on another's. On SMP machines, you get accesses to the same physical memory from all the CPUs, which means more bus traffic needed to maintain cache coherence. Multi-threaded, shared-memory programming is only worth it if your app calls for it, i.e if your app has a lot of shared state that all the threads will be working on.
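    A minimal sketch of the careful locking this describes, using Python's threading module (any shared-everything thread API looks much the same): four threads update one shared counter, and the lock is what keeps their increments from stomping on each other.

```python
import threading

class SharedState:
    # Every thread sees this same object: one VM space, shared data.
    def __init__(self):
        self.count = 0
        self.lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            with self.lock:      # without this, updates can be lost
                self.count += 1

state = SharedState()
threads = [threading.Thread(target=state.increment, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state.count)  # 40000 -- no lost updates
```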

    Multi-threading with a separate VM space is considerably easier: each thread (or process -- processes are just one kind of thread) runs fully protected from interference, and you can always set up a shared memory zone for whatever shared data is needed. Each processor is mostly working on separate areas of memory, so bus contention is lower. The downside is that this kind of app tends to use more memory than the former.
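    The separate-VM-space flavor can be sketched with a plain fork plus an explicitly created shared memory zone. This is a Unix-only sketch (it relies on os.fork); on Linux the child is exactly a process, which as this comment notes is just another kind of thread.

```python
import mmap
import os
import struct

# An anonymous shared mapping: the one region parent and child both see.
shm = mmap.mmap(-1, 8)

pid = os.fork()
if pid == 0:
    # Child process: its own VM space, fully protected from the parent,
    # except for the shared zone it writes into here.
    shm.seek(0)
    shm.write(struct.pack("q", 12345))
    os._exit(0)

os.waitpid(pid, 0)
shm.seek(0)
value, = struct.unpack("q", shm.read(8))
print(value)  # 12345, written by the child into the shared zone
```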

    Finally, single-threaded, event-driven programming should not be counted out. It turns out to be the most appropriate for a surprisingly large number of problems. In some cases, you're better off running several copies of a single-threaded server (say, one per CPU) than a multi-threaded one.
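    The single-threaded, event-driven style can be sketched with Python's selectors module: one thread of control services however many connections are ready, with no locking at all.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def serve_ready(sel):
    # One turn of the event loop: handle every socket with data waiting.
    for key, _ in sel.select(timeout=1.0):
        conn = key.fileobj
        data = conn.recv(1024)
        if data:
            conn.sendall(data.upper())   # the "work" for this client

# Two fake clients, both served by the same single thread.
srv_a, cli_a = socket.socketpair()
srv_b, cli_b = socket.socketpair()
for s in (srv_a, srv_b):
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ)

cli_a.sendall(b"hello")
cli_b.sendall(b"world")
serve_ready(sel)
reply_a = cli_a.recv(1024)
reply_b = cli_b.recv(1024)
print(reply_a, reply_b)  # b'HELLO' b'WORLD'
```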

    IMNSHO, the worst thing that has come out of both Java and NT is the unthinking assumption that all programs should be multi-threaded, with a shared memory space, and that all servers should necessarily use one thread per connection. Yes, there are many cases where this is good, but there are also many cases where other solutions are just as good in performance, and much simpler and more maintainable in programming. Just say no to Sun's and MS's thread hype!

  • by Raunchola ( 129755 ) on Saturday August 19, 2000 @12:17PM (#842906)
    "What you get from using Linux, or other free operating systems is freedom. Freedom from corporate decisions. Decisions that are made from a view that people are consumers and should be tricked and lured into giving up their money for as little as possible. Freedom to fix the code yourself. Freedom to share the code and binaries with your friends. Freedom to get it for free. A platform you know won't change in the future on the whims of corporate greed and hype."

    And your comment brings up a serious question here. The freedom you get with Linux is great, there's no denying that. But can you sell Linux to the average user on the fact that it's free (as in speech)?

    I don't honestly think that the average user would even care if Linux came with source code or not. Think about it here for a second. What good is the source code to someone who doesn't have a programming background? Sure, the code is neat to poke through, and may give you an inkling of an idea about how the system runs, but what good is it to the average user? We have to remember that not every Linux user is a programmer.

    You may be able to sell Linux to the masses based on the fact that's it's free as in beer, but if you try selling it based on the fact that it's free as in speech, people will get confused and say, "So what?"

    --
  • by peterjm ( 1865 ) on Saturday August 19, 2000 @08:57AM (#842908)
    I recently witnessed this "sacred truth" become a myth after a friend of mine performed this experiment.

    He gave an old laptop to a buddy of his who was in need of a computer. His friend's previous experience with computers was limited to double-clicking on a Prodigy icon on his dad's computer several years ago. "The computer is free," my buddy said, "on the condition that you keep the Red Hat 6.2 that I've installed on there."

    At first he wasn't sure if he'd made a mistake imposing that condition on his gift, as his phone was ringing off the hook ("hey, how do I...?"). But then, after a while, the phone stopped ringing. When the two of them eventually met up again, my friend was left slack-jawed as his buddy talked about joining one of the LUGs he'd seen online after getting the internal modem working.

    So you see, the point of this convoluted little story is that Linux *is* ready for the desktop. Everything is new to everyone at some point, so there's no reason that you wouldn't be able to stick a brand new Linux box in front of someone who's never used a computer and tell them, "hey, this is what an OS is supposed to look like, okay?". But see, that's what Microsoft has managed to do with their billions of dollars for people all over the world. They've said, "this is an OS. This is what an OS does. If an OS doesn't do this, then it's *difficult*. If an OS doesn't do this, then it's not ready for the desktop."
    But that's just crap. Everyone I've seen can and does learn how this OS works. You've got to get over your preconceived notions of what an OS is and go find out for yourself. People are willing to learn. I've seen it.

    -Peter
  • Raunchola wrote:
    You're basing your claim that "Linux is ready for the desktop!" on the experience of one person? ..... Why not see if the average person can get past a Linux installation?
    A short while ago I upgraded my dual-boot box from an old Cyrix 166 to a P3/450. After swapping boards, I booted both OSes.

    The Linux side (RH6.1) took a couple of minutes and noted that the mouse had moved, and a couple of other things. After that, everything was fine.

    Windows, on the other hand, took over half an hour and a handful of reboots, after which it was STILL having trouble. It was a couple of days later before I had all the pieces of the Windows side patched back together.

    My first foray into Linux occurred because I was handed a Windows laptop that ate DAYS of my time trying to get it to work with a simple PCMCIA ether/modem card. I got to the point where an elaborate ritual was needed every time I put the box to sleep. After installing a few patches, I could put the machine to sleep, but it crashed every time I tried to shut it down(!).

    I installed RH5.1 on the laptop and spent the evening hunting down appropriate drivers. This process was FAR easier than reloading Windows. Once installed, Linux was FAR more stable than Windows. I later upgraded to 5.2.

    My roommate at that time was a Windows geek. He loved windows. He thought it was the best thing since sliced bread.

    He spent 6 months as an MS-Windows install expert. Every once in a while, he'd come home with frustration all over his face over an install that was simply NOT working. As someone who specialized in MS-Windows installs, he would sometimes spend a whole day trying to beat a machine's install into submission.

    When a new roommate moved in (a complete non-technophile), we started on Windows, and I weaned him over to Linux. This was mostly for my own sanity, since it was far easier to give him his own login than to f*ck around with the Win95 users kludge. It wasn't long before he was glorying in how usable and stable Linux is. I think he almost forgot that the computer even ran Windows. (I created a 'win95' command that allowed him to automagically flip over to Windows. Beyond when I showed it to him, I don't think he ever used it.)

    My newest roommates are also relative computer neophytes. I gave them logins, installed the RealAudio extensions, and let them loose. The hardest part was getting them started over the phone (I gave them nasty passwords); I got one running with a text editor over the phone. Since then, I haven't gotten any complaints.

    In a recent job, we installed dozens of Linux boxes of various configurations. Other than driver hunts for esoteric hardware, installation either went like a breeze, or the problem was traced to bad hardware (firewalls and VPNs were a different story). A recent addition to our group was such an MS groupie that he helped write a book about Win2000. He actually complained when it looked like we were going to force him to keep Windows on his desktop. He solved the problem by installing VMware.

    A different group in the same company was responsible for NT/95 installations. When their chief installer wanted to install Linux, we gave him a spare install CD and didn't worry about it. It was actually that easy. He still complains about NT/95 installations.

    SUMMARY
    Windows installs are a pain, Linux installs are a breeze, Linux stability makes for user happiness. The only way that MS can get away with even claiming that Windows installs are easy is that they have people like my first roommate who pulled his hair out so that customers could be handed a nice, clean, working machine. As long as I know that I've got the apps available I'd rather hand someone a Linux box than a Windows box -- especially if I'm going to have to support it later.

  • There's a ton of things in this document I object to. For instance, under the multitasking section ("4.1 Linux multitasks only as well as Windows or Mac"), they say "Microsoft and Apple would have you believe that their operating systems multitask (run more than one program at once). Using the term loosely, they do. Using the term strictly, they task-switch only."

    Later, he goes on to say that "Microsoft Windows and MacOS started as CMT systems, and have gradually moved towards a PMT model, but still retain some of their CMT roots. Most notably, their graphical infrastructures retain structures that require cooperative/sequential behaviour." I love that phrase, "graphical infrastructure." What a load of BS. What he neglected to mention is that all the windows for the OS itself are controlled by the same process. When that process is busy, you may have problems moving its windows around, but you should still be able to control Winamp.

    On a multitasking sidenote: Under Windows 2000, you can now do things like format a floppy and drag a window at the same time. Apparently they finally figured out that you don't have to stop everything on the system to read/write a floppy.

    I also like this particular statement a lot: (Under 4.2 Linux crashes frequently) "Programs can never crash the system under Linux, because of the way it's built with things like memory protection, instruction monitoring, and other devices built into any true kernel."

    Well, bullsh*t. I've run programs and had Linux crash more than once. Where do you draw the line between the application and the OS? Both Linux and Windows use a C library (in Linux's case it's libc, like the other Unices; on Windows it's MSVCRT??.DLL) which, if buggy, can take down the system. Granted, Linux is more stable than NT or 2K in my experience, but it's not like I've never run a user-space program (nearly everything is user-space, of course) which took down Linux.

    Too bad we don't live in a perfect world.

    And finally, my very favorite entry in chapter four: 4.10 Linux is fragmenting.

    And I quote:

    Difficult to answer this one as there is no credible evidence that this is happening. For the record we may point out that although the commercial Linux distributors differentiate their products, they are compatible and the various companies work together in friendly manner that is uncommon of the software world. They adhere to a common Filesystem Hierarchy Standard (that which determines the layout of the system), and they use kernels and libraries from the same series. The misconception that a standalone package has to be distributed in a different version for each Linux distribution is pure FUD.

    This is nonsense. There are now more Linux distributions than there are Linux users. While there exists a specification for the ideal shape of the directory hierarchy, most distributions follow it only loosely, or not at all. Red Hat, Slackware, and Debian all disagree as to how things should be done, with most of the distribution-packaging companies falling in behind one or another of them. It is getting better; it's still not converged.

    Now, technically, Linux is a kernel and Red Hat is an operating system, but Linux (the kernel) is not very useful by itself in most applications, so you must look at the distribution as a whole. "Linux" also refers to the mass of all distributions. This is the basis of my disagreement with this guy.

    In the end, though, this article of his seems to be predominantly opinion garnished with the occasional fact. He does not cite sources, and so will end up looking just as silly as Micro$haft, except for one thing: he doesn't have a big name. If he intended this to be a rebuttal to M$, then he went wide. If you don't have a big name, you have to look credible, and this doesn't.

  • 1. Linux may not be a nightmare to install, but it is still a nightmare to configure.
    I have to partly agree to this one, as it took me a few DAYS of fiddling with the XFree86Setup horror under Xfree 3.3.6 to get my monitor to work.
    However, there is no reason to believe that things aren't getting better. When I upgraded to Xfree 4.01 there was no need anymore to fiddle with the scan
    frequencies in a poorly documented text file.
    >>>>>>
    Undoubtedly things are getting better (though in the case of XFree86, configuration actually got WORSE from 3.3.6 to 4.0), but it is nowhere near Windows yet. XFree86 is a cinch to install. However, try to install ALSA with an ISA sound card, configure networking for multiple ethernet cards (which most distros don't handle: Mandrake 7 does it; Red Hat 6.1 and Slackware don't), get NAT working, install new drivers, etc. All of these are significantly harder in Linux than in Windows.

    2) Linux multi-tasking.
    You said that the Windows multitasking feels smoother, though I personally doubt most people can tell the difference between 20 msecs and 50 msecs.
    What I am more concerned about is overall system stability which is abysmal under Windows 9x and not quite perfect under Windows NT, while Linux
    systems are known to go on for years without crashing.
    >>>>>>>>>>>>>>
    Have you USED Windows NT? (4.0, not W2K.) People CAN tell the difference between 20ms and 50ms (and whatever other system things keep NT's multitasking smoother). They might not be able to say "oh, this is 20, and this is 50," but they can tell, "hey, switching between apps feels smoother." With NT, I can be running a ridiculous load (3D renderer, image editor, IDE, MP3s, etc.) and switch between apps quickly and easily. More importantly, there is little redraw and the system doesn't FEEL loaded. Linux, to me, feels much more like Windows 98 than Windows NT: apps just take longer to start up, longer to switch, and the system actually FEELS loaded under heavy load. It's a perception thing, and perception matters. In the early days of BeOS (when the OS was very immature), Be used to impress people with how fast the UI was. Back then they used tricks like really small quanta to make the OS "feel" fast. (Of course, now that BeOS has matured, it really IS fast, without UI tricks.)

    3) Linux IS too huge
    Heh, I doubt Windows 2000 is much better. I agree that X is a bit of a mess and so is Mozilla, but KDE or Gnome along with the Linux kernel and modules
    don't seem to use more than 10 MBs or so of RAM on my machine. This leaves room for X, Netscape, and a few other things to run quite comfortably on a 64
    MB system.
    >>>>>>
    That's the problem: Windows 2K ISN'T much better, but NT4 is a LOT better. At startup, my NT4 machine was using around 18 megs. After initializing both KDE2 and the GNOME libraries, my Linux machine is pushing the high 30s. NT4 + IE + two simple apps takes up a LOT less RAM than KDE + GNOME + X + Mozilla + Linux + two simple apps (one KDE, one GNOME).

    4) Linux IS playing catch up
    It may be true that Linux is in the process of implementing some functionality that has been lacking, but at least it's not quite hanging off the same ancestral
    x86 MSDOS code like the "consumer" Windows versions do.
    >>>>>
    What's your point? Linux is playing catch-up to both commercial UNIXes and Windows.

    5) Other OS kernels do NOT load everything at the same time.
    I have nothing against Windows' dynamically linked libraries model, except when those DLL files royally screw up your system by being replaced and
    corrupted by random programs.
    >>>>>>>>
    Again, what's your point? DLL files are almost exactly like Linux dynamic libraries (.so files), except Windows inanely installs them all in one directory. My point is that the FAQ makes it seem like Windows is this big monolithic OS that has all drivers compiled in and loads everything into non-pageable kernel memory whether or not it is needed. That's simply not true.
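    The on-demand loading both systems do can be seen from Python's ctypes, which wraps the same dlopen()/LoadLibrary() mechanism this paragraph is about. The sketch assumes a Unix-like system where the C library can be located.

```python
import ctypes
import ctypes.util

# Locate and load the shared C library by name, the same way the dynamic
# linker resolves a DLL or .so at run time.
name = ctypes.util.find_library("c")     # e.g. "libc.so.6" on Linux
libc = ctypes.CDLL(name or "libc.so.6")  # fall back to the usual soname

libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
n = libc.strlen(b"shared libraries")
print(n)  # 16
```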

    6) Linux DOESN'T take full advantage of hardware.
    Again, there might be a little truth in this, but the main problem here is the closed standards. For example, I am forced to run an alpha version of a
    reverse-engineered CLM driver for my winmodem, which happens to be slow and a bit buggy. Yet because the driver was open source, I was able to go in and
    work out an annoying bug myself (which locked up the whole system on a modem retrain) instead of having to pay $50 to M$ support just for the privilege of
    dialing their number.
    >>>>>>
    What does OSS have to do with anything? I stated that, because of the design of DirectX, Windows apps take advantage of more hardware features than Linux apps.

    7) Linux threads aren't all they are cracked up to be.
    I personally can't argue much here as I am not an expert in the Linux vs. Windows vs. BeOS thread architecture...
    8) Linux really isn't that fast, depending on what you do.
    I agree with you that X is a speed bottleneck, but that is because of its client server model. Also as I said before, my Internet experience with Linux has been
    somewhat slower, but probably because of the driver I use. My machine feels just as fast or faster for pretty much anything else.
    >>>>>>>>
    X isn't slower because of the client-server model; X is slower because it is bloated and not designed with a high-powered client in mind. BeOS uses a client-server model as well (in fact, theoretically BeOS's display should be slower due to the use of messaging), but its display is quite fast. GDI is decent (faster than X), but only because it is a DLL written in ASM and implemented in kernel space.

    9) The Linux desktop IS clunky
    I wouldn't exactly call it clunky, a little less intuitive perhaps. Overall it offers the same functionality, and much more (i.e. multiple virtual desktops) over
    the standard Windows desktop. I am sure that a few years (or months) of evolution can fix that.
    >>>>>>>>>>>
    I never said that the Windows interface WASN'T clunky. The Linux desktops do tend to be slightly clunkier than the Windows one (due to less use of context help, right-click menus, and the like), but both pale in comparison to the Mac GUI and (because it takes a lot of cues from the Mac) the BeOS GUI.

    Also, you keep saying that these are problems a few more months of evolution will fix. That may be true. However, just because something will be true in the future doesn't mean it's true now.
  • Bah, humbug. Give all the evidence you want; my evidence is this: for desktop usage, BeOS (which uses fine-grained locking and multi-threading) is appreciably faster than Linux. And ask any BeOS coder: BeOS code is not the least bit hard to design or write. Once you get into the mindset of writing reentrant code, you'll wonder how you ever managed to write that awful non-reentrant stuff. It's all in how you use multi-threading. There are some things that are stupid to multi-thread; other things (AI or graphics) become EASIER when multi-threaded. The problem lies with people who use multi-threading in situations where it would be unnatural. However, there are a lot of situations in an OS where multi-threading makes sense. For example, the BeOS windowing system spawns a thread for each window; not only does this improve performance, it actually makes sense. All of this pales in comparison to the fact that multi-threading is still the best way to get increased performance on SMP machines. Given that most PCs these days have a host of specialized processors, the argument for multi-threading becomes that much stronger. (Think of multi-threading as asynchronous I/O for your sound processor and graphics processor.)
  • I don't read MS's thread hype, I read Be's ;) From a desktop perspective, there are a lot of things that actually become easier when multi-threaded. In fact, MS desktop apps become better multi-threaded: having a separate thread per browser window, having separate threads for each client of a display engine, having a display thread and a physics thread in games, having background threads that do autocompletion and the like in an IDE, etc. The vast majority of desktop apps really do benefit from multiple threads. BeOS is heavily multi-threaded, and it is really easy to write multi-threaded applications with its API. Locking and such becomes almost second nature to you. If multi-threading is used where it makes sense, it actually makes the app easier to write and maintain. Fortunately, those opportunities are everywhere in desktop apps. Either way, if multi-threading doesn't complicate an app too much, it should be implemented. (Again, I'm talking from a desktop POV.) Not only do more and more people have SMP machines, but more and more computers have multiple specialized processors. Multi-threading is the easiest way to extract more performance from those machines.
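    The "background autocompletion thread in an IDE" pattern mentioned above can be sketched roughly like this. Everything here is hypothetical for illustration: the word list, the `complete` helper, and the queue-with-sentinel shutdown protocol; Python's `threading` and `queue` modules stand in for the BeOS API.

    ```python
    import queue
    import threading

    # Toy completion dictionary; a real IDE would index the project's symbols.
    WORDS = ["thread", "threshold", "throw", "table", "tango"]

    def complete(prefix):
        return [w for w in WORDS if w.startswith(prefix)]

    requests = queue.Queue()   # UI thread pushes prefixes here
    results = {}               # worker publishes matches here

    def worker():
        while True:
            prefix = requests.get()
            if prefix is None:          # sentinel value: shut the worker down
                break
            results[prefix] = complete(prefix)

    t = threading.Thread(target=worker)
    t.start()
    requests.put("thr")    # the UI thread just enqueues and stays responsive
    requests.put(None)
    t.join()
    # results["thr"] is now ["thread", "threshold", "throw"]
    ```

    The point of the design is that the expensive lookup never runs on the UI thread; the queue is the only shared state, and it handles its own locking.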
  • Windows 95 is still built on top of DOS. The 32-bit parts of the OS are pre-emptively multi-tasked. However, they have to go through a 16-bit cooperatively multi-tasked core.

    Therefore you can run a whole lot of 32-bit apps and it looks pre-emptive. But run a single 16-bit application from Win 3.1 days in the mix, and the illusion evaporates.

    Cheers,
    Ben
  • Oh, I figured out your problem with the MP3s. The default media player in BeOS is a bit flaky; it tries to load everything into memory at the same time (instead of streaming). If you download SoundPlay 4.0 from BeBits, the performance is a lot better. (Thank god, I thought 5.0 was worse than 4.5 in this regard.) I can play 14 MP3s in SoundPlay without maxing out the processor on my 300MHz, 128MB machine.
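    The load-everything vs. streaming distinction above can be sketched as follows; `load_all` and `stream` are hypothetical helpers, not SoundPlay's actual code. The streaming version holds only one small buffer in memory at a time instead of the whole file.

    ```python
    import io

    def load_all(f):
        # What the flaky player does: pull the entire file into memory at once.
        return f.read()

    def stream(f, chunk_size=4096):
        # What a streaming player does instead: hold one small buffer at a time.
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

    data = b"x" * 10_000               # stand-in for an MP3 file
    chunks = list(stream(io.BytesIO(data)))
    # Memory high-water mark is one chunk (4 KiB), not the whole file;
    # with 14 files open, that difference multiplies.
    ```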
