Debian

Interview: Debian Project Leader Tells All

There are over 500 Debian maintainers today, up from 100 only a few years ago. Wichert Akkerman has been Project Leader for this brilliant, sometimes unruly (but always interesting) gang since February. Monday you posted questions for Wichert. Today you get answers. (Lots more below)

1) Packaging Front End
by Christopher B. Brown

Considerable improvements have gone into the "back end," apt-get; while there has been some experimentation with gnome-apt and console-apt, there doesn't yet seem to be anything that unambiguously improves on dselect in terms of functionality.

With the things that have been learned from those attempts, is there likely to be some sort of dselect-ng?

Wichert:
I really hope so. The reason that we don't have a super-glitzy-totally-awesome apt frontend at the moment is that we have nobody who is willing and able to invest the time and effort into making it. Unlike a commercial distribution, we can't just say `oh, this would be cool. You there! Write this for us and we'll give you some money.' Somebody has to decide for himself that it is an interesting project and make it. We can only encourage people to do something and be very thankful when they do. At this moment the only interactive frontends (that I know of) are dselect, gnome-apt, console-apt, Corel Update (formerly/also called get_it) and aptitude. I would like to ab^H^Huse this opportunity to invite people to write a good frontend or finish console-apt. The apt library is really powerful and does everything you want it to do; the only thing missing is the frontend...
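For readers who haven't used the back end directly, here is a minimal sketch of the apt-get commands that any of these frontends ultimately wraps (the package name is just an example):

    prompt~# apt-get update          # refresh the package lists from the configured sources
    prompt~# apt-get install mutt    # fetch and install a package plus anything it depends on
    prompt~# apt-get upgrade         # install newer versions of everything already installed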

2) When will KDE be included in Debian?
by grrussel

Now that Qt 2 is free software, under the QPL, will Debian include KDE 2 when it is released, based on Qt 2?

Wichert:
Short answer: yes. Long answer: we will include it when we're sure that all license issues have been resolved. The QPL is a major step forward over the previous license and allows you to use Qt in Free Software projects. There is still a slight problem though: it is not compatible with the GPL. This doesn't mean that it's not free, but it does mean that if you want to take code that has been licensed under the GPL and combine it with something that is licensed under the QPL (like Qt), you have a problem. There are two ways to fix that: change the license for the GPL'ed part to add a special clause stating it is okay to do this, or replace the GPL'ed part with something under a different license. KDE has stated that they are indeed going to make or request the necessary license changes, so all these problems should be fixed for KDE.

Once that has been done there is nothing to stop us from including KDE in Debian. Right now we are mostly waiting for the KDE team to release the first beta of KDE 2 so we actually have something to study and package.

More from grrussel
Also, do you feel it is better to keep Linux entirely DFSG-free software only, or to include software that is restricted in some way, such as Pine, Qt 1.x and Netscape?

Wichert:
I'm not so worried about having non-free applications for Linux. What does worry me is the explosion of different licenses. Everything used to be relatively simple when almost everything was either GPL, LGPL, BSD or MIT-licensed. But now we have the MPL, QPL, SCSL, APSL, Artistic, Wine and other licenses as well. It seems that every time someone starts a big project they want to have their own license. The result is more licenses that conflict with each other in various subtle ways. A lot of the licenses are based on the same principles, which suggests that it would be possible to replace them with a common one. I would love to see an effort in which people from various backgrounds try to design a couple of good licenses that are acceptable to both the existing Free Software community and the commercial world.

3) Choose HURD over Linux?
by sanderb

Since you are working on both Linux (established) and the HURD (experimental), could you please tell us what the advantages of using the HURD over Linux would be, once the HURD nears completion?

Wichert:
The HURD is a very different thing from Linux, and we will have to see what the advantages will be. Unlike Linux, the HURD is a microkernel-based system. (Remember the flamewars between Linus Torvalds and Andrew Tanenbaum?) This means that unlike Linux you don't have one big kernel, but a lot of very small objects working together. This gives you a very flexible system where various parts are easily replaced. Right now most of the effort is being put into stabilizing the system and building the set of objects that will offer you a Linux-like interface to it. Once that has been done we will probably see more interesting developments, where non-Unix-like elements are combined with the other interfaces to produce something new.

4) RPM vs. dpkg
by Tet

What are your feelings on RPM vs. dpkg? Would it be better for Debian to add any missing functionality to RPM, and then switch to that?

Wichert:
That's not really possible. The differences between rpm and dpkg are bigger than just the format in which packages are stored; the interesting differences are in the way relations between packages are declared and how the details of package installation and removal are done. For example, rpm allows you to have multiple versions of a single package installed. dpkg does not allow you to do that; it demands that you change the name of the package.

This might seem illogical, but the result is that we always know exactly what is installed without having to worry about which versions of a package are present, and it allows us to upgrade and maintain multiple versions of a library in parallel. Another really big difference is the way package installation is done: with Debian packages there are separate scripts that are run before and after package installation and removal. Using those we can do all kinds of special-case upgrades, handle error recovery in failed and aborted upgrades, etc.
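As a rough illustration of the naming convention Wichert describes (the library name, versions and output below are made up), two sonames of the same library simply become two differently named packages, so dpkg can install them side by side and still know exactly what is present:

    prompt~# dpkg -l 'libfoo*'
    ii  libfoo2    2.1-3    Shared library, old soname (kept for older programs)
    ii  libfoo3    3.0-1    Shared library, new soname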

More from Tet
In what way might Debian users benefit from sticking with dpkg over a modified RPM with equivalent functionality?

Wichert:
It's indeed possible to modify rpm or dpkg, or create a new tool to handle both formats. Once that has been done, the reason to stick to a single tool will mostly depend on what you are used to, which one is the most stable, etc.

More from Tet
From personal experience, the thing that really stood out in Debian was dselect, but that could sit on top of RPM just as well as it does on dpkg. Presumably the same applies to apt (although I haven't looked at Debian recently enough to know about apt).

Wichert:
Apt was actually designed to be reasonably independent of the package format. It's indeed possible to modify it to use rpm packages. One problem with rpm packages is that it is harder to satisfy dependencies, due to the concept of file-dependencies. Instead of being able to say `I need package b to be able to use package a', it can become `I need file /usr/lib/libfoo.so.3.14 to be able to use package a', and then you'll have to find some way to scan all packages to see which ones include that file, and even then you're not sure they have the right version of it.
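To make the contrast concrete, the two styles of dependency declaration look roughly like this (the package and file names are invented for illustration); a Debian control file names another package, while an rpm spec may name a file on disk:

    Debian control file:    Depends: b (>= 1.2)
    rpm spec file:          Requires: /usr/lib/libfoo.so.3.14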

5) Slow release cycle
by Stephen

To my mind, the main problem that Debian has to sort out is its release cycle. It's one thing to have a well-tested distribution by the time it's released, but it's going too far to have packages a year or more out of date still in the current release. What steps are being taken to address this? Or is there an expectation that everyone is happy to use unstable?

Wichert:
The slow release cycle is indeed something we don't like and would like to change. There have been plenty of discussions on possible new approaches, and good proposals have been made. Unfortunately, changing the release process is difficult.

The most popular idea is a new approach we call package pools. Instead of dividing all available packages into distributions like we do now, we put them in a single big pool. Then we create a database with information about all available packages and use that for distributions.

A distribution will then be nothing more than a Packages file which lists the packages in the distribution. This means that we don't need to do things like move packages physically from one distribution to another or maintain a forest of symlinks; all that is necessary is to create a new Packages file. This makes it possible to create more distributions and play with new release systems.

The only thing we would need to do is determine how the list of packages for a distribution is selected. We could use it to keep the stable and unstable distributions we have now, but we could also create new distributions such as reasonably-stable, or even things like stable-with-new-X or stable-with-new-kernel, etc. Basically it gives us a lot of possibilities for creating and managing distributions, which should result in shorter and more regular release cycles.
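A distribution that is "nothing more than a Packages file" would contain one stanza like the following for every package in it (all field values here are invented); apt reads this index to learn what exists, what it depends on, and where in the pool to fetch it from:

    Package: hello
    Version: 1.3-16
    Architecture: i386
    Depends: libc6
    Filename: pool/main/h/hello/hello_1.3-16_i386.deb
    Size: 21928
    Description: The classic greeting, and a good example package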

Implementing this will take time, and we want to do something for the near future as well. What we plan to do for potato once it is released is create update-packs on a regular basis. An update-pack is a set of packages that you can install on top of stable to extend or upgrade it in some way. Examples could be a Y2K pack, a GNOME pack or a KDE pack.

6) Debian GNU/FreeBSD
by ajs

I was looking over the info on the attempt to integrate FreeBSD's kernel, and was shocked to find that the people doing it were using BSD libc! Since glibc was designed with a certain amount of portability in mind, why not port glibc to FreeBSD's kernel? This would seem to make the overall port MUCH easier, as the rest of the Debian code should be far simpler to port to a different kernel platform, but the same libc...

Wichert:
The current idea seems to be to build a small system with the FreeBSD kernel and system utilities, and use the Linux compatibility support to run the already existing Linux userland. With this approach you don't have to work on most of the packages at all, which makes things a lot simpler. The effort is still in the beginning stages though, so things might change at some point.

7) Debian bureaucracy
by Big Dave Diode

I've been using Debian for a long time now, and I'd like to contribute back to the project. However, I've been put off by what looks to me like excessive bureaucracy and some infighting among Debian developers. Are there any plans to streamline the process to become a developer/maintainer, and the developer contribution process itself? What about fostering a more civil peer review process?

Wichert:
I won't argue that Debian has no problems at all. We have rapidly grown (100 people in 1995, 200 in 1997, and 500 now) from a small group of people to a big project. That growth has indeed forced us to establish some rules and set some guidelines. Where Debian started out with a single mailing list we now have over 50 mailing lists, a couple of teams with dedicated functions, a constitution, and policy guidelines.

This may sound excessive, but it's not as bad as it sounds. 99% of all decisions are announced and discussed on lists such as Debian-devel and Debian-project, and everybody is free to contribute to the discussion. Since all lists are public, you will also see people arguing a lot, and emotions can get heated at times, but I wouldn't call that infighting.

The new-maintainer process will indeed be revised. We have a proposal for a new process that will hopefully reduce the load for the new-maintainer team and also help new maintainers. I hope that we'll start using that soon. When we do, we will announce it on the Debian-announce list.

A lot of this is a learning process for everyone involved. How do you handle growing from a small group where everybody knows each other to a large group where you only know a couple of people? How do you ensure that everybody is working together and no conflicts arise? How do you handle managing the necessary resources? How do you handle relations with companies? All very real issues, and we are learning at every step of the road. A very interesting road, I might add.

8) Debian and Pentiums
by MoNsTeR

Is the Debian project planning, at any point, to create a Pentium-optimized release?

Wichert:
No. Some people have stated an interest in recompiling Debian with Pentium optimization (there are even some scripts to automate that). However, we feel it doesn't make much sense, for a couple of reasons:

  • if you compile something with Pentium optimization it will be faster on an Intel Pentium, and (possibly much) slower on all other chips (i.e. AMD chips, other versions of the Pentium, etc.), or it may simply not work at all (on 486s, for example). This means that only a small group of people will benefit.
  • it doesn't really help you: if you look at benchmarks you will see that only a relatively small number of programs benefit from this. So you do a whole lot of extra work and gain very little.
  • if you are that crazy for speed you probably want to look for either a faster system or a different architecture. Debian has been released for the Alpha, for example, which gives you a fast 64-bit architecture. Or you can try powerpc or ultrasparc.
9) Debian lite
(also by MoNsTeR)

Is the Debian project planning, at any point, to create something like a Debian-lite, that includes only a core of packages such as commonly used libraries, X, popular user agents such as mutt, lftp, and lynx, essential and popular server daemons like sendmail, yp[stuff], nfs, and apache...? Basically, a distro of similar size to the more popular distros that fit easily onto one CD.

Wichert:
Yes. This closely ties in with the archive changes I mentioned earlier: with the current system it is hard to create a subset of the distribution, since you would essentially have to create a tree of symlinks into the full distribution, and managing all those symlinks is a difficult problem. Using the package pools, all we would need to do is create a set of guidelines for what should be in Debian-lite and the system would build it automatically.

10) Core
by Anonymous Coward

What do you feel about Corel Linux, Stormix, and other Debian-based distributions? Do you think Debian may eventually form a common core or base OS that others build distributions on top of?

Wichert:
I'm quite happy with all the Debian-based distributions. For both sides it is an unusual situation: the people basing their system on Debian can start off with a complete distribution that they don't have to worry about maintaining, and can focus on their specific goals. Most of the derivatives also release all their changes, which means we get a lot of bugfixes and new things such as graphical installers, management tools, etc. We can use those to improve Debian again, and so the circle is complete.

You could call Debian the common core on which Corel Linux, Stormix and others are based, in much the same way Red Hat is the common core of Mandrake and others. Some people have mentioned it might be a good idea to use the Debian base system as a common core for the LSB. Since Debian is a community effort this does make some sense, since it means nobody will have complete control of it. Whether that is the best approach, and whether others (or even the LSB) would accept such a situation, is something I'm not sure of. We will have to see what happens.

----------

Next week's interview: Curtis Chong, Director of Technology for the National Federation of the Blind, discusses computer and Internet disability issues.

  • by slashdot-terminal ( 83882 ) on Friday December 03, 1999 @07:06AM (#1483109) Homepage
    I have found Linux bliss in Debian. I just wish someone would release a CD a little sooner with some of the more recent packages. I have slink (2.1r3), but sometimes it is a little difficult to get the packages from the net.
  • by wichert ( 6157 ) on Friday December 03, 1999 @07:08AM (#1483110) Homepage
    sorry, couldn't resist :)
  • The question is, if Linux becomes very popular, would it still be Linux?

    There are probably many people who would not like to see the power of Linux in the hands of "the unwashed masses" and leave.

    Will this happen?

    "The mistake of the young is to think that knowledge can replace experience.
    The mistake of the old is to thing experience can replace knowledge"

  • Good morning!

    Just a quick question: What is the meaning of the word "debian"?

    Thanks!

    E
  • Debian is a combination of the names `Deborah'
    and `Ian'. It was chosen by Ian Murdock who started the Debian project.
  • I didn't see any of the questions about Bastille Linux versus Debian, or if there are any plans to try to do a secure version of Debian, so that the Linux community can finally have something that might be approaching the inherent default security that is available with OpenBSD.

    What about it, guys?
  • The question is, if Linux becomes very popular, would it still be Linux?

    I think it would; nothing changes at its core.

    There are probably many people who would not like to see the power of Linux in the hands of "the unwashed masses" and leave.

    For what? For something that has a more difficult way of doing things? I see most people using Linux as a replacement for the Windows experience, for development and such, and not (only) for its arcane nature.

    Will this happen?

    Well, a great deal of universities use Windows machines for general programming in computer science; has this actually hurt the practice of standard ANSI C++ as most people program it? I guess it would if they stressed the Win32 API, but that isn't usually covered in CS courses (at least not in the ones I know of). It may change a few people, but the core of the people who made it a good system will remain. Most of the alternatives are not as vocal about recruiting members as the Linux community; the alternatives are arcane and wish to stay that way. So I see no real decrease of followers.
  • by Ray Dassen ( 3291 ) on Friday December 03, 1999 @07:17AM (#1483118) Homepage
    See http://www.debian.org/doc/FAQ/ [debian.org]:

    How does one pronounce Debian and what does this word mean?

    The project name is pronounced Deb'-ian, with a short e, and emphasis on the first syllable. This word is a contraction of the names of Debra and Ian Murdock, who founded the project. (Dictionaries seem to offer some ambiguity in the pronunciation of Ian (!), but Ian prefers ee'-an.)

    That was a very good interview, and very informative. As a die-hard everything-but-Debian user (by accident; I've just never happened to have used Debian or any of its derivatives), I'm very tempted to drop Debian on a machine or two. The release history seems to suck, but isn't that fixed by using something like apt to update all of your packages? The pool of fresh packages seems like an awesome idea.
    Bastille Linux is a secure addon to Red Hat, I believe. You can find it at http://bastille-linux.sourceforge.net/bastille.html [sourceforge.net]
  • I didn't see any of the questions about Bastille Linux versus Debian, or if there are any plans to try to do a secure version of Debian, so that the Linux community can finally have something that might be approaching the inherent
    default security that is available with OpenBSD.


    If you wish to make Debian secure there are numerous packages in the non-US directory on the main ftp site (and mirrors) which can make it secure. As far as making a "secure distribution" similar to OpenBSD, that would create too many problems with being able to export the standard core part of the OS. Therefore it is separate.
  • That was a very good interview, and very informative. As a die-hard everything-but-debian user (by accident, I've just never happened to have used debian or any of its derivatives), I'm very tempted to drop debian on a machine or
    two. The release history seems to suck, but isnt that fixed by using something like apt to update all of your packages? The pool of fresh packages seems like an awesome idea.


    Quite easy. I use floppies to upgrade most of my packages and I have a reasonably up to date system just by using dpkg -i package.deb. Works great with just a couple of glitches with a few packages that don't want to uninstall (the different povray front-ends).
  • The question is, if Linux becomes very popular, would it still be Linux?

    Depends on your definition of "very popular."

    Nonetheless, I don't think it "would still be Linux" in the same sense. IMO, for it to become "very popular" a solid release/version/what-have-you would have to be released which had support out the a$$ -- that normal idiots could get to work. A version like this is not freeforming and "open," as I see it. So, it would not be the ever-changing linux of today.

    -d9
  • Depends on your definition of "very popular."

    As popular as Windows is, I guess.

    Nonetheless, I don't think it "would still be Linux" in the same sense. IMO, for it to become "very popular" a solid release/version/what-have-you would have to be released which had support out the a$$ -- that normal idiots could
    get to work. A version like this is not freeforming and "open," as I see it. So, it would not be the ever-changing linux of today.


    Hmm... Well, we have "upgrade" versions of Windows 95/98 and we have NT service packs. It could happen. However, that would still not be the only version of Linux out there at all. Debian will always be there. The most likely candidate for my description would be Red Hat or maybe Caldera. This, however, would not really cause it to become stagnant.
    Bastille Linux is a secure addon to Red Hat, I believe. You can find it at http://bastille-linux.sourceforge.net/bastille.html

    Interesting how they named the most secure version of Linux after a rather infamous French prison that was sacked rather easily by a bunch of angry peasants. Oh well, I could have chosen a rather less ironic name.
  • Do you realize you're the "first post" poster in the history of slashdot to be marked up?

    How does it feel, wichert, to have accomplished something previously thought to be impossible to accomplish on slashdot?

  • by autechre ( 121980 ) on Friday December 03, 1999 @07:46AM (#1483136) Homepage
    As a RedHat-only person for a while, I've recently begun using Debian. It's GREAT.

    Basically, you give it a pool of FTP sites, from which you want to choose packages. When you want to install a package, you tell it. If there are any dependent packages not installed, it asks your permission, then installs them. As packages are installed, it asks you questions so that they'll be configured and ready-to-run.

    I use the unstable tree, and it's not :) For the most part, "unstable" right now means "sometimes dependencies might be wrong, oops". I've never gotten a broken package, or had any trouble using potato rather than slink.

    apt is indeed incredibly powerful. I, for one, think that console-apt is DONE. It does everything I need it to, AT THE CONSOLE, with what is (to me) a great interface.
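A minimal sketch of the workflow described in the comment above, with an invented mirror address: the "pool of FTP sites" lives in /etc/apt/sources.list, and apt-get asks before pulling in missing dependencies.

    # /etc/apt/sources.list (example entry)
    deb ftp://ftp.example.org/debian potato main contrib non-free

    prompt~# apt-get update              # fetch the package lists from each listed site
    prompt~# apt-get install gnumeric    # apt asks before also installing the libraries it needs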
  • The release history seems to suck, but isnt that fixed by using something like apt to update all of your packages?
    To some extent. In theory one ought to be able to upgrade one package at a time. But there's a particular problem at the moment as stable uses libc 2.0.7 but the packages in unstable use libc 2.1, and so won't run on a "stable" system.
  • by mochaone ( 59034 ) on Friday December 03, 1999 @07:51AM (#1483141)
    by Christopher B. Brown Considerable improvements have gone into the "back end," apt-get; while there has been some experimentation with gnome-apt and console-apt, there doesn't seem to yet be anything that unambiguously improves on dselect in terms of functionality. With the things that have been learned from those attempts, is there likely to be some sort of dselect-ng?

    Wichert: I really hope so. The reason that we don't have a super-glitchy-totally-awesome apt frontend at the moment is that we have nobody who is willing and able to invest the time and effort into making it. Unlike a commercial distribution, we can't just say `oh, this would be cool. You there! Write this for us and we'll give you some money.' Somebody has to decide for himself that it is an interesting project and make it. We can only encourage people to do something and be very thankful when they do.

    This is the exact reason why a Redhat is needed. Now that they have the market capitalization they don't need to wait until someone wants to work on a particular project. They have talented people on the payroll that they can assign to work on needed technology. The Open Source attitude has been very successful to date, particularly because of the attitude expressed by Wichert that solutions arise from need and interest, but to grow Linux so that it can truly compete with MS and others, it needs to support technologies quickly. No more of this USB in the 2.3 kernel stuff or Firewire being supported who knows when. Linux needs to take off its training wheels and go play with the big boys. Adopting a big business attitude that can coexist peacefully with its Open Source origins will get it there and is Linux's greatest challenge.

  • Indeed. I've long thought there to be a fine line between troll/flamebait and funny, but a first post? I... ah... never mind.
  • Hear hear!

    CmdrTaco, we need the Slash code!
  • To some extent. In theory one ought to be able to upgrade one package at a time. But there's a particular problem at the moment as stable uses libc 2.0.7 but the packages in unstable use libc 2.1, and so won't run on a "stable" system.

    I think that the most important update is the one for libc. Just upgrade the libc version and you will be set. Right now you can just download it onto a floppy and install it as follows:

    prompt~# mount -t msdos /dev/fd0 /floppy
    prompt~# dpkg -i /floppy/libc.deb

    or something similar. Solves all your problems really quickly.
  • I have the feeling you've never actually tried Debian, and I can fully understand your "support" of Red Hat if that's the case. Debian's distribution mechanism (not any specific release, but the development model) is, by far, the most advanced out there. Red Hat won't soon equal the number of packages and flexibility of Debian, billions of dollars or not, because they just don't have the masses of developers.

    I find it odd how you make a training wheels analogy, and single out Red Hat (these days, the "first" distribution of the masses) as the company most likely to take them off.

    --
  • by Chalst ( 57653 ) on Friday December 03, 1999 @08:02AM (#1483146) Homepage Journal
    My initial thoughts about grafting Debian on top of FreeBSD were: uh-oh, the `pile anything on top of each other' approach to package management is coming to the BSD world.

    On reflection I think it is a good idea: getting Debian to work on top of FreeBSD is a good test of the Linux compatibility mode. It's a shame the Debian effort doesn't look very ambitious, but I guess it's a voluntary effort. A port of glibc would be a very good thing for BSD...

  • by jajuka ( 75616 ) on Friday December 03, 1999 @08:03AM (#1483147)
    One constantly hears conflicting opinions on Pentium optimization. Advocates say it will give you a performance hit on 486s and the like, but that you should see an improvement on any Pentium-class machine. Opponents frequently say it won't run at all on a 486 and that you'll see no improvement, or a hit, on AMD or other P-class chips. I'd like to see these benchmarks people keep mentioning.

    As for Wichert's statement that it only really helps with certain programs, I'd like to say the ones it does help with seem to be important ones. I'm currently using a dual PII box on which I have run both Debian and Stampede, and while I didn't sit around doing benchmarks all day, I did notice a very large difference in the performance of KDE. Even if Pentium optimization only helps out with things like KDE, I'd say it's worth it.


  • The apt library is really powerful and does everything you want it to do,



    In fact, the poor and out of date libapt documentation aside, this isn't true. Among other things, libapt isn't good at:

    -> keeping track of automatically performed selections and removals; each frontend has to do this itself
    -> allowing in-progress downloads to be manipulated on a fine-grained level (ie, cancelling individual jobs)
    -> keeping persistent state across program runs: knowing, for example, that the user specifically asked for a particular program to be held back at a lower version than the newest available.

    Not that it isn't a wonderful tool, but it's not perfect -- not by a long shot.

    Daniel

    Shameless plug: I have a very early prototype of an alternative apt frontend at aptitude.sourceforge.net. Please download it and send me comments so I can fix problems in the next version!
    I've been living off unstable for over a year now, from hamm through slink, and now on potato. :-) It did break a few times... some people probably remember that notorious bash/libreadline breakage, for example, and the dependency nightmares when migrating from libc5 to glibc2 (libc6). But other than the occasional glitch, "unstable" is pretty much a stable and up-to-date system. One thing I learned is that upgrading "critical" packages like bash/libreadline, libc6 and dpkg, to name a few, should be done separately from upgrading the rest of the system, and monitored more carefully. This way, you get to immediately fix problems with the critical packages, without having your system crippled by other partially-upgraded packages whose installation stopped because one of the "critical" packages broke something.
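A sketch of the staged approach described above, using the package names the comment mentions (exact names may differ between releases): upgrade the critical pieces in one small, closely watched batch, then let everything else follow.

    prompt~# apt-get install bash libc6 dpkg    # bring just the critical packages up to date first
    prompt~# apt-get dist-upgrade               # then upgrade the rest of the system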

  • Okay....

    Things like this make me happy not to live in America. How about an option on Slashdot where posts can be moderated to 'free speech -1', so we can select not to view the posts that are here only due to free speech (but still view low-rated posts)? The fact is that the poster has no responsibility towards the other people on Slashdot, so the overall experience is ruined for those people. With rights come responsibilities. If you abuse your responsibilities you should lose your rights!

    Just my opinion anyway.

    Nice to see that Debian is moving towards a FreeBSD style packaging system, but looking even more advanced. A hierarchical packaging system would kick ass:

    • All
      • Server
      • Mail
      • Web
      • Other
    • Games
      • Terminal
      • X
      • SVGAlib
    • KDE
      • Basic Components
      • Games
      • Apps
    • Gnome
      • You get the idea

    So you could include as many or as few of the packages as you wanted in a distribution; that would be great.

  • I still don't understand why KDE and related programs can't be packaged under "non-free" and distributed as such. QT doesn't seem that much less open than, say the pine sources or the GIF plugins for gimp. What am I missing here?
  • by Gurlia ( 110988 ) on Friday December 03, 1999 @08:28AM (#1483153)

    Well, there are both sides to the story. OT1H my personal choice of Linux distro will always be Debian, because I think the Debian philosophy is closest to Linux's Open Source roots: everything done by volunteers, no commercial $$$ bottomline driving you to compromise, but genuinely interested programmers and developers adding quality to the system.

    OTOH being a volunteer community means that if nobody's willing to volunteer in a particular area, you've a problem. IMHO this is where commercial distros like RedHat are a good "motivation": they are commercial, so they can hire people to do a job that they see is necessary, but where nobody seems to be volunteering. As a result, the commercial distro gets the new feature/tool, which causes somebody out there to think, "hey won't it be nice if we had a similar tool in our free distro". Bingo, now you have a volunteer to do the job. (Of course, in most scenarios like this we'd probably just adapt the package from the commercial distro, assuming it's GPL'd, so there's no reduplication of effort. Another beauty of Open Source.)

    I have worked with Red Hat, Debian, SLS (forgot about that one, didn't ya), Slackware, Caldera, kha0s, and several other distros. I think, and this is personal opinion, that the Debian way of using apt-get with the .deb files is the best way to maintain machines. I have a crontab entry that runs every night to update the list of available packages, so I can upgrade my system or install a new package very quickly and easily.

    Debian is the Linux for the Community, by the Community. Caldera and Red Hat, on the other hand, have money and developers that can be put on any project (application/utility) that their boss tells them to. I can understand why Red Hat has a nicer-looking installer, but functionality-wise there is no way they can beat Debian. If someone could just write the front end, life would be good for Debian.

    Thanks for listening to my ramblings,
    Scott

    Scott
    C{E,F,O,T}O
    sboss dot net
    email: scott@sboss.net
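For reference, the nightly crontab entry mentioned in the comment above might look roughly like this (the time and options are only an example):

    # fragment of /etc/crontab: refresh the package lists every night at 4:30
    30 4 * * *   root   apt-get -qq update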
  • It's called Alien, and I can't remember anything about it except I found it on Freshmeat a while ago.

    The one time I used it to convert from dpkg to rpm it worked fine.
  • by ripcrd ( 31538 ) on Friday December 03, 1999 @08:32AM (#1483156)
    First off, IANAL, but how about calling it the COSL (see subject) or just CPL (Commercial Public License)? COSL is probably most accurate.
    Now get reps from several companies (IBM, Sun, Troll, Redhat, etc.) that are interested in doing Open Source work and allow one Contract law lawyer each in a room with ESR, Bruce Perens, and the GPL lawyers. Let them hash out the items that they feel a company needs in order to release a product with the source code available! Don't make it mandatory that the license be used for every product, but if they want the Open Source seal of approval or whatever then the source MUST be available!
    The software the license applies to could be shareware, demo, free (gratis), or regular off-the-shelf-pay-me-fifty-bucks software, but the source must be available, as well as a method for submitting bug reports publicly and making public requests for feature improvements/changes. I'm not saying the company would have to accept each and every bug fix to include in the next release, but they should consider them and use their programming standards to fix the problem in a reasonable amount of time. And test that it doesn't break something else. (Hello, Microsoft products.) My idea about feature changes/requests is that if the users have a hard time getting to or using a feature that is used all the time, then addressing that would improve usability. Kind of like ergonomics; also, if the company mistakenly moves a feature around in the menus from one release to the next without improving usability, I want to be able to let them know. (In IE 4.0 the internet options are in one place and in a different one in IE 5.0. Hello MS, are you listening? Didn't think so. They also change the names of features for no reason.) Basically I want the software companies to act differently than MS. Big surprise!
    Now, make the license available on the net for peer review and later for use by any Commercial company that wants to do Open Source software. They can just place their companies name and the name of the software in a blank in the contract. Just like the generic standard mortgage contracts.
    Well, that's my two cents, I'll crawl back in my cave now.
  • Now, I haven't used Debian, but how do the scripts run before and after (un)installs differ from RPM pre and post scripts?
  • by Hawke ( 1719 ) <kilpatds@oppositelock.org> on Friday December 03, 1999 @08:37AM (#1483160) Homepage Journal
    From Above:
    Another really big differences is the way package installation is done: with Debian packages there are separate scripts that are run before and after package installation and removal. Using those we can do all kinds of special-case upgrades, handle error-recovery in failed and aborted upgrades, etc.

    You mean like the %pre, %post, %preun and %postun scripts in RPM?

  • Perhaps Corel and Stormix can assist with the development of new tools for debian package management, such as the GUI or new version of dselect. Or other Debian specific packages. I like the idea of Debian, but it's currently too far out of date for me to use.
  • Just a little bit of advice... Try reading the article before moderating posts. In this case, what you fail to realise and that is glaringly obvious to anyone who read the article, is that wichert is said Debian Project Leader. The fact that he got the first post is quite a clever and amusing joke. So stop marking him as flamebait, troll or offtopic, and understand the context.

    Wish I didn't have to explain all this, but I want to believe the moderation system actually works, and isn't being used mainly by people whose sole concern is to mark posts according to a blind logic of popularity.

    Who cares if it becomes popular? That adds eyeballs, bug reports, etc. It forces software companies to listen to users and create games and apps for Linux. Is this a Bad Thing? If all the volunteers take off it may change, but companies like Red Hat can take up the slack or hire them. Unless you make all your income from Win95 support, it would be a Good Thing for the "Unwashed Masses" to be using it. They may use Linux for Dummies (TM), or something with a standard GUI frontend (how about a Linux LSB Distro), but it would still be something that you could get under the hood and fix, or give them the keys to once they learned what the hell they were doing. "Will this happen?" It already is. Amen, brother.
  • by Anonymous Coward
    would Maginot Line Linux have been better?
    Ok, so he didn't quite have the right analogy, but I think he's right. He also latched onto the fact that there are other things besides the UI that are holding Linux back. There are certain things that are keeping MS Windows alive. If there were an alternative that met these conditions there would be flocks going to it, even average consumers. This is our goal as the Linux community: "Beat Microsoft by being better than Microsoft." These are the things I see:
    1. Software support - The average consumer wants lots of software. This is one of the biggest things holding challengers back. Even Macintosh is held back by this to a large extent.
    2. Hardware support - If an OS isn't up with the new hardware that comes out at a sometimes frenzied pace, MS will sell big because it has the hardware support. People like new toys.
    3. Good installation and updates - Only give users choices they can understand. There are definite classes of users: newbie, partially seasoned, seasoned and power users. They are typically willing to identify what class they feel they're in. Capitalize on their assessment, give them help, give them more online help when they ask, and correct their assessment if you see they're asking for too much un-offered help.
    4. Pretty user interface - The companies who build software with nice interfaces spend many hours studying the average user. I think this is the only way to understand them. They don't think like programmers, and they're not necessarily going to gripe to the people who can change the interface.

    So where does Linux stand? Argue as much as you want, but I think we're behind the curve on all these areas, but we'll catch up. Take off your evangelist hat and ask yourself, "what would my parents or grandparents do with this?"

    Make no mistake, those are big tasks, some of the tasks (like guiding a user at the proper level, polishing a UI.) aren't typically the ones that we as programmers flock to, we'd rather get something done rather than make sure that everyone else can do it too. At some point we'll get more help from the hardware and software industry at large. After all, even MS doesn't write all the software and drivers for its OS. Maybe it takes some money to get Linux to a place so we can get up some more steam. Red Hat is simply the first company with enough money to start working on these things.

  • Agreed, which is why I asked the question.

    The one I've heard most often, and the one that makes the most sense to me is that pentium-optimized binaries will not run at all on 486's. Personally, I don't see this is a problem any more, since I can't fathom that any significant number of linux boxes are 486's anymore.

    As for optimizations not making much of a difference, that's what I said when it was first mentioned to me. I thought, "yeah, right. what can a few compiler optimizations *really* do for most of my programs?" Then I tried it. I tried Stampede, and now run TurboLinux, and I'll tell you, the difference can be huge. I said this in a reply post in the original story, but I'll repeat it here. Comparing MP3 encoding speed using BladeEnc, my Celeron 450 beats my friend's dual Celeron 466 by about 25%. At the time of the test, he was running Debian 2.1; I was (and still am) running TurboLinux 3.6. Lots of people complain about how many CPU cycles XMMS chews up. Me? I compiled it with -O6 -mpentiumpro and top shows each xmms thread using 0.0% of my CPU time. For each program, it might make only a small difference, but it tends to add up.

    MoNsTeR
  • I suspect the first poster to receive a total of 17 moderator points (and counting) attached to a single post...
  • by Anonymous Coward
    ok, here is my question:

    Red Hat makes most of its money by selling user support, correct?

    So what is compelling them to make a quality product that is easy to use?

    If they made a great distribution that had no problems and was easy to use, they would go out of business.

    I say this as a relatively new user who has had a lot of problems with Red Hat that I don't believe should be there.
  • I was pleased to see that the idea of update packages was raised in the interview. It seems to me that one might want a very different upgrade philosophy for certain fast-developing sets of interrelated applications (e.g. GNOME) than for other more stable programs (e.g. tetex). For the former, the rapid pace of improvements in functionality and stability make it desirable to upgrade relatively frequently, even at a small stability cost with respect to distribution testing etc. For the latter, the current "make-it-bulletproof-before-release" philosophy of Debian seems ideal.

    Thus it would be nice to have a way to decouple the upgrade process for sets of applications like GNOME from that for sets of applications like tetex. The upgrade-pack idea seems to be a fine solution.

  • You don't have to release source to be DFSG-compliant (Open Source). You just have to allow the source to be distributed as wide as the binaries. Since Slashdot binaries are not available, Slashdot source doesn't need to be either.
  • but to grow Linux so that it can truly compete with MS and others

    The major reason Linux specifically and free source software in general is more robust and useful than proprietary software is because the authors WANTED to write it, and WANT to work on it, and it's written in public where everyone can see it. Now if RedHat or Corel wants to pay someone to write free source software, that's fine, but it doesn't have the same motivation.

    Quality will suffer. Innovation will suffer.

    Linux needs to take off its training wheels and go play with the big boys

    Common M$ FUD here. Linux got to where it is with your so-called "training wheels" and doesn't need to take anything off to keep on going. Success with Linux is measured in how well it works, not in inflated market capitalizations.

    --
  • The reason for this is that, technically, KDE is illegal. pine and libgif (or gimp-nonfree) are just that - non-free. However, there are legal grounds for modification and redistribution (or lack thereof).

    KDE, on the other hand, is linking GPLd source (KDE) with QPLd source (Qt). The QPL is not compliant with the GPL, however, and you reach a slew of legal problems were this issue to come up in court. Now, like they said, KDE 2.0 should either change the QPL a bit to be GPL-compliant, or should change the KDE license to contain a QPL exception clause (much like the one LyX has right now).

    Read the licence. The Qt lib is definitely not GPL or DFSG compliant. KDE is non-free and belongs in non-free. If Troll Tech feels so inclined to review and change their licence to be compliant with the GPL they are free to do so. Same for GIF and the WU licence.
  • by Anderson ( 8807 ) on Friday December 03, 1999 @09:38AM (#1483177)
    Okay, here's some perspective on this from a computer architecture standpoint. "Pentium Optimization" is only some specialized scheduling to take into account the weird structural conflicts that arise in the original Pentium (and MMX) chip from Intel. This can give as much as a 30% performance increase (that's the number Intel likes to bandy about, so add salt) on integer code. But this -only- applies to the original Pentium and the Pentium MMX. Any other CPU out there gets no benefit, and some (e.g. 486's, AMD K5, Cyrix 5x86) actually slow down either from the increased memory bandwidth utilization, or because their own internal resource usage requirements conflict with those of the original Pentium.

    Now the fun part -- what about the K6, the Cyrix 6x86, Pentium Pro/II/III/Celeron, and the Athlon chips? Well, you can schedule specifically for them as well, but you won't see as large of a performance gain. The reason is that all of these CPUs have much better on-chip dynamic scheduling (out-of-order execution, register renaming, speculative execution, etc.) and thus don't need really good scheduling back-ends to achieve fast performance. This is especially true of the Athlon and Intel P6 core chips -- if you didn't know, the Pentium Pro, Pentium II, Pentium III, and Celeron are all very close architecturally, and are known as the P6 series (and oddly, they have almost -nothing- in common with the original Pentium and Pentium MMX).

    When are these machine-specific optimizations important? Well, for compute-intensive stuff that you execute a lot. So if you want 99% of the benefit of using a compiler that schedules for your CPU, recompile your C libraries, your X server, and any large applications (KDE is a good one). I will say that a lot of the performance difference you probably see between a "Pentium optimized" distribution like Mandrake or Stampede and something like Debian is not really due to scheduling for the Pentium -- it's probably in the makefiles used for compilation, in the choice of compiler code optimizations like -O2 vs. -O3 (the fastest instructions, after all, are the ones you don't execute :), etc. That doesn't mean "Pentium optimization" doesn't help, but it only makes a major difference for the original Pentium and Pentium MMX, AFAIK. The dynamic scheduling hardware in more recent CPUs can in effect "Pentium optimize" (e.g. reschedule) any code they encounter on the fly.

    Also, don't discount the user perception optimization factor. Tiny differences in latency, load speed, and just the knowledge that you're using a "pentium optimized" distribution can make a large perceived difference in the speed of a system, regardless of the actual performance delta.
  • The person who posted this is the same guy that was interviewed. That's right! He's the featured man of the day.

    It's FUNNY. Just mark the thing up to 5 and leave it there. Right now the struggle between moderators is wasting a lot of points.

    This article posted with my default of +2 in an effort to make it appear close to the FUNNY comment. Please leave it at +2 for that purpose.

    Thanks
  • To elaborate on my other post below, while I don't have the numbers to back it up I'd be willing to bet that 95% of your performance difference is both in the -O6 and the version of gcc you're using, not the -mpentiumpro switch (although that helps a bit). Perhaps Debian should be more aggressive in their use of compiler optimizations (standard Debian flags are -O2 -pg, then strip the binaries), but again, I don't think architecture-specific optimizations are the real difference here. Especially not for Celerons, which have robust dynamic scheduling onboard.
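For concreteness, the kinds of compiler invocations being compared in this thread look roughly like the following (file names are placeholders, and flag spellings varied between gcc/egcs versions):

    gcc -O2 foo.c -o foo                     # conservative optimization, close to the Debian default
    gcc -O6 -mpentiumpro foo.c -o foo        # aggressive optimization plus Pentium Pro scheduling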
  • If I understand correctly, OpenBSD audits all source code added to a distro. I can't imagine that too many distros could afford the effort and time this takes, let alone have the capability of doing it properly.

    That said, Debian installs by default or offers a number of critical packages that are not necessarily mainstream (but should be), presumably because security and ease of configuration are being kept in mind. Take for instance, the OpenBSD ftp server, ssh, a number of SSL'd tools like telnet, exim as a mail transport, gpg, mason, etc.

    Debian's security depends upon the maintainers of individual packages, and my impression is that these maintainers are qualified and experienced individuals. Security may not be the ultimate priority, but it is not exactly ignored, either. A great indicator is the speed in which new packages are issued in Potato, especially when a security bug has been reported.

    It seems to me that a separate "secure version" is something to be avoided. Ideally, the default version would already be secure. Remember, too, that most security holes are the result of poor administration. I'd say that the default configurations in Debian are a pretty good balance of usefulness and security. However, an admin who doesn't set up firewalling and lock down unused ports is being naive, and no distro can prevent that. Someone who specifically demands a secure version is probably a little too paranoid about others and should be more paranoid about themselves.

  • I think of this idea like this. Imagine I live in a small town with one doctor/pharmacist. He is the only accessible medical professional. Now imagine that the whole town gets a mildly irritating rash. Not deadly, not even really a problem, just irritating. The doc might not stock calamine(sp) lotion . . . that is until he gets the rash. Then the itch hits him. The problem is that the whole town could be itching, but until he gets the rash, nothing is done about it.

    That's kind of the way that OSS works now. There can be tons of people who have an itch: easy install, simple modem config, etc., but until someone with the skill is added to the group wanting it, nothing can be done about it.
    LetterJ
    Writing Geek/Pixel Pusher
    jwynia@earthlink.net
    http://home.earthlink.net/~jwynia
    *nod* Yeah, I probably should be more specific when speaking about optimization. AFAIK all the distros which do "pentium" optimization include a healthy amount of -O6 type optimization as well, which may well be the source of a good chunk of the benefit.
    Perhaps you can answer something else for me: the scheduling optimizations aside, I was under the impression that a portion of "pentium" optimization was the use of instructions not present on 386 and 486 CPUs but present on the non-Intel Pentium-class chips. Is that true?
  • ...What is your function?

    :)

  • If they don't make a quality product that is easy to use, why would people buy their product instead of Joebob's Easy Quality Linux, which is a quality, easy to use Linux? And if people don't buy their product, then why would they go to them for support, instead of Linuxcare or IBM? Even if they do intend to make most of their money from support, the distribution is what gets them the customers.
    If they had a monopoly, they might be able to get away with poor products, but when there is competition, you need to compete, or you lose.
  • .

    > Grow up. I believe there is a famous quote that to have free speech we must also tolerate the speech we hate, for us to truly be free.

    to which, you (hattig) replied:

    > Things like this make me happy not to live in America.

    Which is ironic, as I believe the quote he was referring to was: "I disagree with what you say, but I will fight to the death for your right to say it", often attributed to Voltaire, a French philosopher. (Of course, it could be said that a great deal of American concepts of freedom come from the French, and vice versa.)

    As to if this quote is correctly attributed, I have some question (I've seen it in print and on a billboard, but some academic types have recently disputed it). But the common attribution is to a Frenchman.

    Are you now happy that you don't live in France?

    --
    Evan (Sick of "geeks" who complain about the term "hacker" in the media slamming someone for their "nationality").

    They've got Debian Linux, the HURD, and FreeBSD; isn't anyone else a little worried that they've taken on too much? In the end, it will probably delay each dist a lot longer than the long-awaited 2.2.
  • The major reason Linux specifically and free source software in general is more robust and useful than proprietary software is because the authors WANTED to write it ... Quality will suffer. Innovation will suffer.


    Possibly - but I think that once the "hard work" is done with issuing the first release (with code of course, since we're talking about Red Hat) then that puts a lot of momentum behind the project. So yes, maybe you'll have "disgruntled Red Hat worker in her cube" (if there is such a thing...) releasing the first version, but then it's out there for all the people who didn't have time to do it from scratch, and the beauty of open source takes over...


    Unless the initial design was fundamentally flawed, I think great software can still arise from a corporate-sponsored beginning.
    ----

  • Debian itself takes security pretty seriously. Often it is Debian releasing a security related bug fix long before other big distributions. There was a recent bug fix which RH trumpeted but which had been fixed by Debian more than a year previously.

    The only time I have had one of my Debian machines broken into is when I had failed to heed the Debian Security Advisories.

  • "The reason for this is that, technically, KDE is illegal."

    Nonsense! This is just Redhat propaganda. If it is illegal, what grounds would you give a judge to issue an injunction on?

    The -old- Qt license gives explicit and upfront permission to link Qt to GPL apps. So KDE is in full accord with the old Qt license. KDE is KDE is KDE (a is a), thus KDE is in full accord with its own license. So what's the problem?

    Debian's problem with this is that they have too many chiefs that they have to please.
  • by HomerJ ( 11142 ) on Friday December 03, 1999 @10:37AM (#1483195)
    In the earlier article my question was basicly "With Slink being out of date, and Potato having a flaky install, how does one do a fesh install of Debian now, and have all the recient packages (Xfree 3.3.5, 2.2.13, etc.)

    Here's what I did, and thanks to the reply to my post that was helpful:

    1) Installed Slink, with all the packages I wanted. (installed flawlessly, as it should)

    2) Compiled my own kernel. I tried to do skip this step and alot of the Potato packages chocked because I wasn't running a 2.2.x kernel. SO I had to start over.

    3) changed /etc/apt/sources.list to unstable

    4) apt-get update udates your package database to the latest version that are in unstable.

    5) apt-get install dist-upgrade upgraded all my packages that I installed with Slink. Updated packages included XFree86 3.3.5, Enlightenment DR16.3, Latest GNOME, etc. It failed to to update a couple packages, but they weren't needed and I just apt-get remove

    6) Went though the set-up process. Uncluding XF86Setup, untar'ing my old home directory, etc.

    That's about it. I have a full Debian distro with later stuff then I even had with RedHat 6.1 Apt-get served me so far when I didn't have libgnome-dev(compiled a couple gnome aps). Unstead of looking for RPMS like I did with RH, I just typed apt-get install libgnome-dev and waited a few minutes and had then already installed and read to go.

    Been running fine for 4 days now, and it's great.

    I encourage anyone thinking about Debian to give it a try. My only gripe is that I didn't install it months ago =)
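    (For readers who want to follow the same path, a condensed sketch of the commands involved is below. It assumes a working Slink install with a 2.2.x kernel; the mirror URL is only an example, use one near you.)

        # /etc/apt/sources.list -- point apt at the unstable tree
        deb http://ftp.debian.org/debian unstable main contrib non-free

        # refresh the package lists and pull everything up to unstable
        apt-get update
        apt-get dist-upgrade

        # afterwards, individual packages install the same way
        apt-get install libgnome-dev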
  • The status of the -old- Qt lib does not put KDE in the non-free directory, only in the contrib directory. Bone up on your debian policies.

    The new Qt lib is fully DFSG compliant. GPL compliance has nothing to do with it, and you can find hundreds of non-GPL packages in the dist.

    Debian thinks that there is a license conflict, but this conflict is free versus free, so nothing belongs in non-free. Perhaps they need a new directory called "politically incorrect".

    But why should Qt be made GPL compliant? After all, it is the GPL that is incompatible with the QPL, and not the other way around.
  • You mean like the %pre, %post, %preun and %postun scripts in RPM?

    You're ignoring the bit where Wichert says "special-case upgrades, handle error-recovery in failed and aborted upgrades, etc." Debian postinst scripts can each be called in multiple ways, with informative parameters. So for example, if all files in a package are replaced by files from other packages and the package "disappears", its postrm is run with parameters that let it know what's going on. Or if you install a package that is replacing a package it conflicts with, and the new package's postinst fails, the conflicting package's postinst gets a chance to deal with this situation. There are all kinds of complex situations like this, enumerated here [kitenet.net]. I'm not aware of RPM scripts having access to such detailed information.
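    (To make the difference concrete, here is a minimal sketch of what a Debian maintainer script looks like from the packager's side. The argument conventions come from Debian policy; the script body itself is invented for illustration, not taken from any real package.)

        #!/bin/sh
        # postinst for a hypothetical package: dpkg tells the script *why*
        # it is being run via $1 (and sometimes a version in $2)
        set -e
        case "$1" in
            configure)
                # fresh install or upgrade; $2 holds the previously
                # configured version (empty on a first install)
                ;;
            abort-upgrade|abort-remove|abort-deconfigure)
                # something went wrong part-way; restore a sane state here
                ;;
            *)
                echo "postinst called with unknown argument \`$1'" >&2
                exit 1
                ;;
        esac
        exit 0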


    --
  • I'd have to disagree with you here... it's definitely the architecture-specific optimizations. I also have a Celeron 450, so I decided to perform a test: xmms compiled with -O2 uses more CPU than xmms compiled with -O2 -march=pentiumpro, which does more than optimize: it uses PentiumPro-specific instructions and *will not* run on anything else. Compiled that way, it never shows up as using any CPU cycles at all, which is obviously ridiculous, but I guess it's just falling beneath the tenth-of-a-percent threshold.
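    (For the curious, the two builds being compared differ only in compiler flags, something like the following; the file names are purely illustrative and not the real xmms build commands.)

        # generic build: runs on any i386-class CPU
        gcc -O2 -o xmms-generic xmms.c `gtk-config --cflags --libs`

        # PentiumPro-tuned build: emits ppro-only instructions, won't run on a 486
        gcc -O2 -march=pentiumpro -o xmms-ppro xmms.c `gtk-config --cflags --libs`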


    Supreme Lord High Commander of the Interstellar Task Force for the Eradication of Stupidity
  • 21 now, and back down to 2 from 3. Jeez. I wish I hadn't blown all my points this morning; I almost never ever mark anything as funny, but this deserves it big time.
    ~luge
  • As you note, the GPL is incompatible with the QPL (though the KDE developers can add an exception to the GPL to allow linking against Qt). It isn't a problem with Qt 2, which satisfies the DFSG. The problem is that it is not legal for Debian to distribute KDE without a license change.
  • I didn't sit around doing benchmarks all day, but I did notice a very large difference in the performance of KDE. Even if Pentium optimization only helps out with things like KDE, I'd say it's worth it.

    Same here! I don't have a dual PII. I used to have a Cyrix 200 and 2 weeks ago upgraded to an AMD K6-2/300. (So much for slowing down non-Intel CPUs.) (Oh, I'm running Mandrake, not Stampede.) I noticed a significant improvement in KDE performance in Mandrake compared to SuSE or Red Hat.

    Secondly, you can't really run KDE on a 486 anyway (even KDE developers admit it). Pentium optimizations for just KDE (or just GUI for that matter) would help.

    Finally, from reading comp.os.linux.mandrake, I heard that Mandrake actually does run on a 486, despite what some people claim.

    That is not to say that I want 486-optimized distributions to die. I am happily running Debian 2.1 on my 486, which I use as an IP masq gateway and an FTP/Samba server.

  • Just one more point to add:

    Compiler scheduling can make a difference on an out-of-order machine. After all, the instructions are still fetched and decoded in order. Your compiler can schedule for decoder hazards and latency. This might be useful on a P6-style machine, where the decoding of different types of instructions uses different resources (there is a limited number of simple, complex, and microcode decoder engines).

    The compiler can look much further down the code than the limited instruction window of an out-of-order processor. So even traditional scheduling can help on an O-O-O machine, because the compiler can grab code from "far away."

    --

  • This problem could tie in very nicely to the new package system. There could be two versions of every distro: a Pentium version and an AMD/x86 version. I, for one, installed Debian on my 386 w/ 4 megs (Dad won't let me partition the main computer :-( ), and I would be very disappointed if Debian made their distribution Pentium-based.

    OK, maybe I'm a special case, but lots of other companies use old 486's for routers etc while using Pentium workstations. They would also be very happy with the new package pool + Pentium/non-pentium distros.
  • Comparing MP3 encoding speed using BladeEnc, my Celeron 450 beats my friend's dual Celeron 466 by about 25%.

    Important point: dual or not dual is irrelevant in this test. BladeEnc is a single process and will use only one CPU. In order to use both CPUs you'd need to run several instances of BladeEnc simultaneously (i.e. encode at least 2 files at a time).
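    (Concretely, a minimal way to keep both CPUs busy is just to launch two encoders at once; the file names below are made up and the exact BladeEnc invocation may differ.)

        # run two independent encodes in parallel, one per CPU
        bladeenc track01.wav &
        bladeenc track02.wav &
        wait    # block until both background jobs finish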

  • by Anonymous Coward
    The thing that gets me is that they say: "This only helps those on Pentium boxes, and won't be of any use to those on non-Pentium boxes. However, you might want to get an Alpha and try the Alpha port." This is like saying: "The Alpha port only helps those running Alphas, and doesn't do any good for x86 users. Therefore, since it won't help out x86 users, we're not gonna do an Alpha port." I'm not advocating that the x86 port be turned into a Pentium-optimized port; instead, keep the generic x86 port, but have a Pentium-optimized (along with a K7-optimized, etc.) port and treat them the same as a Sparc, Alpha, or PPC port.
  • It grows faster than anything...

    Ah, but that's Ackermann's function. Sorry.
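    (For reference, the function the pun alludes to is the standard two-argument recurrence below; this is textbook material, not something from the thread. It grows faster than any primitive recursive function.)

        A(0, n)     = n + 1
        A(m+1, 0)   = A(m, 1)
        A(m+1, n+1) = A(m, A(m+1, n))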

  • "The problem is that it is not legal for Debian to distributed KDE without a license change."

    It's perfectly legal. If you don't believe me, just ask any KDE developer whether you have permission to redistribute their work. Every one of them will say yes.

    That there is a possible confusion over whether the GPL allows GPL applications to be redistributed is mind boggling to me.

    Taking a pre-existing GPL application and modifying it to run with a QPL library could possibly be against the license, but that is not the case with KDE.
  • The problem with the GPL and a lot of other licenses, open source or not, is this... they're too complicated and restrictive... the opposite of what 'free' and 'open' mean.

    The optimal OSS license would be this:
    One may do whatever they wish with this source code and may take credit for only those changes which they make, if any. The source code of the original unmodified software must remain available, and the modifications must acknowledge that they are based on the original software; however, the changes made to it may be put under any license the developer wishes.


    whatcha think?

    Personally, when I write software, I say in my license that this software is now YOURS; it is your property, and you may do what you wish with it: change it, copy it, sell it.
  • Well, to run all the unstable stuff (which includes the newest versions of E, GNOME, etc.) you need to have libc 2.1 installed... which is a serious download, and requires upgrading gcc and a few other packages. But I think the implication of this post is that you have to make a choice that your whole system depends on (between 2.0 and 2.1), while in my experience the vast majority of packages will be just fine after upgrading to 2.1.

    In other words, the upgrade to 2.1 seems to me more of a dependency issue (like needing gtk+ and imlib and so on to install E) than a "you can't do this on this system" issue.

    Jose M. Weeks
  • The Pentium did add instructions to the x86 set that earlier processors cannot execute. The PPro added a few more. But the first real performance-enhancing instructions, as far as I understand, were the MMX instructions in later Pentiums and up.

    I'd list a few of the added instructions for you if I had the documentation here, but I don't have it in front of me.

    The vast majority of optimizations on the Pentium (and this is from an assembly-language perspective, so I don't know how well it carries over to optimizing compilers) rely mostly on using the second arithmetic pipeline as much as possible (the first pipeline was always used, the second only in special situations, so ordering instructions to use both as much as possible could theoretically cut execution time in half), careful use of the cache, and avoiding branching (which was a good optimization strategy for any x86 processor).

    The point here is that the best optimization strategy is basically to avoid big, uncommon instructions (this would include the new ones) and stick to ADD, SUB, shifts, and string instructions when possible. Such a strategy will likely do little for 486 and earlier processors, in terms of either faster or slower execution, but it would allow the code to run on such machines.

    Jose M. Weeks
  • Taking a pre-existing GPL application and modifying it to run with a QPL library could possibly be against the license, but that is not the case with KDE.
    By jove, I think he's got it.

    That's exactly the issue with KDE. Like any good open source project, KDE reused code from other projects. That code was GPLed. The original authors specifically chose to use the GPL, and that specifically excludes Qt. (though perhaps the system library clause of the GPL allows Qt, but that's a different issue)

    Now, to make KDE legal they have to relicense their programs where they used the GPL (there is a push for Artistic licensing in KDE apps) or add an exception to the GPL. Adding an exception to someone else's license requires that person's permission. To become legal, KDE has to get the permission of a lot of people. It's not an insignificant issue.

  • > uh-oh, the `pile anything on top of each other
    > approach' to package management is coming to the
    > BSD world.

    I don't know what you mean, but you probably haven't tried Debian. Package management is very fine-grained; when you install a package, for example, a menu entry gets added to every installed window manager.

    > Its a shame the Debian effort doesn't look
    > very ambitious, but I guess its a voluntary
    > effort.

    Are you trolling or just misinformed?

  • I agree with some of what you say; however, I was speaking of a general commercial-vendor Open Source license. The intent of my posting was to elicit work on a license that could fit a variety of situations without adding confusion.

    My intent was to make the vendor the maintainer of the code (duh) and to retain ownership, which could later be placed under the GPL once the product ages, has paid for its development costs and profit, but sales have dropped. Maybe a statement that the purchaser or user could reuse the code for personal use only, or improve upon it for a specific application (i.e. a clustering kernel adaptation). But I would like to see bug reports and fixes submitted to the company for approval for inclusion in the next distribution. This way they could adjust the fix to adhere to internal coding standards; some stuff people write is too confusing to maintain. If the fix doesn't make the next distro, then it would go to the website or FTP site.

    The purchaser would have to keep ownership of the original source of the product (i.e. it can't be taken away), since for special applications or uses it is necessary to keep a copy on file. I can give specific examples if this is not clear.

    I'm not entirely sure how derivative works should be handled, but that's why the pros need to meet. Derivative ideas should be allowed, since you can't control thought and we shouldn't stifle innovation; that would be treated differently from reusing the actual code snippets. Maybe say that parts of the code may be reused, but not in a product competing in the same area as the exact same application, unless it is to maintain standards with LSB, W3C, ISO, ANSI, etc.

    I don't claim to have all the answers, but I wonder just how far apart the existing commercial open source licenses are.

    Free the Source, the rest will follow.
  • Finally, from reading comp.os.linux.mandrake, I heard that Mandrake actually does run on a 486, despite what some people claim.
    For the record, Mandrake will run on an ALi M1489 motherboard with an AMD 486DX4-100 and 32MB of EDO RAM... I couldn't really tell you if I took a performance hit, because KDE is slow on a 486 anyway. (Word to the wise: BlackBox is best on a 486.) But it did seem to run fine.

    Completely unrelated, but the Mandrake I installed defaulted to XDM/KDM which doesn't make me happy...

    Somewhat related, my box is quite snappy for a 486 and KDE runs acceptably slow (no slower than Win95 OSR2) so maybe I am an exception to the rule.
  • Open sourced, my ass. Try recording and releasing your own version of John Lennon's "Imagine" without getting permission and see how fast the lawyers come knocking.


    one down, three to go....


  • I posted a question about this on the debian-devel newsgroup and no one seemed interested, but the best way to get something done is to do it yourself. Get in there and hack some code!
  • Because this isn't FreeBSD, it's *Debian* BSD. The Debian system is based on glibc, and using a single library across all Debian systems would theoretically make it a lot easier to debug problems and so on.
    Also, you'd get some level of binary compatibility if all programs were ELF and dynamically linked against libc -- only direct syscalls would have to be fixed up, and I don't think there are many of those..I've heard that even syscalls (from C) can go through libc, so mainly you'd need to worry about assembly code.

    Daniel
  • by craw ( 6958 ) on Friday December 03, 1999 @02:21PM (#1483238) Homepage
    The pragmatic purpose of moderation is to allow a reader to filter out useless garbage. As you point out, this was funny. I think that it is absolutely hilarious. I believe that most ppl would also find it amusing. Hence, giving it negative points will deprive others of this very unique and funny post. The other nice thing is that it indicates that Wichert obviously knows this site. That in itself is Insightful and Informative.

    Off-topic? Nah, the interviewee just poked fun at himself. When someone is interviewed, you want to get a sense of his attitude. This showed me that he has a sense of humor.

    To me this wasn't elitism in action. It's a great joke; personal self-deprecation. I think it is elitism for the moderators to downgrade this.

    What if a Linus interview was posted, or one with RMS? What if they then submitted, "First Post"? That would be ROFL! It would also indicate that they are reading /.

    I understand what you are saying about being non-discriminatory, but sometimes you need to be flexible.

  • by jajuka ( 75616 ) on Friday December 03, 1999 @02:35PM (#1483244)
    AC wrote:
    The thing that gets me is that they say: "This only helps those on Pentium boxes, and won't be of any use to those on non-Pentium boxes. However, you might want to get an Alpha and try the Alpha port." This is like saying: "The Alpha port only helps those running Alphas, and doesn't do any good for x86 users. Therefore, since it won't help out x86 users, we're not gonna do an Alpha port." I'm not advocating that the x86 port be turned into a Pentium-optimized port; instead, keep the generic x86 port, but have a Pentium-optimized (along with a K7-optimized, etc.) port and treat them the same as a Sparc, Alpha, or PPC port.

    Someone should have moderated this guy up instead of wasting points bouncing Wichert's "first post" post up and down.

    That would be extremely cool, and considerably easier than an actual "port". There is the issue of available disk space for such a project; I don't know if that's easily available or not.

    If they choose not to do something like that, an "apt-get build -mk7 -O6 mypackage" type command would be a good substitute for those with bandwidth to spare.
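    (Nothing like an "apt-get build" exists yet, but a rough approximation is to grab a package's source and rebuild it locally with your own flags. The sketch below assumes a recent apt with "apt-get source" support and a deb-src line in sources.list; the package name and flags are only examples, and whether CFLAGS is honoured depends on each package's rules file.)

        # fetch and unpack the Debian source for a package
        apt-get source mypackage
        cd mypackage-*/

        # rebuild with architecture-specific optimisation flags
        CFLAGS="-O2 -march=pentiumpro" dpkg-buildpackage -rfakeroot -us -uc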
  • "If that's true, they can just change the license to a valid one, right?"

    Even though KDE was written from scratch, there was more than one developer. You still need to get "The KDE Team" to agree.


  • Moderation Totals: Offtopic=6, Flamebait=2, Troll=2, Funny=15, Overrated=2, Total=27

    That's quite a power struggle. Wichert's karma has been having a bumpy little ride, indeed.

    For the record, *I* think it's damn funny.
  • "Qt can link with the GPL, but the GPL explicitely states that you can't link in non-operating system
    libraries. The reason for this was that it was
    trying to close a loophole in the GPL."

    This clause does not refer to the licensor, KDE. And it doesn't say what you think it says. The paragraph in question is talking about source code. It is merely requiring that the source code for all modules be included in the distribution. Whether or not Qt is a module of KDE is beside the point; the Qt source code is already available from the same places you get KDE.

    Simply, it says if you distribute KDE then you should also distribute Qt.
  • by kinkie ( 15482 ) on Friday December 03, 1999 @03:21PM (#1483257) Homepage
    The point about the install-script has already been made in another post.

    I'll object to the others:
    "For example, rpm allows you to have multiple versions of a single package installed."
    That's not entirely true: RPM will allow that by default (provided that there's no file conflict), and this can sometimes be useful if you wish to have multiple versions of a library on your system for backwards compatibility purposes.

    However, the packager is given the choice to state a "conflicts" clause, which supports version information. So to emulate dpkg, packagers should simply assert that some version conflicts with all earlier versions, and the trick is done.

    Now about the other one:
    "One problem with rpm-packages is that it is harder to satisfy dependency due to the concept of file-dependencies. Instead of being able to say `I need package b to be able to use package a' it can become `I need file /usr/lib/libfoo.so.3.14 to be able to use package a' and then you'll have to find some way to scan all packages to see which ones include that file, and then you're not even sure if they have the right version of that file.. "

    Again, this is not entirely accurate: RPM allows a packager to specify a "virtual capability". Simply put, a package foo can assert that it provides a capability bar, and other packagers can express dependencies on the bar capability, rather than on the libfoo file.
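    (For illustration, the two mechanisms look roughly like this in a spec file; the package and capability names are invented, not taken from any real package.)

        # In the providing package's spec: declare a virtual capability
        Provides: webserver

        # In a depending package's spec: require the capability, not a file path
        Requires: webserver

        # And, to emulate dpkg-style behaviour, conflict with all earlier versions
        Conflicts: foo < 1.2-3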

    I'm not saying this to bash dpkg in favor of RPM, but just to set some facts straight.
    For instance, I like dpkg's "recommended" dependencies (as opposed to RPM's only "hard" dependencies).
    I think the two technologies are roughly equivalent, and that Wichert is oh-so-right in saying that in the end the choice boils down to personal tastes, familiarity and stability.
  • Right now we are working on getting everything ready for the potato release (the state of the
    boot-floppies was one of the major reasons for postponing the freeze). Once that has been finished and potato is released we will probably take a good look at our current design and see how we can improve it. A GUI installation is definitely on the TODO list, as are things like hardware detection and automated installs. We also have access to the installation systems from Corel and Stormix, which should make things easier for us.
  • I didn't notice that you mentioned it. Thanks!
    I haven't announced it in any official way yet, simply because I want to get a reasonable amount of functionality (ie, downloading and installing packages) first -- I don't want to announce total vapor on the mailing lists; I feel comfortable doing it on Slashdot, I guess. I'm not sure what that says about me.. :-\

    It has been possible to put a package on hold for ages though, perhaps you missed something?
    Can you actually put a package on hold from /libapt/? I've been thinking that I'll have to write my own routines to handle the dpkg status file if I want to do that.

    Thanks,
    Daniel
  • Which is as it should be. If a user wants to keep logs, she can use shell redirection.
    No, you misunderstand totally (I think Wichert did too; I may not have expressed myself clearly). I don't mean keeping track of what *was* installed, but rather keeping track of what's *going to be* installed. That is: I decide to install package foo, so I tell the frontend to mark foo for installation. The frontend does, and happily marks bar, baz, and frobozz as well. I have no direct way of finding this out via libapt; moreover, the *frontend* doesn't know. The latter problem prevents the implementation of an undo command.

    There's a workaround for this, but I personally think it's ugly. This isn't really the worst problem, though, and libapt is arguably correct in its behavior here.

    -> allowing in-progress downloads to be manipulated on a fine-grained level (ie, cancelling individual jobs)
    Again, this is exactly the kind of thing which should be handled by a front-end. In any case, a user can always hit Ctrl-C during a download, edit the command line, and apt-get will recover quite nicely.
    Perhaps you didn't read what I wrote? There is *no* *exposed* *way* in the libapt API (at least, none that I can find, and I've looked for a while) to terminate a single download process. That is, if the user starts a large download and then decides that everything's ok, but that one Netscape package shouldn't be installed, the *whole* download has to be stopped and restarted.

    The Acquire system is also not very well suited for background downloading (ie, in a separate thread).

    -> keeping persistent state across program runs: knowing, for example, that the user specifically asked for a particular program to be held back at a lower version than the newest available.

    You can put packages on hold using dselect (an annoying program, IMO, but functional), though admittedly I've had apt loudly proclaim it was going to override my hold on a package, and then proceed to enthusiastically upgrade it without even prompting me. I should probably submit a bug report on this, I suppose, but at least it acknowledged the hold was there.


    Yes, and my frontend automatically inherits dselect's selections when it starts (err..the CVS version does, got to upload the tree to Sourceforge one of these days..) but there is no way to modify the saved state from apt (unless I'm mistaken, Wichert said he thinks I am, so I may very well be), which makes it useless as a replacement for dselect at the moment.

    I expect that some of these will eventually be fixed, or that people will yell at me to be more creative in working around them ;), but my point was that libapt is very far from supplying 'everything you could want' to build a frontend.

    Daniel
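    (As an aside on holds outside of dselect: the selection state lives in dpkg's own database, and it can be set from the shell as long as nothing else is holding the dpkg lock at that moment. A minimal sketch, with "netscape" standing in for whatever package you want frozen:)

        # mark a package as held so upgrades skip it
        echo "netscape hold" | dpkg --set-selections

        # release the hold again later
        echo "netscape install" | dpkg --set-selections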
  • I'll see whether that works -- but I believe that apt locks the dpkg status file when told to lock anything (with good reason!), so I doubt that the dpkg call will work. libdpkg might.

    Daniel
  • I don't think this would happen to the degree you are suggesting. It's true that OSS gets some of its power from programmers who work on something they want to. But the "thousand pairs of eyes" factor is a bigger deal, IMHO, and I think it would ameliorate anything but a horrid first attempt.

    You may have answered this already, but what projects are so heinous that no one wants to work on them? I can see nobody wanting to do it bad enough to start in the first place, but to not want to do it at all?

    For example, I'd never take a free Saturday to add FireWire support to the Linux kernel. My free time is too precious, and there are other projects I'd rather spend my time on. But if Red Hat approached me and said "We'll pay you 30k/yr to work on adding FireWire support to Linux." I'd get real interested in a hurry.

    There are a hundred projects I'd never start on my own which I'd still be interested in if it meant I was quitting my Real Job and had time to work on them.

    In addition, I think the "disgruntled Red Hat worker" (to borrow from Booker) is less likely to occur in a free software setting, because many of the Dilbert-esque factors that make programming for a living no fun are far less frequent in such environments:

    • far fewer clueless pointy-haired bosses
    • decisions are made by the ones who understand the technical aspects
    • seldom have to work around someone else's crappy code, since if it's crappy it's probably not adopted by any real distro
    • don't have to do tech support for dumb users (the clientele would largely be other developers, who don't ask things like "What do you mean, shift-click?")
    • ...and probably a dozen others which I can't think of since I don't program for a living

    Anyway, I love my job, but I think I could find any number of less-popular programming tasks very rewarding even if I wouldn't choose to spend my time on them now, limited as it is.

  • Jason, I apologize for the negative and critical tone of the earlier post. libapt is truly wonderful; I was just attempting to point out a few things that could be improved. I'm still not convinced about point #1, and I don't remember getting an answer to #3. I'll go check my mail archives.
    I actually queried the mailing list on #2, but I haven't gotten an answer. I think this is an important ability, mainly because it's required for a really nice (IMO) UI feature I have in mind :) I believe my message to debian-deity has more information. Could you explain at length why this is not a good idea, and what (if anything) could be done to make it a good idea?

    And finally, I'm sorry, but I still don't agree on the documentation issue. I have several times run into stuff that the files in /usr/doc claim exists but that has turned out not to be useful (I believe I actually asked you about InstallVersion and xstatus, and was told that they were experiments that weren't relevant anymore), and it's difficult to find out what any given function does in full.
    Usually I just end up tracing through the source code to find out; a brief collection of documentation that gives a more high-level overview of Workers, CacheFiles, Acquire queues, and all the other stuff would make it a lot easier to get up to speed; while you can guess in general what these do, the specific relationships between them are not always clear and the documentation in the headers is somewhat spotty. If you want, I could actually probably write up some general libapt documentation -- but I have to admit that I'm also guilty of "I'd rather code" :-\.
    How the system works can be pieced together from the source and headers, but that doesn't necessarily mean that the documentation can't be improved.

    In general: it's no secret that libapt, like any piece of software, has missing functionality and is constantly changing. But I think that it still has a little ways to go before it does 'anything you want'. It might never be there, since everyone (as I've just demonstrated) defines 'anything' differently. So maybe my complaining is totally pointless. Or something... :-/

    Thanks,
    Daniel

    I'm really starting to think that that post was a mistake. Chewed out by you and Wichert in the same day..it has to be some sort of record..
  • Isn't Slashdot made in Perl? So how could there be binaries?
  • Here's an idea: let SPI sell Debian CDs and manuals and use the profits from those sales to hire people to write a new dselect (and other things that are needed in Debian) -- just like the FSF develops some of their software.
    There's a common misconception in the world today that free software "simply happens" because someone needs a tool. There's some truth to that of course, but things like GNU libc, GNU CC, and other projects were started and paid for by the FSF because they saw a need for that software.
  • I'll believe that the day Debian officially apologizes for removing kdelibs

    The copy of kdelibs that was removed claimed (in its copyright file) that it included some code that was GPLed. IIRC it turned out that the same code was also available under the LGPL, so this was actually a bug in the copyright file, but given that the copyright file did say that at the time, it was a reasonable thing to do. You'll find that Coolo has recently uploaded a new kdelibs package, which is presumably just delayed in Incoming for the normal admin-bottleneck reasons that mean that ``new'' packages often sit in Incoming for a few weeks. I'm not sure what there is to apologise about.

    Anyway, how does the allegation that we were precipitate in our actions towards kdelibs imply that we are without principle?

    I really cannot comment about RedHat's position on this, but it does strike me (as an outsider) as being somewhat inconsistent.

    The situation with the commercial distributions is different from Debian's. If the owners of one of these companies decide that the chances of a legal case being successfully brought in relation to this are so small that they can be ignored, then they are only betting their own money (and I think that they are on a fairly safe bet, so that's fine).

    If one of the Debian developers makes the same decision, they are betting the money of all the owners of the Debian mirror sites, and all the CD Vendors. Debian doesn't exist as a legal entity, so if anyone gets sued it will be us as individuals. We're measuring the risk using a different standard, which probably explains the different decision.

    Of course, there is also the feeling that the KDE developers have sinned against the GPL, by misapplying it as they have, which probably explains why Debian developers are not being more pragmatic about this. This is a largely emotional response, which is another reason why you'll not persuade the Debian developers to change their minds.
  • I think dselect is very efficient and isn't as difficult as it's made out to be, as long as you read the documentation and familiarize yourself with the keystrokes, which are straightforward. Plus, the functionality is there (or am I missing something?), and I don't find it very annoying. Surely alternatives are good, but there is hostility towards it and, for some reason, it gets blamed for the difficulty of installing Debian.
