Is It Time To Change RPM?

Floris pointed us to an excellent article over at Freshmeat discussing the problems with RPM. It compares RPM to the other alternatives (mainly Debian's apt system) and discusses where the problems are. It's not a distribution war thing, this is a serious problem that needs discussing. Read the story and chime in.
This discussion has been archived. No new comments can be posted.

  • the "do-everything" button was an abstraction. you should have realized that.

    i've read many of your comments, and agree with quite a few of them, but your views on what a package management system should be able to do for the user are off the wall for reasons already expressed by myself and others.


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • The original poster seemed to be implying that we'd run into compatibility problems by making additions to RPM.

    Not at all. What I'm saying is that the pre-existing packages are no advantage, because you'll have to make new ones to support the expanded functionality. Which means that the primary motivation given for trying to make .rpm do these things instead of simply using .debs for their distribution is specious. Why re-invent the wheel?

  • the problem is stagnation. rpm hasn't been moving. I think we could all agree that rpm's have fallen behind. I don't think we should compare deb to rpm's at all. it doesn't really matter. what matters is that people don't get involved. looking at the changelog file, it looks like the only person doing anything is jbj@redhat.com, and it looks more and more like maintenance mode.

    I think a lot of these features would be great. that is the great part about open source software: if people want some of these features, people can always write them and contribute.

  • by Metrol (147060) on Sunday September 17, 2000 @04:22PM (#773095) Homepage
    Firstly, it's not easy to update the ports tree itself

    You're going about it the wrong way there. Have a look see at the FreeBSD Handbook for CVSup [freebsd.org] for more details on this. Also, if you don't already have a copy, go pick up "The Complete FreeBSD" [fatbrain.com] from Walnut Creek. It's an outstanding book, and one that I found to be a much easier read than many of the Linux specific books out there. It has a chapter covering the ports tree that I think you'd benefit from.

    Mind you, ports do have their problems. All in all, I think that it's a far better approach to software distribution than anything else that I've seen. A more in depth discussion of this, which even relates to this thread, was done a few weeks back right here on Slashdot, "Unified BSD packaging system?" [slashdot.org]. One of the concepts brought up in that discussion I haven't seen mentioned in this one is if there is some way to unify all the *nix world rather than just the various flavors of Linux.

    It sure would be cool to see a group work out the best of the best features from the leading methods of distributing software and bring it to all the platforms. Definitely not holding my breath for anyone to actually do this; just a pleasant thought all the same.
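For reference on the CVSup point above: updating the ports tree boils down to a small supfile plus one command. This is a sketch from memory of the stock ports-supfile layout, so treat the host name and paths as assumptions to check against the Handbook:

```
*default host=cvsup.FreeBSD.org
*default base=/usr
*default prefix=/usr
*default release=cvs tag=.
*default delete use-rel-suffix
ports-all
```

Then something like `cvsup -g -L 2 ports-supfile` pulls the whole tree up to date, which is about as close to the wished-for `make update` as it gets.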
  • It is infinitely configurable. But Linux, along with the other unices, is made to be a server - and servers are made to be used by users, not administrators (they have their own user accounts). The users are definitely used to the common directory structure. Sure, you can do whatever you want on your own box, but your users might not know what the fsck is going on.
  • > One famous saying that applies to Linux is this: too many cooks spoil the broth. With the thousands of driver modules, library modules, and executables, Linux is one big mob of code.

    What you're saying makes no sense. Linux is not one big mob of code, it's the exact opposite. The code is neatly divided into pieces, each piece serving a different purpose. This way each team can work on a specific piece of software, with minimum compatibility problems.

    The difference between Linux and Windows in that aspect, is that while Linux packages different programs/libraries/whatever in different packages, Windows "is one big mob of code", with *everything* distributed as one huge package.

    > It's surprising that it's as stable as it is;

    In that case, Linux vs. Windows is just an exception to the rule, right? I suppose you can provide other examples, then ... ?

    > it's not surprising that it's as hard to upgrade as it is.

    Obviously you're not using the right distribution and/or the right tools. I can upgrade my entire distribution to the latest cutting edge software with `apt-get update; apt-get -y dist-upgrade`, then come back later when the download is complete, and take 5 minutes to configure it.
  • > Microsoft doesn't dole out the DirectX SDK out freely (or do they?

    Yes, they do: DirectX 7 SDK [microsoft.com]

    If game developers had to pay for it, it would have seriously put a dent in DirectX becoming the standard Win32 hardware acceleration API.
  • by baywulf (214371) on Sunday September 17, 2000 @03:11PM (#773099)
    In my opinion, the problem with RPM is that the documentation is falling behind. The book 'Maximum RPM' is of very good quality, but it really hasn't been updated since the RPM 2.0 days, and so many things have changed or features have been added. For example, the relocatable packaging feature has been greatly improved. Macros and triggers have been added. The latest RPM can automatically gzip documentation, strip binaries, and do additional checks. All this leads to RPMs of varying quality because they come from various sources, each with differing opinions and standards. If you want to see how bad it is, just run the program 'rpmlint' on a typical RPM package and see all the warnings it gives you.
  • Sorry, I don't really use perl. My bad . . .
  • I hate to be involved in the Debian vs RPM based distro holy war, but Debian's packaging system is one of the things I think makes it so much better.
  • hmmm... interesting theory here. I don't believe that rpm's were made for this intended purpose, but you do have a point. Debian upgrades are a breeze compared to rpm distros. Once Debian is installed there is nothing but upgrade from there. Just point apt-get to the right place and there you go. Redhat might not like it if people only bought a distro once and then just apt-get dist-upgraded it from there.

    Debian, being completely non-commercial does not have this concern. I don't use Debian as my desktop because it's not as cutting edge as I like, however, if you run it as a server it minimizes administration hassles and it makes maintaining your box much easier.
  • hehehe sorry i was too concerned about his last name to check the whole post.

    my fault
    -rev

  • Ports is indeed a very cool system, but there are one or two wrinkles.

    Firstly, it's not easy to update the ports tree itself; either I'm dumb or the documentation doesn't tell you how to do it. I'd like to be able to do something like this:

    $ cd /usr/ports
    $ make update

    And magically, version numbers update and new packages appear.

    Secondly, ports doesn't seem to support any kind of history on the packages; you get the latest version that your ports collection knows about and that's it.

    Thirdly, there seems to be no way of verifying the installation of a particular package; like rpm --verify.

    I'm sure that these things will get straightened out in time (or not if they're dumb ideas).
    --
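On the third wrinkle above: until ports grows a real `rpm --verify` equivalent, a checksum manifest gets you most of the way. A minimal sketch, assuming `md5sum` is available; the function names and file locations are invented for illustration, not part of any real tool:

```shell
#!/bin/sh
# snapshot: record a checksum for every file under an install tree.
snapshot() {
    find "$1" -type f -exec md5sum {} + > "$2"
}

# verify: compare the tree against a previously taken snapshot,
# roughly what "rpm --verify" reports for changed files.
verify() {
    if md5sum -c --quiet "$1" >/dev/null 2>&1; then
        echo "package verifies OK"
    else
        echo "package FAILED verification"
    fi
}
```

Run `snapshot` right after a port installs and stash the manifest somewhere safe; `verify` later tells you whether anything under the tree was modified.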
  • That's an excellent idea! ok, but now it's time to rewrite anything that depends on certain things being in certain places. number one: the linux kernel expects init to be in one of several places which include /bin/init, /sbin/init and the like - but if it's not there it will panic (but only if it can't find a shell to dump you into first, which is likely if /bin is not /bin). However, if you just want to change directory names, use symlinks - you don't need a config file for that.
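The symlink trick above costs nothing to try. A sketch in a scratch directory (the paths are illustrative; this is not a suggestion to relink a live /):

```shell
# Stand up a fake root and give usr/bin a second name, "programs".
root=$(mktemp -d)
mkdir -p "$root/usr/bin"
ln -s usr/bin "$root/programs"   # relative link, survives moving the tree
readlink "$root/programs"        # prints: usr/bin
```

Anything looking up `$root/programs/foo` transparently lands in `$root/usr/bin/foo`, so the kernel, init, and every hard-coded path keep working.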
  • It really installs to /usr/local/bin? Then the distributor of Quake2 is violating the FHS. The packaging system is not supposed to touch the /usr/local directory!
  • The problem with that is, after a year or two of installing applications from tarballs you have shit strewn all over your box, incompatible library versions, and generally no version control system. It's a mess. At home on a personal system that you don't care about it is fine.. you can always wipe the system and reinstall every year or so, but on a production system you need more control over your packaging.
  • It would be nice to see both apt and RPM adopt a rich XML-based standard for expressing prompts, defaults and so forth for interactive installers, along with a way to express what prompts can be silenced and with what effect, so that text, widget-independent GUI, and web-based (among others) interfaces to interactive installation can be built without breaking anything.

    Oh, you mean, like debconf [debian.org]? Well, IIRC it's not XML, and there's still a fair number of debs that don't use it, but the functionality is mostly there.

    It provides an interface for front ends to ask you config stuff for packages, and stores your answers in a database-- it won't ask you the same questions again, unless the packager makes it do so for some important reason. It's configurable in the level of questions it will ask-- ask only critical stuff, and go for defaults on the rest, ask everything, and a few points in between...
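For what it's worth, debconf's question threshold is itself a debconf question, so the "ask only critical stuff" behaviour can be picked non-interactively. A sketch from memory, assuming a Debian system with debconf installed and root access:

```
# preseed the priority question, then reapply debconf's own config
echo "debconf debconf/priority select critical" | debconf-set-selections
dpkg-reconfigure -f noninteractive debconf
```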

  • by Metrol (147060) on Sunday September 17, 2000 @04:43PM (#773109) Homepage
    While you do bring up some good points here, the basic concept of what you're talking about just isn't desirable. Let's work through your example, which is a fair one to be sure.

    On the desktop, a user who, on Christmas morning, gets messages that Barbie Magic Funhouse can't be installed because it conflicts with sendmail and will break dependencies for Evolution is complete, utter and total, unadulterated failure.

    It seems that you are referring to a Windows based install here, or at least that's the premise I'm working from. It's quite likely that this Barbie program is going to require DirectX of some version level, as well as other possible shared libraries from Windows. What does the software OEM do in this case? They include on the CD any of the possible shared library versions that may not be up to date, then the installer looks to see if it needs to upgrade the system or not.

    Certainly the weakest point of RPMs is that they are simply awful at dealing with library dependencies. That doesn't mean the problem can't be resolved; it just means there is a problem. The BSD folks recognized this problem a while back, and addressed it quite nicely with their package and ports system of installation.

    Coming back to this example, in the FreeBSD world if a library equivalent to DirectX were needed by this Barbie program, the package would go and hunt down that dependency on the Internet. It's pretty powerful stuff, and goes a long way toward working around the problems that you've brought up.

    To date, I only have experience with RPM's and FreeBSD's system of handling installations. I understand that Debian's package management has similar dependency finding capabilities. It's probably fair to say that none of the present solutions now being used are optimal. I would include even Windows installs as problematic in this regard, and anyone who has run into "Can't find VB400RUN.dll" before can certainly appreciate this.

    I am of the belief that a truly optimal system can be developed, just based on what I've seen to date. Where I have hope here, I also have a great deal of doubt that the various groups backing their preferred method can step out of their foxholes long enough to work towards that.

    Bottom line, there's more going on out here than just RPMs not finding dependencies. Have yourself a look about; there are some very cool things that are already in place, and in work.
  • That's what symlinks and shell aliases are for. And bash's default config file aliases ls to dir for all you DOS people, so you don't have to think too hard ;)
  • > security.debian.org does make it possible to install only security fixes on a stable system. That's an important but very limited case. It doesn't help if I replace security with any other criterion.

    Can you give example of another criterion?

    > Moreover, what if I'm running unstable? I often do this, but there are still times when I don't want to risk installing any upgrades but critical ones.

    Why are you running unstable, if you want to only risk installing critical updates?

    > hold is a very blunt instrument. For example, if I install version 1 and put the package on hold, I will get alerted by dselect when version 2 comes out (i.e., it will appear in the "Updated" section), but if I choose not to upgrade, I won't get any new indication when version 3 comes out. I have to remember that version 2 was the last version I considered.

    I could be wrong, but I think `apt-get dist-upgrade` will tell you when a potential upgrade is being put on hold.

    > Debian stable does not contain only well-integrated, well-tested packages. If you think so, you're either horribly biased or smoking something. Think about GNOME in slink. Or the many orphaned packages in any stable release. Or all the random little programs used by almost nobody and packaged by novice Debian developers.

    *shrug*. I don't use stable, so I wouldn't know. Unstable is plenty stable enough for me. Perhaps someone else can comment ... ?
  • at some site called slashdot...
    http://slashdot.org/article.pl?sid=00/09/13/0634210&mode=flat

    funny how that article got one (abusive) comment and this one got hundreds; they're essentially about the same thing.

    Anyway, for what it's worth - I think all the people thinking about this shit should talk to each other more. There's not just rpm, deb, and that BSD development, but things happening on the fringes like the rpm-for-cygwin development (http://cygwin-rpm.sourceforge.net/ yay, no more searching through the list of ported software) and Loki Games' setup tool (http://www.lokigames.com/development/setup.php3)
  • by mosch (204) on Sunday September 17, 2000 @03:20PM (#773113) Homepage

    ask why a new version of a package was released?
    see a list of changes between old and new versions?

    Well, RPM does include a Changelog which should include why the package was released, and what changes were made. check the --changelog option.

    tell the system to apply only security or high-priority fixes?

    You can do this mostly by installing a distro, and then tracking a particular version of it. Redhat-6.2 has lots of updates, but all of them fall into your 'security/high-priority' category.

    tell the system to automatically process all updates except those involving specified packages, which I want to approve on a case-by-case basis?

    It's trivial to set up something like this where you mirror the appropriate dir on updates.redhat.com, then have a script which does an rpm -F foo.rpm on every rpm whose name isn't listed in 'no-auto-upgrade.txt'. However, given your original statement, it's not possible. You're saying that you want it to automatically upgrade everything, except it should psychically know what you want to pick and choose from. Ummm... no matter how you cut it, you'll at some point have to tell the system 'upgrade or no'.

    tell the system never to upgrade packages that require upgrades of packages used by other software (eg, libraries)?

    This is the default behavior of RPM. You have to use --nodeps to override it.

    ask for packages that will help me convert GIF files to PNG?

    You want natural language capability search built into your package manager? You've watched too much star trek. If however you did a quick search for RPMs that contained both 'GIF' and 'PNG' in their name on a site like rpmfind.net [rpmfind.net] you'd find gif2png [rpmfind.net] is readily available.

    ask for only packages targeted at beginners?

    I have no idea what use this is. Beginner is a very broad term. Is Enlightenment aimed at beginners? How about Windowmaker? The answer for both is a resounding maybe!, depending on the configuration. How about gcc, is that for beginners? After all, most computer barely-literates don't know how to use a compiler. And bind, that's definitely an advanced package, right? Unless of course you install a caching-nameserver rpm that helps the beginner have their own caching nameserver; then it's beginner. Or an obvious beginner package like grip, which isn't beginner at all, I mean, you have to know about mp3 encoders and cd rippers.

    ask for only well-integrated, well-tested packages?

    Use RedHat, they'll only give you these. If you stick to basics, unless you use Mandrake, you usually won't get anything that's not well-tested and integrated.

    get reviews of a package?

    Ah yes, all programs expand until they read mail. Or in your case, you're asking for the package manager to read newsgroups and mailing lists, so it'd be a newsreader too. Maybe we should just integrate this package manager of yours into emacs.

    find out how to get started using a package?

    The RPM format allows for certain files to be flagged as documentation, and generally installs them under /usr/doc/$rpm_name, with man pages in /usr/man. You can get a list of what a package installed by doing rpm -ql package_name.

    begin browsing the documentation for a package before approving a full installation?

    again, you're asking the package manager to do things that just don't make sense. Why not read up on the software, then install it? Or just install it, and if it's no use to you, do an 'rpm -e'.

    have some help in configuration updates?

    These are called man pages, and documentation files. You read them and they help you. Or hell, if reading real documentation is too much work for you, then see if there's a HOWTO that you can peruse somewhere on the net.

    Personally I use FreeBSD, which has its own unique set of strengths and weaknesses, and if you don't think anybody out there is thinking about this stuff, you should read this document [freebsd.org], which is a summary of the state of these things in FreeBSD, and some ideas on how to progress.


    ----------------------------
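The mirror-and-filter idea described above (rpm -F on everything not listed in a hold file) is a few lines of shell. This sketch only prints the commands it would run, so you can review the plan before anything is touched; the file name no-auto-upgrade.txt comes from the comment, and the package-name parsing is deliberately naive (it breaks on dashed package names):

```shell
# Print an "rpm -F" line for each mirrored package whose (roughly
# parsed) name does not appear as a line in the hold list.
pick_upgrades() {
    mirror=$1 holdlist=$2
    for f in "$mirror"/*.rpm; do
        pkg=$(basename "$f" | cut -d- -f1)   # naive name extraction
        if ! grep -qFx "$pkg" "$holdlist"; then
            echo "rpm -F $f"
        fi
    done
}
```

Pipe the output to `sh` once you trust it, or leave it as a dry run in a nightly cron job that mails you the plan.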
  • by Dionysus (12737) on Sunday September 17, 2000 @03:20PM (#773114) Homepage
    I LOVE rpm! What could be easier than rpm -Uvh file.rpm?

    apt-get install package.

    Try downloading gnucash and installing it. It will fail with a bunch of dependencies. Of course, it's close to impossible to find where those programs/libraries are, and if you find them, you can't find the rpms for them.

    In Debian, 'apt-get install' will retrieve the dependencies too.
  • Say something useful.

    If it's compatibility you want, have a look at "alien".
    --------
    "I already have all the latest software."
  • apt-get/.deb/dselect are SOOOOO amazingly cool.

    I like to put it this way:

    RPM is what windows install/uninstall was meant to be.

    apt-get is what RPM was meant to be.
  • That's unfortunate, because nobody cares about NoMad Linux.
  • Packaging (and shared code) inevitably causes more problems than it solves. Maybe a few geeks like us appreciate the better use of resources that shared code gives, but most users (and probably a few geeks as well) would be better served by making each program its own separate entity: one program, one set of code.

    Some will no doubt decry my advocacy for static linking (I'll make an exception for things like glibc), but take into account that Linux is starting to get into the regular consumer market. The regular joe, home consumer market has a totally different set of rules and goals than the workplace IT market, and currently I see the Linux movement making a very large mistake by approaching the desktop market with the same exact strategies and objectives as the server market. Success and failure in the two markets look very different. On the server, a crash is failure. Some script kiddie rooting the box is failure. On the desktop, a user who, on Christmas morning, gets messages that Barbie Magic Funhouse can't be installed because it conflicts with sendmail and will break dependencies for Evolution is complete, utter and total, unadulterated failure. And up to this point, the Linux community has done nothing but bury its head in the sand and try to rely on the internet to solve problems that static linking could easily solve. The funny thing about the word "dependency": it usually comes after the words "co" and "drug".

    Seriously, ask 1000 people outside a CompUSA what they would rather have: an OS that uses their memory and disk space more efficiently, or an OS that lets them install as much software as they want without breaking anything and which will never preclude the installation of any other software. 999 of them will go with the latter. Most people want computers that simply just work. This is The Reality That Is The Desktop.
  • people EXPECT perl to be in /usr/sbin/perl

    sbin??? I seriously hope your users don't expect a statically linked perl. As for the "right" place of perl, there are many. /usr/bin/perl and /usr/local/bin/perl are common, depending on whether your vendor shipped perl with the OS or not. However, another common place is /opt/perl; common enough for perl's configure to use a different file layout for installation prefix paths with and without perl in them.

    There is a reason for (almost) everything in *nix based systems, including the organization of directory structures. this was all "planned out" - well evolved actually

    It may well be evolved, but it has evolved into a huge mess. There's /bin and /usr/bin, and /sbin and /usr/sbin. Where do we find a shell? On some systems, in /bin/sh; others quote some standard and put it in /usr/bin/sh; and yet other systems have /bin symlinked to /usr/bin. And binaries go in bin directories, while libraries go in lib? Sure, but what's sendmail doing in /usr/lib then? /etc is for configuration files, you say, but what are all those executable programs doing in /etc and below? (Some systems have more of them than others.)

    Every UNIX vendor and every sysadmin has its own idea of a proper layout. And the result is that you have some vague idea where a certain file might be located, but there's a myriad of exceptions.

    You are in a maze of twisty little Unices, all different.

    -- Abigail

  • What happens when you have 20+ servers,

    You use NFS, AFS or some other shared file system. Or in case the servers are connected by sneaker net, tapes.

    -- Abigail

  • That's not quite what I meant by "function". The dependency is still on something called "smtp-daemon". If something with that name is there but it doesn't function in the way that the package expects, it's completely undefined what happens. Maybe the user will get an error message at some point, or maybe his mail will just disappear. Depending on a package called "smtp-daemon" or on a file called "/usr/lib/sendmail" both is error prone.
  • On a slightly different topic, how is it that everyone's getting along so well on unstable? I tried 'apt-get dist-upgrade' on unstable (after installing a fresh Debian system) and the next time I rebooted, my system was so screwed that I decided to wipe the drive and start over.

    I was somewhat wondering how safe it was that it was installing IPv6 stuff in netbase as I saw it scroll by, and sure enough, after the reboot, the network didn't work. And not having access to the network somewhat left me stranded.

    Was this an extreme circumstance, or is this the kind of thing that eventually happens when you run unstable?
    --
    No more e-mail address game - see my user info. Time for revenge.
  • It's not perfect but it's some of the way there. It also has autoupdate functionality in the latest versions.

    ------------------------------
    [ from the rpmfind website [rpmfind.net] ]

    $ rpmfind -q --upgrade balsa
    [search for approx 30 seconds ... 28.8 Kbps PPP connection]
    Installing balsa will requires 9574 KBytes

    ### To Transfer:
    ftp://rpmfind.net/linux/freshmeat/libpng/libpng-1.0.1-1.i386.rpm
    ftp://rpmfind.net/linux/redhat/redhat-5.0/i386/RedHat/RPMS/ImageMagick-3.9.1-1.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/support/RPMS/giflib-3.0-2.i386.rpm
    ftp://rpmfind.net/linux/contrib/hurricane/i386/giflib-3.0-4.i386.rpm
    ftp://rpmfind.net/linux/redhat/redhat-5.0/i386/RedHat/RPMS/libgr-progs-2.0.13-4.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/devel/1998052417/RPMS/imlib-1.4-1998052414.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/devel/1998052417/RPMS/glib-1.1.0-1998052414.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/devel/1998052417/RPMS/gtk+-1.1.0-1998052414.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/devel/1998052417/RPMS/gnome-libs-0.13-1998052414.i386.rpm
    ftp://rpmfind.net/linux/redhat-labs/gnome/devel/1998052417/RPMS/balsa-0.2.0-1998052416.i386.rpm
    Do you want to download these files to /tmp [Y/n/a] ? : n
    $
  • A rich XML-based syntax? Just a pet peeve of mine, but why does everything have to be XML based? It's like Microsoft saying that C++ is the first language to support XML-based comments embedded in the code. Yep, it is. But is =head1 really any better or worse than <head1>?

    As for your requests for functionality, perhaps you should read Installation and package tools document, version 1.0 [freebsd.org] by jkh over at the FreeBSD side of the world. While I know I'll be burned at the stake for saying good things about FreeBSD on slashdot, that document has some excellent thoughts which the Linux world could also benefit from.
    ----------------------------
  • From the Freshmeat editorial:

    RPM packages can't be configured interactively. They won't ask you if you want to keep the current version of a configuration file, install a new version, or compare the two versions. They won't stop services before updating and restart them afterwards. The MTA won't ask you a few questions and configure itself. They won't even issue a warning to the administrator. We need to walk one more step to have it work properly.

    ...it would be a valuable addition to RPM to provide some mechanism to configure all unpacked but unconfigured packages. Pre- and post-install scripts, as well as pre- and post-removal scripts, should be executed appropriately to allow smooth upgrading of running services. Package maintainers would have to add such scripts to their packages and make sure they'll not break the system the package is being installed on, keeping in mind the diversity of RPM-based distributions out there. Auditing, predependencies, package flags, and auto-deconfiguration functionality must be implemented.

    This doesn't sound like someone acquainted with RPM's script-calling features to me.
  • chattr +i makes the file immutable, +u makes it so when the file is deleted, the contents are saved.

    or if you're in BSD land, it's chflags schg to make a file system-immutable, sunlnk to make it system-undeletable, or uchg/uunlnk for the user-level immutable/undeletable versions of the above.
    ----------------------------
  • Yeah, on a board full of 15 year old Linux zealots mentioning Office 2000 is silly but hey, fuck you. Office 2000 has a very very very good installation/management tool, one that I have not seen equaled before (not that one doesn't exist, I just haven't seen it personally). You get to choose which components to install, it checks for all dependencies and will fix things that end up broken, you can have part of the installation on CD, part on disk, and part on the network and it will all work pretty seamlessly, and you can keep everything updated and good to go from the internet. I really wish the RPM system could do all of that.
    Of course Office was designed to work with the installer, which is a big help there. I think that goes to show the people putting out a program are responsible for the install and setup of said program. Know the limitations of your distribution medium; don't write shit that a majority of people can't use because they don't have some exclusive kernel hack or obscure/outdated/stupid library (that is, unless your intent is for people not to be able to use it without these things). I remember back in the day of Windows 95 when people wanted to do graphics or animations in their apps they would just use Quicktime because it was well established and meant less work for them. The problem was that everyone used a different version of Quicktime, and installing a program fucked everything up. The situation shone some light on the major drawback of the installation wizard for Windows: the lack of context logic that would let you keep your updated version of a dependency without breaking things, and the fact that programmers were being lax with their dependency management. The article points out the same problem with RPM and how things are going now. I hope everyone learns their lesson, and package systems manage whole systems well (not just a single program) and programmers make sure their shit works unexclusively.
  • Your list of functions is useful but it includes a great mixture of different types of things, and it's worth breaking them out:

    - some that depend on Web-based, collaborative package reviews (which don't really exist yet and IMO are a big need for open source). This needs addressing, ideally using websites that have proper XML tagging so that programs can extract the reviews and search/analyse them more easily.

    - some that depend on the package (e.g. help files or intro docs, and the ability to browse package docs before installing).

    - some which are true package management issues, e.g. don't install packages that would require upgrades to libraries already in use. This is an example of policy - it would be helpful if a standard approach to such policies could be defined, then the various packaging tools could make sure they support this, and the GUIs could make it easy to specify this.

    I think a lot of these issues are being addressed, but in a piecemeal fashion, e.g. Freshmeat.net is great on listing packages but not on reviews, various packaging GUIs may make it possible to more easily browse docs and specify more complex policies.

    It might be useful to have a packaging framework initiative that tries to encourage various efforts in these areas and acts as a central point of information and even standards (e.g. standardised XML tags for reviews).

    My main issue with packaging tools is that even with GUIs they require a lot of user expertise - first of all, where to find the RPM, then checking it's the latest version and compatible with your system, then which mirror to select for a fast download (a separate problem but one that should be automated, see the SPAND project).

    Then there's the issue of managing or archiving the downloaded packages once installed. It would be useful if there was a log file of all installed packages + how they were installed, held in the archive directory, so that you could just back up this directory to get a fairly quick and dirty restore of this system (or to ease mirroring installs on other systems).
  • The responses seem to be these:

1. "Windows uses shared libraries." But I don't think installation on Windows is too great either. They invented the term "DLL hell".

2. "But glibc is huge!" I don't think he was talking about glibc or xlib or anything else you always get on a stock system. He's talking about libart and the gnome libs and Qt: the files that are causing the problem! I don't see any reason why these cannot be statically linked.

If we are worried about memory, add a hash-encoding scheme for read-only pages so identical pages share the same memory. This would probably result in more savings than shared libraries. If we are worried about disk space, create a file system with the same type of hashing.

Also, what is wrong with source code? How about designing a "packaging" system that COMPILES the program? I believe users are willing to wait for the compilation, if only it were automatic.

    The files for a package can all remain in a directory despite the problems with the Unix file system if you use symbolic links from the proper place to the package directory.
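The symlink approach can be sketched like this (the paths under /tmp are invented for the demonstration; a real setup would use something like /usr/local/packages):

```shell
# Sketch: install each package into its own directory, then symlink its
# executables into one well-known bin directory that is already in $PATH.
PKGROOT=/tmp/pkgdemo/packages
BINDIR=/tmp/pkgdemo/bin
rm -rf /tmp/pkgdemo
mkdir -p "$PKGROOT/quake2-3.20/bin" "$BINDIR"
# Stand-in for the package's real executable
printf '#!/bin/sh\necho running quake2\n' > "$PKGROOT/quake2-3.20/bin/quake2"
chmod +x "$PKGROOT/quake2-3.20/bin/quake2"
# One link per program: the files stay grouped under the package's own
# directory, while the PATH stays short.
ln -sf "$PKGROOT/quake2-3.20/bin/quake2" "$BINDIR/quake2"
"$BINDIR/quake2"
```

Uninstalling then becomes removing one directory plus its links, which is most of what the poster is asking for.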

  • Looks to me like everybody missed one: the POSIX package format. It has been in existence for quite a few years now and is used by various other OSes. There was even a Linux version (done by Unifix, the same people who did the POSIX-certified distro).

    It's been at least 2 years since I used the POSIX package management tools, but as I recall, it did all the dependency stuff, pre/post install scripts, in-place upgrades etc. It even knew about architectures. Packages could be local files, on a tape device, read from a server and so on.

    No doubt, it doesn't do everything you ever wanted, but I'm not aware of any reasons why it would be a bad alternative to RPM/DPKG.

    -- Steve

  • I agree that apt is awesome. But dselect? dselect's lameness is the reason for apt's existence.

    dselect is based on apt. apt is the base of the package system. What you're thinking of is probably console-apt, known to most through apt-get, the central control binary.

    apt is a frontend to dpkg.

    Sorta. See, there's only one program in the whole bunch that actually knows the dpkg file format, and that's dpkg-deb. Apt is really a system for updating files. Any file. Apt can be easily adapted to any group of files that lend themselves to being packaged. You could use apt to get the latest Slashdot stories, if you wanted. apt-get update to grab the headlines, apt-get install some_headline to install the comments... &c.

    The apt system is pretty complex. It was hard for me to get my mind around. Hell, I've been using Debian for around a year now, and I just recently 'got it'. Most people never have reason to learn, so they don't.
    ---

  • A few comments on remarks you made, in roughly the order that I encountered them:

    1) Nobody here says they defend a person's freedom "to do what they wish without being harassed", regardless of what it is that they wish. People defend a person's freedom to do the things that they ought to be able to do, and that's usually the result of a trade-off between the rights of the do-er and the rights of the do-ee.

    2) Your sexual *orientation* is not illegal, at least not in the U.S. If you're arrested, it's because of illegal acts you commit.

    3) Don't even THINK about comparing yourself to a holocaust victim.

    4) What technology would protect you from those who would harm you; who wishes to deny you said technology; and what do you mean by "harm" in this context?

    5) Yes, the word "pedophile" was hijacked just the way the word "hacker" was. "Pedophile" means, literally, "lover of children", and without its current twisted connotations, would more aptly refer to those who truly love children, such as their parents and grandparents.

    6) If the majority of pedophiles (current common connotation and usage, i.e. someone who desires sexual contact with children) have never done anything illegal with a child, it is because they have done the right thing and restrained themselves.

    7) I don't know what you mean by "valid" in the context of "[my sexual orientation is] just as valid as homosexuality...". An "orientation" is neither moral nor immoral. It simply is. Behavior is a different story.

    8) Nobody misuses the word "pedophile" to mean rapist or murderer. However, some pedophiles also happen to be rapists (which they can become simply by acting on their orientation) and murderers. What I find hypocritical is the fact that YOU are the one misusing the word "pedophile" to refer to someone such as yourself who, by your own admission, desires sexual contact with a child, when in fact the word should refer to someone who *truly* loves children and wants to protect them from people such as yourself who would engage in sexual behavior with them if they had their way. Your comments here are akin to those of a *cracker* who complains about the abuse of the term "hacker".

    9) If you truly had a deep understanding of children, you would understand, as the rest of us do, that children lack the maturity, wisdom, and judgement required to give INFORMED CONSENT to engaging in various activities, such as sexual activity, and that an adult, any adult, has too strong an authoritative influence over a child, any child, for a sexual relationship to be legitimately construed as "consensual" in any meaningful sense of the word.

    In conclusion, if you feel the urge to steal, but you refrain, then you can function in society. If you feel the urge to kill, but you refrain, then you can function in society. And if you feel the urge to have sexual relations with children, but you refrain, then you can function in society. But if someone can't resist the urge to have sex with children, then perhaps that person should take Dennis Miller's advice and commit suicide, or, as he put it, "just lean into the strike zone, and take one for the team".
  • The thing is that RPM packages **cannot** have interactive install/uninstall scripts. Well, sure, you can create one of these things, however all you could do with them is use them yourself.

    All RPM packages shipped by redhat or any distributor (AFAIK) are **designed** to be noninteractive, so that the vendor's installer can load them up as part of a nice graphical installation script.

    And I think that's right. I think that if there is any complicated configuration procedure, it should be made part of the application's internal configuration screen. Package installation and uninstallation has to be as simple as possible, no more complicated than copying a bunch of files, and maybe running a simple script or two. That's it.

    I don't want to turn Linux into 'doze, where you have to screw around with a bunch of convoluted questions during installation, then, after it's installed and you realize that you've fscked up, you have to go back and reinstall the bloody thing. No thanks.

    ---

  • by josepha48 (13953) on Sunday September 17, 2000 @03:47PM (#773184) Journal
    Even Windows does not do static linking. Windows has all those DLLs that come with the packages. I am sure that Mac is probably the same. Static linking has its place, but not for everything. DLLs save space. If all the programs on your machine were static, every program would be a big giant monster and a memory hog as well. You would not be able to dynamically load and unload DLLs either. Sure, you may have a faster program, but that extra speed is not much compared to a shared object and usually is not needed.

    This is really off topic.

    Now about rpm and apt-get, which is what this was really about. RPM does allow you to save configs; that is why I usually have all these .rpmsave files after I upgrade a program. I think it would be nice if there was a switch that could be passed to rpm, like `rpm -Uvh --keepconfig .rpm`, to keep my current configuration file without creating the rpmsave files.

    rpm is not perfect. debs are not either. The last time I tried to install Debian, it did all its package checking at the time I selected the package, rather than letting me select the packages and then, when I was ready to install, telling me what dependencies I was missing. I like rpm because I think it was easier to learn than debs; just my opinion, though, and where I am I'm entitled to it!

    Now apt-get may work with rpms. This is good. So when does that project get started?

    A second feature that would be really nice is more than just an rpm problem; it is a *nix make problem. Makefiles are not all the same. Some packages use automake and autoconf to generate the Makefile, while some people just use make, and if a program is in perl it may not use configure scripts at all, just a perl script. So there is really no way to know what is being installed if you are not the packager, short of reading all the sources. If I download a tar.gz and want to make an rpm out of it, writing the spec file can be a real pain in the butt, and half the time you can never be sure that you got all the files right. There has got to be a better way!

    Maybe what needs to happen is a concerted effort to create a new system management tool: one that takes a snapshot of the system before a package is installed, and another after. Windows is doing something like this, allowing users to roll back to a given point in time. So you could take a snapshot of the system before you install a program and, after you install it, see what has changed. If rpm had this capability, it would see what files had changed after you did a make install or whatever, and those changes could be recorded in the rpm database. If something failed, the rpm database could roll the system back to the state before the files were installed. Maybe it could be a watcher program you turn on and off at will: turn on the system file watcher, do a make install, and it gives you a list of files that have changed. It would have to be smart enough to ignore /proc, of course, and /tmp (or be configurable to ignore certain directories). Hmm...

    I don't want a lot, I just want it all!
    Flame away, I have a hose!
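The before/after snapshot idea described above can be sketched with find and comm (the prefix and file names here are invented for the demonstration; a real watcher would scan the whole tree and skip /proc, /tmp and friends):

```shell
# Take a snapshot of the file list before the install...
PREFIX=/tmp/snapdemo
rm -rf "$PREFIX"; mkdir -p "$PREFIX"
find "$PREFIX" -type f | sort > /tmp/before.list
# ...simulate "make install" dropping files into the prefix...
touch "$PREFIX/newprog" "$PREFIX/newprog.conf"
find "$PREFIX" -type f | sort > /tmp/after.list
# Files only in the second snapshot are what the install added; this is
# the list that could be recorded in the rpm database for later rollback.
comm -13 /tmp/before.list /tmp/after.list
```

The same comm output, saved per-package, is also what a rollback would delete.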

  • Right.
    And for the most part, getting those dependencies resolved is *easy and fast*.

    With rpm, it's fairly simple.
    With apt + deb, it's dead easy. (not judging either.)

    With windows? Ha.

    Linux is hard to upgrade??
    Which distro do you mean?

    Redhat isn't too hard, although it can be a bit messy.
    Slackware is a bitch.
    Debian is dead easy.

    Each has its flaws and strengths.
  • ... especially when I have to type the entire stinking path in just to execute the program.

    There are lots of simple solutions to this "problem". How about using an alias? Using csh/tcsh:
    alias playquake "cd /usr/local/bin/quake2 ; quake2"
    Add it to your .tcshrc file and you never have to worry about it again. (The bash/ksh equivalent is left as an exercise for the reader.) Or you could just write a script called playquake that does the same thing and put it in a directory that IS in your path (/usr/local/bin would be the logical choice). If you must have a directory called "quake2" hanging off your root directory, why not just create a symbolic link:
    ln -s /usr/local/bin/quake2 /quake2

    What you are whining about is a FEATURE, not a bug -- *nix systems are designed around a standard directory tree. RPM puts Quake under /usr/local because that's where things like that are supposed to go. That way, you can go to ANY *nix system in the world and find whatever it is you are looking for. Learn something about *nix systems administration before you go crying about non-existent problems.
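For what it's worth, a bash sketch of the same idea (the quake2 paths are taken from the example above, not a standard location; put this in ~/.bashrc):

```shell
# bash equivalent of the csh alias: a shell function. The subshell keeps
# the cd from changing the caller's working directory.
playquake() {
    ( cd /usr/local/bin/quake2 && ./quake2 )
}
```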


    "The axiom 'An honest man has nothing to fear from the police'

  • True, Windows programs do use their own DLLs. But those are still DLLs, dynamically linked libraries, and they also use the system DLLs. Does MFC.DLL ring a bell? It should; it is the foundation of lots of Windows programs, along with many other shared DLLs. Statically linking OLE ('object linking and embedding') would make IE a huge monster.

    As I said, I was not sure about the Mac. However, you say Mac programs are statically linked, in that when you install a program it's just one large application that functions on its own. Installing into one folder is not the same as static linking. Statically linked means that the libraries are part of the executable. I'm sure that if you install a program on a Mac and more than one file gets installed, you are using shared objects.

    What you are actually talking about is *where* programs are installed, as opposed to how programs are compiled. 'Static' and 'shared' refer to how a program is compiled, not how it is installed.

    Now if you look at a RH distribution, they are working on putting things in one place. Part of the problem stems from the Unix file system. It is inherently flawed, in that it was set up so that software you are testing goes into /usr/local/{bin,lib,sbin,...} while software installed into the system goes into /usr/{bin,lib,...}. This comes from early UNIX, not from the packaging system. Looking at RedHat, most of a program gets installed into /usr/lib/ and the executables go into /usr/bin. This is done because /usr/bin is in almost everyone's path. If they kept all the executables under /usr/lib/, you would need to add each directory to your path and then log out and back in for the change to take effect, or run `export PATH=$PATH:/usr/lib/` (assuming the bash shell) after each install. Maybe someday people will do that under Unix. Maybe the tree will be /usr/packages, with the environment set system-wide to include each new program, and each package getting its own bin and lib. The problem with this is that you eventually run out of environment space when your path gets to be 30 pages long.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • "apt-get was what RPM was what meant to be."

    That doesn't mean that that functionality can't be built into RPM. I found Debian's tools clumsy, and the documentation was hideous. There's no need to have a whole handful of tools just to do package management.

    At least Redhat put a lot more effort into documenting how RPM works. There's less of a learning curve on average, and that's why RPM will eventually succeed.
  • Or how about a binary incremental (patch) upgrade? I know that's not easy, but it would be really nice.
    ----
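The incremental-upgrade idea can be sketched with the standard diff/patch tools on a text file (the file names are made up for illustration); actual binaries would need a binary-delta tool such as xdelta or bsdiff:

```shell
# Ship only the difference between two package versions instead of the
# whole file.
printf 'alpha\nbeta\n' > pkg-1.0.conf
printf 'alpha\nbeta\ngamma\n' > pkg-1.1.conf
diff -u pkg-1.0.conf pkg-1.1.conf > upgrade.patch || true  # diff exits 1 when files differ
# The mirror ships only upgrade.patch; the client rebuilds the new version:
patch -s pkg-1.0.conf < upgrade.patch
cmp -s pkg-1.0.conf pkg-1.1.conf && echo "upgrade applied"
```

For a large package where only a few files changed, the patch is a tiny fraction of the full download.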
  • by Dionysus (12737) on Sunday September 17, 2000 @03:57PM (#773210) Homepage
    I used RedHat for about 4 years, and recently switched to Debian mostly because of dpkg.

    I'm sorry, but dpkg as a package management tool is so superior to rpm, that it was like going from Windows to Linux again. Yes, Windows is nice if you've never experienced anything else, but once you tried something better, you really don't understand how you could have been so blind.

    Redhat 5.x used rpm 2.x. Redhat 6.x uses rpm 3.x. Redhat 7 will be using rpm 4.0. Guess what: each major version of rpm is incompatible with the previous version. Couple that with the fact that I have never successfully upgraded a RedHat system (I usually either reinstall or do a manual update, as in go through my list of rpms and do an rpm -Uvh package.rpm from the CD, making sure I don't upgrade packages that are newer than those on the CD).

    In Debian, I changed my sources.list and did a 'apt-get dist-upgrade'.

    apt-get install will stop and ask about configuration issues. I have yet to find an rpm package that does that.

    rpm -bb --clean --rmsource package.spec is supposed to compile, create the rpm, and remove the source and spec file, according to the docs. It never removed the spec file in rpm v3.x.

    Let's say I wanted to remove xanim, and I have aktion installed, which depends on xanim.

    In dpkg, I would do:
    apt-get remove xanim
    Since aktion depends on xanim, dpkg will ask if I want to remove aktion too, and *does* it in the same step.

    rpm -e xanim
    This fails because aktion (if the dependency is even set up) depends on it. You then have to do an rpm -e for each dependent package.

    Quite frankly, having used both, I just like dpkg much better. I wish all Linux distributions would just bite the bullet and standardize on it. Heck, if each major version of rpm is incompatible with the previous version, it's no harder to go over to dpkg than to go to the new rpm (just write a tool that converts the rpm database to dpkg's text-based database).
  • Nice article... except I think the comments were more interesting than the article itself. I use rpm a lot and it really annoys me sometimes. I do agree that configuration should be based in the program itself: there should be a default config, and you should be able to reconfigure from within the program. Package management is exactly that, managing packages: making sure they're in the right place and all dependencies are accounted for, not messing around with config files inside the actual programs. I think that's beyond the scope and function of package management. I sure would like it if someone would do something about the dependency problems that crop up when you're installing packages... at least link us to where we can find the dependencies, heh. All in all, rpm and apt are a godsend, especially for newbies, but they can really be a pain in the neck sometimes. And the fact that when you want to update packages you have to download the entire binary again, maybe only slightly bigger... patching, anyone??
  • by kcarnold (99900) on Sunday September 17, 2000 @03:59PM (#773215)
    As far as operability on a completely mucked system, I have on occasion relied on the (nice) fact that a .deb is really just an 'ar' archive.

    Say I wanted to forcefully reinstall a package, not caring about the database and such, just get me my program back:

    # mkdir /tmp/package-extract
    # cd /tmp/package-extract
    # ar x /path/to/archive/like/var/cache/apt/archives/file.deb
    # cd /
    # tar zxvf /tmp/package-extract/data.tar.gz

    control.tar.gz in the same archive contains all the scripts and such, so you can even run those manually if you need to. And the package database is (for better or for worse) ASCII anyways, so even if you only can get 'ed' working, you can mess around with it anyway.

    I used Red Hat and rpm for about 6 months. Then I discovered Debian, and liked it a lot better, largely because of its package management. rpm can conceivably do a lot of the things Debian packages can do, but Debian has it here, now. As for the multiple versions of packages advantage claimed by RPM users, I should note that Debian packages (most often libraries) can have a version appended to the end of the name, and many do. libc5 and libc6 are quite plainly two distinct packages as far as the package management system is concerned, even though they provide much the same functionality. This applies similarly with other packages whose maintainer(s) have judged that having two or more versions of that specific package on a system is useful.

    As for the file dependencies, I can see how that is a good idea (you execute, link to, copy, move, etc. files, not packages), but as the article mentions, it expands the dependency tree quite a bit, and I have personally had no trouble with Debian's package-oriented package management (if you depend on one file in a package, you likely depend on, or could somehow benefit from, the rest of those files, and they would get installed anyway when you installed the package). Need the GNOME headers? apt-get install libgnome-dev. Not brain surgery, and beats Windows's install-remove system by a landslide. Mac uninstallers can leave things behind also, but being able to just throw away the app's folder, preferences, and in some cases control panel and extension, is a very nice idea IMHO, but it doesn't scale well. I'd like to see what OS X does. (pssst, anybody got the CD? Can you post an ISO?)

  • Hate to say it, but as we expect more and more from our packaging mechanisms, we have to lean more towards a central repository, or "registry" if you can stand it. Not that that "registry" has to be in some funky proprietary format, but there needs to be some central place where apps and configuration utilities can get some info about their installation, configuration, and removal. I mean, the passwd file itself serves as a "registry" of user account info that other applications are dependent upon. This is fine.

    I think there is some project out there to standardize configuration files. One of the suggestions was XML. While I think for very simple configurations XML is going overboard, I do think some standardization needs to be made. When I configure an application, I don't want to have to learn a new configuration format. And I would like all installation and removal to be done in a uniform seamless manner for all applications. So, in short, I think we need standardization on these issues. Since *everybody* has to install and remove stuff on different systems at some point, "choice" is not a perfect alibi here.
  • Hello...

    rpm --relocate /usr/local/games/quake2=/quake2 quake.rpm

  • All this piece really says is that developers and packagers need to do a better job of keeping dependency lists and such in good order. RPM has some great features, and Redhat has made a lot of progress on it feature-wise - version 3 is far better than previous versions, and includes the pre- and post-install script capability described in the article.

    I see this as less of a problem with RPM and more of a problem with developers. This is not to say that RPM couldn't be improved - it could run ldd or some such library dependency program on binaries to help out the situation, or some modified version of strace that checks to see which files it accesses, but in the end it falls to the developer and packager to make sure that the package works correctly and lacks dependency problems.

    BBK
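As a sketch of what such an automated check could run, ldd already reports the shared libraries a binary needs, so a packager could compare its output against the spec file's requirements (/bin/ls is just a convenient example binary):

```shell
# List the shared libraries /bin/ls is linked against; the first column
# is the library name that a package's dependency list would need to cover.
ldd /bin/ls | awk '{ print $1 }'
```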
  • The problem with all of the library dependencies isn't really the packaging system, it's the fact that packages are distributed in binary format. That has disadvantages:

    Dependencies cannot be expressed easily (like those based on the compiler or another library; there's a reason that every library version number increases whenever a new version of glibc comes out).

    Someone tries to install two packages that require the same library, but different versions, at the same time.. BOOM.

    Maybe Bernstein had it right: binary distribution sucks, use the source. This is (IMHO) how FreeBSD gets around the problem: by forcing you to compile every package as you install it. This isn't QUITE so bad for installing a couple of individual packages.

  • Ah, but you can just do

    ln -s /usr/local/bin/quake2 something_in_your_path, and it isn't an issue.

    it's *significantly* easier for system administrators if everything installs into well-known locations. and it wouldn't surprise me if a lot of programmers haven't actually *tested* under custom installations, which is admittedly lame, but what can you do?
  • The comment about RPM being adopted by the LSB bothered me a great
    deal too. Is there any foundation for this claim (I have not
    heard anything about it either)?

    If package managers are to be merged without Redhat and Debian
    being forced to agree on a single package/configuration policy, then
    it is clear that there needs to be something like a DISTRIBUTION_POLICY
    database that governs the behaviour of the package manager. Making
    this work just possibly might be one of those things that is easier
    than it sounds...

  • I don't understand why someone doesn't just write the damn thing. The rpmfind database seems to have all the necessary information, why doesn't someone just write the wrapper that will make it work like this:

    #rpm-get windowmaker update
    Installed version of windowmaker is 0.58
    Found windowmaker on rpmfind.net.
    Latest version of windowmaker on rpmfind.net is 0.62
    Update? Y
    Retrieving windowmaker-0.62-2.i386.rpm.......
    Needs libpng >= 1.03. You have libpng version 1.02 installed.
    Update libpng? Y
    Found libpng on rpmfind.net.
    Latest version of libpng on rpmfind.net is 1.02.
    Found libpng on sunsite.unc.edu.
    Latest version of libpng on sunsite.unc.edu is 1.03.
    Retrieving libpng-1.03.i386.rpm......
    Package gimp needs libpng.so.0 from libpng version 1.02. [F]orce upgrade of libpng, [i]nstall new over old, or [a]bort? I
    Installing libpng..........
    Updating windowmaker........
    Keep local copy of RPMs? N
    Deleting libpng-1.03.i386.rpm.
    Deleting windowmaker-0.62-2.i386.rpm.
    Finished.

    That's what I would like to see. I know, "code it yourself."
  • I don't see anything that can be done with Debian packages that can't be done easily with RPM.

    When it comes to limitations, both packages share them. Both of them only specify dependencies by name, rather than by function. That makes it impossible to assure that a particular installation is actually working or how to fix it if it isn't.

    Neither of them requires test cases to test an installation.

    And both systems allow arbitrary install/deinstall scripts, making it impossible to write general tools that analyze automatically what happens during package install/deinstall.

    Rather than spending time making one more like the other, we should stick with what we have and worry about the next generation packaging tool, which will probably have to be started from scratch.

  • I've used apt and rpm with autoRPM. Apt wins hands down in my book.

    Apt is absolutely perfectly suited to the task at hand, which is getting everything you need together to install a particular piece of software. The task flows completely smoothly with never a hitch.

    RPM is more of a hackish tool. There's always some trial and error. There's nothing wrong with this in general, other than I usually have other fish to fry.

    Example: the other day we had a cert advisory story related to a vulnerability in wu-ftpd. If I were on a debian system I'd have had that sucker set up to be fixed in about a minute, and very shortly thereafter all would have been made well as I went on to do other things. Instead I spent a nice Sunday afternoon making repeated trips to rpmfind.net. RPM fans who think I'm a moron are welcome to point out that I should have used the --do-it-intelligently-this-time switch. My parents installed me with the --i-can-take-it option. I'll fully admit I'm lazy. I want my package management system to take the objective I hand it and take care of all the details, such as traversing the dependency tree, fetching all the needed files, verifying their cryptographic signatures, installing them and configuring them, while I selfishly take all the credit.

    RPM fans are always saying "you need to learn more about RPM". Fair enough, but what's sauce for the goose is sauce for the gander. You need to learn more about apt.

    I'd be interested in hearing from others who've tried both.
  • I had a brief scan of those chapters. It looks to me that you are
    looking at package management mostly from the point of view of the
    individual user working within a relatively homogeneous local system.
    Is this right?
  • "Both of them only specify dependencies by name, rather than by function." Actually, there is a hack to make Debian depend on function. For instance, mutt depends on a mail server to work. Now exim, sendmail and qmail will all satisfy this dependency. Create a virtual package (I forget the official name for it) that mutt depends on, for instance an smtp-daemon package, and make exim, sendmail and qmail satisfy it. Then dpkg will resolve by function rather than by package name.
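In Debian this mechanism is called a virtual package. A sketch of the control fields involved, using the poster's hypothetical smtp-daemon name (the real-world virtual package for this role is mail-transport-agent):

```
# Fragment of exim's control file: exim declares that it fulfils the role
Package: exim
Provides: smtp-daemon

# Fragment of mutt's control file: depend on a concrete default OR any provider
Package: mutt
Depends: exim | smtp-daemon
```

The "real package | virtual package" form is the usual idiom, so that installing mutt on a bare system pulls in a sensible default while any provider still satisfies the dependency.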
  • apt-get/.deb/dselect are SOOOOO amazingly cool.

    I agree that apt is awesome. But dselect? dselect's lameness is the reason for apt's existence.

    comparing rpm and apt is comparing apples to oranges. rpm == dpkg. apt is a frontend to dpkg.

    this article is about taking the auto-retrieval features of apt (which everyone loves) and applying them to rpm -- ie, using apt as a frontend to rpm instead of dpkg. So, therefore, you could combine the (implied) superiority of the debian system's auto-retrieval with the ubiquity of redhat's rpm format. Personally I'm going to keep using apt as a frontend to dpkg, keep installing .deb packages, and not worry about it... what does worry me is the author's dismissal of .deb, suggesting that rpm may become part of LSB (something I haven't heard about) and implying that therefore .deb might as well give up.

  • ... and it comes with the dpkg-reconfigure command, so you don't have to reinstall to reconfigure the package.

    -----

  • Static and dynamic linking both have their uses. Neither one should be used exclusively. Dynamic linking makes sense for pervasive code -- glibc is a perfect example of this. If 25% of the programs on a system need a common piece of code, then it makes sense to dynamically link that code. If only a half-dozen programs need it, it should be statically linked.
    For the desktop market, I agree that compatibility is far more important than efficiency or elegance. If you are selling Barbie's Magic Funhouse for Linux, you want to make sure that it works under as many different configurations as possible. Windoze software is pretty good at this -- Joe Sixpack can stick the Barbie's Magic Funhouse CD into the drive, hit the install button, and be fairly certain that little Suzie isn't going to be disappointed on Christmas morning.

    Personally, I think a full-blown desktop is overkill for the average consumer. The needs of most people would be better served by a console-like device (Think Playstation 2 / X-Box). The OS it runs is irrelevant to the user -- all they ever see is the application they are running. You want to surf or send email? Stick the Netscape CD in and away you go. Want to write your term paper? Put in the StarOffice CD. Want to play a game? Stick the game CD into the slot. Maximum flexibility, minimum fuss.

    Geeks and techies need full-blown computers; Joe and Jane Sixpack don't -- they want something that's as simple and reliable as a video game console or a VCR, and even that is pushing some people's capabilities to the limit. (Flashing 12:00 syndrome, anyone?)


    "The axiom 'An honest man has nothing to fear from the police'

  • by Arker (91948) on Sunday September 17, 2000 @01:19PM (#773254) Homepage

    These guys are making a fundamental mistake. What they want is to make .rpm act like .deb. This is not going to happen in any meaningful way. .RPM and .deb are the result of fundamentally different design philosophies.

    Yes, it is possible to make apt work with .rpms - but this will ONLY work satisfactorily with .rpms that are written with this in mind. The reasons given for using .rpm instead of .deb here basically boil down to there being more .rpms and more people using .rpm - but since all new .rpms will have to be produced to work with this system anyway, this is a false advantage. The installed base, the already existing library of .rpms, is going to be useless to this project in any case.

    Obviously what they should do is just use .deb. The pre-existing base for .deb may be smaller than for .rpm, but it's infinitely larger than the pre-existing base of .rpms that contain the information needed to make this work, because that set doesn't exist at all.

  • Thanks for reminding me of something ;) Someone asked me where I thought stow was a bit flaky/immature. One of my big gripes was its inability to really upgrade. You can completely remove a tree and re-do everything, but it's all done by hand. I think this is just because stow doesn't know anything about versions. However, it would be good if I could do an upgrade more automatically.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • umm, he put the link in /usr/local/bin (which is presumably in the path)
  • by hatless (8275) on Sunday September 17, 2000 @01:25PM (#773271)
    RPM's got its flaws--in particular the bloat of its cross-dependency database once a system's gone through a few big waves of upgrades. But Mr. Matsuoka--and Mr. Covey--show themselves to be surprisingly ignorant of RPM's capabilities.

    What set off little bells in my head was his contention that RPM can only update files and doesn't run pre- and post-install scripts and can't prompt users for parameters and install options for the packages in question.

    This is just plain wrong. An RPM can contain and run both pre- and post- scripts both during install and uninstall operations. Plenty of RPM packages containing shared libraries, for example, silently run an ldconfig after installation. RPMs of things like mysql and postgres often do a lot more--initializing a database, setting up default db users and, yes, prompting for things. If his SMTP MTA packages don't prompt for anything, that's the packager's fault. Hopefully Mr. Matsuoka's job at Connectiva isn't as lead packager for their distro.

    It would be nice to see both apt and RPM adopt a rich XML-based standard for expressing prompts, defaults and so forth for interactive installers, along with a way to express what prompts can be silenced and with what effect, so that text, widget-independent GUI, and web-based (among others) interfaces to interactive installation can be built without breaking anything. But this wasn't Mr. Matsuoka's complaint. He seems to think RPM's can't be made to prompt users or execute scripts.

    As for Mr. Matsuoka's other contention, that RPM needs changes in order to allow smooth auto-updating of packages, this too shows inexperience. I'm sure users of, say, Helix-Update, RedHat's Priority Update tool, and for that matter the fully-automated silent autoRPM utility would be surprised to hear RPM lacks such functionality. That he doesn't know this is kind-of-sort-of excusable, since it's not covered in the books and documentation for RPM. Not so the pre/post install script stuff. That's in the docs.
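
    For anyone who hasn't looked inside one: the scriptlet hooks live right in the specfile. A minimal sketch of the relevant sections (the package name, group, and commands here are made up, not from any real package):

```spec
%pre
# runs before the package's files are installed
getent group myapp >/dev/null || groupadd -r myapp

%post
# runs after the files are installed
/sbin/ldconfig

%preun
# runs before the files are removed; $1 is 0 on a full erase
if [ "$1" = 0 ]; then
    /sbin/chkconfig --del myapp
fi

%postun
# runs after the files are removed
/sbin/ldconfig
```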

  • Well, I dunno. Just seems like a lack of polishing to me, but this is of course just my opinion. At least on my system, it takes forever to 'stow -D <package>'. In the time it took to unstow a librep-devel package I had installed, I had written a script that would find all the files in the /usr/stow/librep-devel-<version> directory, and remove them one by one. It didn't check to make sure that they were Stow's files, but it tells you something anyways.

    A few other little things, like one-step unstow/removing. The fact that it relies on the user's current directory, and on the directory Stow is installed in (for default options, anyways).

    :) Anyways, I like the concept! :) Ya can all check out my more in-depth thoughts at http://dharris.twu.net once I get it finished.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • by fluxrad (125130) on Sunday September 17, 2000 @01:25PM (#773273) Homepage
    i'd like to make a little note in the defense of RPM's. I like 'em. I don't use 'em much; maybe that's why i find RPM so appealing.

    When you're installing an application for linux, i am of the opinion that it's best to do it from source. The idea of pre-compiled binaries just doesn't sit well with me for quite a number of reasons. Foremost, i don't like the idea that i'm using a binary built on someone else's box. And i certainly don't like the fact that i don't have the source sitting in front of me to hack, if i'm bored, or to just peer into, etc. (pls no posts on src.rpm)

    RPM's, i feel, are great for the little utils. Miscellaneous files that i need for x without wanting to download and compile those binaries on my own. I just download an RPM and whabam! it's installed. Dependency problems? --nodeps. Won't install for some other reason? --force.

    Maybe there are issues to be addressed if you want RPM to become your standard installer for absolutely everything. Yes, Forrest Gump needs something more powerful. My question is simply why argue about RPM or apt when you've got source?

    It's like arguing whether you'd like to buy a Pinto or a Yugo when you can get a Porsche for free. (some assembly required ;-)


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • If you alone are in charge of the machines, and have the time to do this, that's great.

    For major packages like apache and such, yes, it's best to do them yourself, from the ground up, so you know exactly what's up.. however.

    What happens when you have 20+ servers, and a couple other staff who have to be taught to maintain them? Sure.. you can all do it.

    What I've found is that Debian is fantastic for this, for servers. I trust that the debian folks have done a rather thorough job of their package tree, and it is SO easy to maintain. I can do a hundred systems, all the same, all in sync, easy.
    Am I saying this is the way to go? Well, it was the way to go for me, in my situation.

    Why argue? Source is source. we already have that.

    Let's say you have a particular way of compiling things. Let's say you also have a hundred servers, in eight cities. Perhaps you would set up a central archive of things you have compiled so you don't have to do it for every single box. You'd probably add some scripts and tools to help you update all those boxes with new versions, and to allow you to revert back to previous compiles of things. I would HOPE you have revision control for your servers to some degree.

    Now you could do this, and do it well..
    but what you would have is your own package management system.

    Let me say.. from personal experience, I don't like rpm in practice. I like it as a tool, but it just gets too messy in practice. I DO Like to compile my own if possible. RPM is usually a last resort when I'm in a hurry.

    With Debian, I don't have this problem. If it comes out of the standard debian archive, then I can trust it. If it isn't there, then I compile and install in /usr/local.

    and THAT is why I have package management.

  • deb is not centralized at all, only for convenience.

    apt typically uses between 2 and 20 different sites to fetch files from. Dependencies are not centrally stored, but contained within each .deb

    apt keeps a cache of everything available, and takes care of resolving dependencies.

    So the typical behavior is 'apt-get update' (to update the cache), then 'apt-get upgrade' (to check for newer versions of installed packages), or 'apt-get install package' (to install a package), or 'apt-cache search string' (to search the cache for a string).

    BTW, the original article at freshmeat is about exactly this. It's not saying that .rpm and .deb suck; it's saying that rpm has a few features missing that make it fundamentally difficult to integrate it with apt. It also points out weaknesses in .deb. RPM does more, but is missing a few things that enable this tight form of system management.
  • Here I was doing it the hard way making a symlink in my ~/bin

    So, what happened to everyones PATH variables?

    -- Abigail

  • by SomeOtherGuy (179082) on Sunday September 17, 2000 @01:33PM (#773288) Journal
    I have used RPM (Redhat for 3 years & Mandrake for a while) and now use Debian (for one year now). I guess my question is this: Wouldn't it scare the RPM-based distros to go with DEB, when it would make it possible to install a distribution only one time? I mean, there has to be something to the fact that each time I walk into Best Buy or Comp-USA, I notice a new point release of Redhat, Mandrake, Caldera, or Suse... I think apt-get update; apt-get dist-upgrade would just rain on that parade.

    What do you think?
  • See, you should just put /usr/local/bin in your PATH. Then you can type 'quake2' from anywhere and have it start up, assuming that quake2's environment doesn't look for anything in '.' .

    I agree with you. The UN*X directory structures can be really silly at times. My brain has trouble telling the difference between /usr/local/bin vs /bin vs /local/bin vs /opt/bin vs /local/opt/bin vs /Sys5v4/style/directory vs /ucb/style/directory vs /redhat-linux/interpretation/of/the/posix/standard vs /debians/interpretation/of/the/posixs/standard vs /directory/left/around/for/backwards/compatibility , and they're all full of symlinks pointing to each other. ARG! It's enough to make me pee in my pants!
  • Here I was doing it the hard way making a symlink in my ~/bin

    ln -s /usr/share/local/games/quake2/quake2 bin/quake2
  • the "do-everything" button was an abstraction. you should have realized that.

    I did. I also realized that it was the wrong abstraction. :-)

    There's a big difference between "give me easy access to all the relevant information" and "figure out what to do". Now, I admit that there were elements of both in my original message. But as far as the package manager is concerned, the first is primary. Because once the ability to get all the information (and the hooks to act on it) is there, people can start figuring out how to make use of it. I think there's plenty of room for improvements that hackers would appreciate before we get into the realm of "the computer is smarter than you" idiot-ware.

    your views on what a package management system should be able to do for the user are off the wall

    We might disagree about what should be in the package manager proper, but I maintain that the package manager should facilitate some of the potential front-end features I described.

  • by The Pim (140414) on Sunday September 17, 2000 @01:43PM (#773305)
    All of the package management systems duck the hard problems of creating a user-centered system management tool. Why can't I
    • ask why a new version of a package was released?
    • see a list of changes between old and new versions?
    • tell the system to apply only security or high-priority fixes?
    • tell the system to automatically process all updates except those involving specified packages, which I want to approve on a case-by-case basis?
    • tell the system never to upgrade packages that require upgrades of packages used by other software (eg, libraries)?
    • ask for packages that will help me convert GIF files to PNG?
    • ask for only packages targeted at beginners?
    • ask for only well-integrated, well-tested packages?
    • get reviews of a package?
    • find out how to get started using a package?
    • begin browsing the documentation for a package before approving a full installation?
    • have some help in configuration updates?
    This is an abbreviated list, but I've wished for some variant of each many times.

    Note I acknowledge that these are hard problems. I don't expect them to get implemented overnight. The problem is, I don't see anyone even trying. (Possibly Helix Code, it's too soon to be sure; at any rate, they will need the help of the distribution maintainers to go all the way.) Someone could make a great contribution by seriously studying what users need to take control of their systems and designing a solution, not just looking for the next hack.

    I use Debian personally. I think dpkg+apt has more cool hacks than rpm+autorpm (or whatever), but I don't think it's significantly further in the big picture. I do think it has more possibilities, given its philosophy and development community.

    Finally, I know some moron will jump up saying that rpm wasn't meant to do all this, and its developers intentionally limit its scope. I don't care whether rpm solves the problem, or a system built around rpm solves it; I do care that the problem isn't being addressed.

  • I have had experiences with both Debian and RedHat's packaging systems; i would definitely say dpkg is better. I run the Woody (unstable) version of Debian, and packages are updated nightly. It is of great power to simply type

    apt-get update
    apt-get upgrade

    and have the machine automagically be brought up to date. Also, the dependency system is much more powerful, and it enables the sysadmin to add packages easily at a later date (RPM does not have an easy method of [re,de]selection like dselect). If i use apt-get install blurglator it will always install the packages it depends on without help from the sysadmin.

  • by dbarclay10 (70443) on Sunday September 17, 2000 @01:48PM (#773309)
    I wish I had more time to learn how to program, and then actually program something. Many of you will be familiar with GNU Stow [gnu.org], and maybe some of you have even tried it. Well, I have. It's pretty nasty. While it works, it is cumbersome. But, I strongly think they've got the right idea.

    For those of you not familiar with GNU Stow, it allows you to install a program in an arbitrary subdirectory(say, /usr/local/stow/Quake2-version), and then it makes symbolic links, recursively, from the installation directory to the system directories. ie: /usr/local/stow/Quake2-version/bin/quake2 is linked to /usr/bin/quake2.

    I really think that's an incredibly good thing. For many reasons, and let me elaborate.

    If you're at an unfamiliar system, or you're using a rescue disk, you might not know of, or have access to, a package manager. You can't add or delete packages, and you can't query packages. You don't know what files a package contains, and you don't know to which package a file belongs. I feel it's imperative that you can accomplish all of those tasks with standard *NIX utilities (ie: ls, mv, cp, ln, rm, cat, etc., etc.). To see what files are contained in the aforementioned Quake II package, I'd just need to do a 'ls -R /usr/local/stow/Quake2-version'. To see what package owns /usr/bin/quake2, I'd just need to do 'ls -l /usr/bin/quake2'.

    Of course, a good symbolic-link-based package manager should be a bit more complete. Now, RPM (I don't know about APT) uses a database to store its information. I gotta say, that's pretty stupid (no offense, RPM guys - I'm sure you have good reasons). At least, it's not very robust, from a system recovery/stability standpoint. So we want to get rid of a database. After all, we want to be able to manage packages with standard *NIX utilities, if we're really stuck in a bind. So, I guess each package would have some files, in its installation root (ie: /usr/local/stow/Quake2-version/), describing some things. Files named things like Requirements, Provisions, PackageInfo, PackageConfig.

    Requirements - Would have sections on both file dependencies (ie: /bin/ls), if the package required individual files, and package dependencies (ie: fileutils-4.0).
    Provisions - Would have sections on libraries and possibly a section on packages which the installed package replaces (ie: Postfix replaces sendmail).
    PackageInfo - Would have a description of the package, and some notes on how the particular package may differ from the standard source distribution. Also some user-friendliness things like the type of software (ie: System -> Libraries) and such.
    PackageConfig - This would contain the pre- and post-install scripts (yes, we really want to know what a package does!), and maybe anything that was done during installation based on any input the user gave.

    These are just ideas, and I don't have the skills or time to implement them, so don't take it to heart too much ;) To be honest, I don't think any new package management system will succeed unless it has compatibility layers for RPM and APT, both on the shared-library level and on the command-line level.
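
    The scheme described above is easy to try out with nothing but standard *NIX tools. A throwaway sketch in a scratch directory (the paths and package name are made up):

```shell
#!/bin/sh
# Simulate a stow-style install, query, and uninstall in a temp area.
set -e
ROOT=$(mktemp -d)

# "Install" the package into its own private subtree...
mkdir -p "$ROOT/stow/Quake2-3.20/bin"
printf 'echo running quake2\n' > "$ROOT/stow/Quake2-3.20/bin/quake2"
chmod +x "$ROOT/stow/Quake2-3.20/bin/quake2"

# ...then symlink it into the shared bin directory.
mkdir -p "$ROOT/bin"
ln -s "$ROOT/stow/Quake2-3.20/bin/quake2" "$ROOT/bin/quake2"

# Queries need only ls/readlink:
ls -R "$ROOT/stow/Quake2-3.20"   # every file the package contains
readlink "$ROOT/bin/quake2"      # which package owns this file

# "Uninstall" is just removing the link and the package subtree.
rm "$ROOT/bin/quake2"
rm -rf "$ROOT/stow/Quake2-3.20"
```

    Real Stow adds conflict checking and directory "folding" on top of this, but the query and uninstall story really is this simple.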

    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • I've had significant problems with windows demoware. Often the demoware writers will "hide" a registry key somewhere which they look for later, so that you can't do multiple demo installs (particularly for time-limited demos).

    Tri-Plus WinSpace [winspace.com], which I used at a job which forced me to use NT, did this famously. I tried a beta, and they had to send me instructions on hacking my registry to install the full version because the beta (demo) had a broken installer.

    So perhaps you haven't had to do this, but trust me, other people have. Perhaps you just haven't lived life on the cutting edge enough to do this sorta thing, but I've had to do it several times.

  • The whole discussion thing will in the end boil down to something like an apt-get/Debian vs RPM/RedHat fight (..remember GNOME/KDE some days back). Ok, RPM doesn't have any automatic update feature.. so what? Aren't there scripts which can do it?

    Just because the debian version seems/is/works better does not mean RPM is bad.. i for one find it very comfortable to work with and would hate to read the man pages again !!

  • by Diesel Dave (95048) on Sunday September 17, 2000 @01:55PM (#773318)
    The Linux Router Project is working on a radically different OS, as well as a new packaging system based on neither RPM nor DEB.

    For the last 3 years I've done nothing but heavy development work and sysadmin. (For self and on contract) I've worked with Solaris, Redhat, and especially Debian, and can honestly say when it comes to 'real world' production systems all of them suck for long-term system maintenance.

    Out of all of them Debian is still best all around. (System and packaging) But its packaging system could be a hell of a lot better. (I'm not still running 'slink' on my box because I want to...)

    I think when we are done with our new breed OS, all the linux 'vendors' are going to be brought to task to look at how we did our packaging. Probably some of them will be adopting it. (That is, if we don't overtake them all first. : >)

    This is a feature list of what we are working on:

    [name withheld]
    ==============
    Defined:
    A Unix-type system software management format, utilizing
    a logical hierarchy for root layout, with de-centralized physical data
    distribution.

    Features:
    No centralized package or physical data location

    Allows conflicting packages/applications to be installed at the same time.
    Package management tool is able to enable or disable installed packages
    dynamically, while preserving package configuration autonomy.

    Generic and distributed nature. Multiple hosts can share the same
    package installation via network mounts, while preserving package
    configuration autonomy.

    Allows hand fitting of root components (outside of /usr/local) with no
    package conflicts

    Logical root extensions for chroot, remote host, or virtual machine operation

    'Open' physical packaging format allowing easy creation, extension, and future enhancement.
  • by Nemesys (6004) on Sunday September 17, 2000 @01:57PM (#773319)
    The RPM format may have limitations. What's being compared is RPM-based distributions and Debian GNU/Linux. The Debian system is in a different category, simply because it's SO MUCH BIGGER. All the packages, even for really obscure things, are managed by the same organisation and forced to conform to a set of rules, rather than there being a core and a contrib section.

    Don't look at the technology (RPM vs deb). Look at what the people are doing. What's going on in Debian's case is that they're doing a lot less (shoehorning each bit of software into their rule system) with a whole lot more. The sorts of things one finds in /usr/local (that is, not distributed with the OS) on a Unix you may well expect in /usr on a Linux system, since Linux distros ship them. The sorts of things which may live in /usr/local on a non-Debian system probably live (if they're DFSG-free) in /usr on Debian, simply because it's broader.

    This is another reason Debian's so anal about its Free Software guidelines... if it doesn't meet the litmus test, then you can't distribute a version of it with all the paths changed to match the Debian universe, which is essential for integration and stability.

  • What do you mean by "RPM will eventually succeed"?
    In many ways, it already has. But thinking that it will eventually displace the Debian package system or, indeed all other package systems, is absurd. Debian people find debs work for us, and we have no plans to change. I'm sure the Stampede people and the Slackware people feel the same way about their packaging systems. Life doesn't have to have one winner take all.
  • by hatless (8275)
    To uninstall a package, you give the name of the package, not the name of the file that it was installed from. In other words:

    rpm -e Mesa

  • I agree apt-get is amazing. *But* I think the big issue is not the package format.

    What I've found is that the Debian package maintainers do a lot better job of creating a good package. I've had crappy ones too... messed-up/missing deps, a lot of hand modifying needed, etc.

    RedHat on the other hand doesn't seem to put as much effort, or maybe it's just a process thing.

    Other stuff like search for packages, seeing changelogs, etc... those are more tool issues than packaging issues.

    Anyways, all said (as a long time Slackware then RedHat user) I find the Debian packages more reliable. The upgrades actually work and they are easy to apply.

    Brian Macy
  • i think M$ makes an operating system that you might like to check out.

    Really, I thought they made Windows?

    I see half of linux users bitching that they want it all (these are mostly new-school linux users that i've seen) and i see another half that wants nothing more than a kernel and a keyboard.

    Trust me, I'm not new-school. But I think that anyone satisfied with the current state of Linux is badly lacking in imagination or ambition.

    I really don't want to see linux turn into an OS where you click the do-everything button and all of a sudden you're set to do whatever it is that you wanted.

    Did I say anything about buttons? Would it be better if I'd said I wanted to pipe the changelog to grep? Approve or reject upgrades with scheme code? Run diff3 --merge on the current, old, and new config files, starting $EDITOR on the result to resolve conflicts while popping up the new man page for the config file in another xterm? You can't do any of these things now because the information's not there, and the infrastructure's not there. Not because there are no buttons.

    Please don't assume that usability means baby talk.
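
    The diff3 merge mentioned above works on config files today; what's missing is the packaging infrastructure around it. A sketch with made-up file contents, where "old" is the config as the package shipped it, "current" is the locally edited copy, and "new" is the upgrade's copy:

```shell
#!/bin/sh
# Three-way merge of a config file across a package upgrade.
set -e
DIR=$(mktemp -d)

printf 'port = 25\nrelay = no\n'   > "$DIR/old"      # as originally shipped
printf 'port = 2525\nrelay = no\n' > "$DIR/current"  # local edit: changed port
printf 'port = 25\nrelay = yes\n'  > "$DIR/new"      # upstream edit: changed relay

# diff3 -m keeps the local edit AND the upstream change,
# inserting <<<<<<< conflict markers only where both sides
# touched the same lines (exit status 1 in that case).
diff3 -m "$DIR/current" "$DIR/old" "$DIR/new" > "$DIR/merged" || true
cat "$DIR/merged"
```

    Here neither side's change conflicts, so the merged file picks up both the local port and the upstream relay setting without any prompting.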

  • I was just looking at FreeBSD's source because I'm working on porting their /bin /usr/bin & /usr/sbin (for starters) just for the hell of it. What I've noted (although this will be obvious to most readers):

    1.) The whole system can be rebuilt by issuing a make.
    2.) there's a possibility of merely patching an existing system.
    3.) there's only one code base.

    It might be nice of the LSBP to keep a list of "current" projects and to help maintain a FreeBSD-like set of buildable dirs, allowing an entire system to be built.

    Alternatively, it'd be fun to put together a distro that does this--I talked to a guy around 4 years ago who was working on this at one time, the main problem being that Linux is merely a kernel and doesn't have a centralized authority as strong as other OSs. It would be nice to be able to, say, download a gzip'ed patch file, apply it to my source tree, and issue a 'make world' and have the whole system rebuilt before my eyes.

    *Sigh* I can dream, can't I.
  • by Jason W (65940) on Sunday September 17, 2000 @02:08PM (#773331)
    The author of the article must not have made an RPM before. Every specfile generator out there has a section for pre and post install scripts. Plus, there is no reason why you can't include other commands in the middle of other sections, as is appropriate on a package-by-package basis. As you mentioned, a /lot/ of programs run ldconfig.

    Configuration programs can be run also. Take the SSH RPMs for example. After installing the server RPM, it generated public and private keys. Saved me the trouble of doing it by hand, and made sure it was done right. I believe the client RPM even asked for a passphrase. There's no explanation for what the author of the article said about not having configuration tools, except for inexperience.

    I do agree that standard configuration parameters would be a nice addition, but there is absolutely no reason why package maintainers should have to conform to anything other than the standard specfile pre- and post-install sections. It allows them much more freedom and ease of use to use the scripts that come with their tar files. Why bother converting them to yet another format?

    ----

  • i think M$ makes an operating system that you might like to check out.

    The fundamental problem is this. I see half of linux users bitching that they want it all (these are mostly new-school linux users that i've seen) and i see another half that wants nothing more than a kernel and a keyboard.

    What we need to find is a common middle-ground. I really don't want to see linux turn into an OS where you click the do-everything button and all of a sudden you're set to do whatever it is that you wanted. That doesn't promote intelligence about the OS you're using. Yes, computers are tools, but if absolute automation is the way of the future, the users are going to become a slightly different kind of tool ;-)


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • While everyone seems to be busy in the .rpm vs .deb war, here's a *very* interesting quote from the article:
    Thanks to the efforts of Alfredo Kojima of WindowMaker fame, now working at RPM-based distributor Conectiva (...), most technical issues have been quickly addressed and, despite claims that it couldn't be done, it actually works.

    So, despite the configuration flaws described in the article, apt-get's dependency resolution and package retrieval are already working for RPMs, so even those who dismiss interactive configuration as a completely useless feature will still be able to upgrade their RPM-based systems using apt-get. Kudos to Mr. Kojima and the Conectiva people for that!

    Now could the other distros adopt apt-get as quickly as possible, please? Otherwise I'll be switching to Conectiva's distro in my next reinstall.

  • If you don't like Debian, use Stormix then. It uses .debs too, *and* can still use the deb packages from the Debian website.

  • by Arandir (19206) on Sunday September 17, 2000 @02:57PM (#773343) Homepage Journal
    In the small amount of time that I have been with FreeBSD, I have been amazed at the power and flexibility of the ports system. Sure, there's a few rough spots, but those are easy enough to polish out.

    For those that don't know, here's the synopsis: the ports system is a collection of makefiles and diffs to properly fetch, extract, configure, build, install and register a software package for the target system. A FreeBSD package is simply a port that has been precompiled, so you don't have to be afraid of typing "make" at the commandline. And these packages have utility programs along with them, like pkg_add, pkg_delete, pkg_info, etc. Dependencies are kept track of, including checking for individual libraries instead of monolithic packages. Very similar to Debian's method.

    So how does this fit into the Linux continuum? Well, right now there is a concerted effort to make a unified BSD ports system, instead of separate ones for each *BSD. There is no reason that Linux could not get involved so that there will be a linux-ports variant. Hell, there's no reason that it couldn't be a grand unified UNIX-PORTS! And there's no reason that deb and rpm packages can't be fit into the system as well. I keep hearing rumours that Slackware will go to a ports-style system, and I hope they do.

    If you're a potato or hamm head, and have always criticized the ports because it didn't have some minor feature, now is the time to get involved.
  • www.freebsd.org/ports
  • Let me clarify (to allay a few of the flames) that I think all of the things on my list should be doable while I'm deciding what to install, preferably without downloading the whole package. I know I could do most items by writing a script to download the new package, possibly unpack it into a temp directory, and poke around. But imagine if you had to do this just to read the description of the package. I'm proposing that all of the things I list be as readily available as the package description.
  • I am in favor of RPM's, and personally, I like to build from source. I try to use SRPMS as much as possible, but they have absolutely insane dependencies at times. Installing and building the SRPMS for openssh, for example, requires that you have gnome stuff installed. Why? Because it wants to build gnome-askpass... It might be that the guy who built the package was asleep at the wheel, but it is highly aggravating to have to edit the spec file every time I want to build something from source. They have a tendency to be built into a big monolithic pile, where you have to have them all or you're stuck. I was trying to build a router that I wanted to be highly secure, so I built everything from source that didn't come off my official Red Hat CD, and it ended up having gnome development libraries, which required X, among other things.

    Note, change of topic: I have used ports on FreeBSD; they are very nice, and I have found that RPMFind can approximate the same thing. The issue with ports is that they are all centrally controlled by one small group of people. If they decide not to add some new package you want, you can't have it. If I write a port, I would have to submit it to them for the next update of ports. I am not sure I really want my packages in someone else's hands. I like FreeBSD; I think they do lots of good things. I just don't want all my packages in their basket.

    Kirby
  • Your answers are very superficial. I use Debian, so I know about all the things you mention. They all fall short.

    • security.debian.org does make it possible to install only security fixes on a stable system. That's an important but very limited case. It doesn't help if I replace security with any other criterion. Moreover, what if I'm running unstable? I often do this, but there are still times when I don't want to risk installing any upgrades but critical ones.

    • hold is a very blunt instrument. For example, if I install version 1 and put the package on hold, I will get alerted by dselect when version 2 comes out (ie, it will appear in the "Updated" section), but if I choose not to upgrade, I won't get any new indication when version 3 comes out. I have to remember that version 2 was the last version I considered.

    • searchable package descriptions are nice, and I use apt-cache search all the time, but I often miss packages by choosing the wrong search strings. A more structured way of describing package functionality would be better.

    • Debian stable does not contain only well-integrated, well-tested packages. If you think so, you're either horribly biased or smoking something. Think about GNOME in slink. Or the many orphaned packages in any stable release. Or all the random little programs used by almost nobody and packaged by novice Debian developers.

  • Forget about dselect. I haven't used dselect in almost a year. The good package management front end to Debian is aptitude [debian.org].

    The version in Potato is not up to date, though; grab the version from unstable, and you'll be in heaven.
