Ximian Testing Red Carpet Daemon

rainmanjag writes "GNOMEdesktop.org noted a new page on Ximian's site announcing the testing release of Red Carpet Daemon which would allow administrators to do automatic software updates on workstations within the enterprise. You can also get a command line copy of Red Carpet." Hopefully this works out better than the time I cronned apt-get upgrade under Debian's unstable tree. Whoops.
  • Unless you ever want straight up Gnome again. Then, good luck removing Ximian Gnome! ;) Although Ximian Gnome was really slick!
  • What I would really like is a daemon that would work as a server for Evolution, à la Exchange. If I could get Exchange-style functionality without Windows, with Evolution on every desktop (even the ones with Windows), I would buy.

    I think it is great that they are trying, but really, how many companies have linux on every desktop? I can't see taking our sales/marketing/support staff off Windows.
  • autoupdate (Score:3, Informative)

    by guacamole ( 24270 ) on Saturday August 31, 2002 @09:35AM (#4175996)
    For the last eight months, we have been using autoupdate [univie.ac.at] at our site to keep about 50 RedHat Linux boxes up-to-date. It seems to work pretty well. Though, this red carpet stuff looks pretty interesting too.


    • Thank you for the tips about autoupdate.

      I want to know if it can be used for installation/configuration of new machines, as well as for updates?

      I have to set up hundreds of desktops in Linux, and most of the configurations are the same. The desktops are not networked, so I have to finish the job manually.

      I am looking for a way to automate the process, or at least a way to lessen the task.

      I am thinking of setting up a machine, using a utility that works like "Norton Ghost" (if there is such a utility that runs under Linux, that would be super nice!), where I can "copy" the whole content of the hard disk (partitions and all) onto a CD-R.

      Using that CD-R, I can go to other empty, yet-to-be-configured, desktops, boot up the machine on the CD-R, and then "replicate" the set-up (partitions and all) on that machine, automatically.

      That would at least take care of the set-up part.

      I know autoupdate does the updating, so I am wondering if autoupdate does the "setting up" too?
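
      Roughly what I have in mind, if I had to script it by hand, is something like this (completely untested sketch; the device names are guesses, and the compressed image would have to fit on a CD-R):

      # booted from a rescue CD so the master's disk is quiet; /mnt/scratch is
      # assumed to already be a mounted second disk (or NFS) with enough room
      mkdir -p /mnt/source
      mount -o ro /dev/hda1 /mnt/source                      # assumed root partition
      tar -czpf /mnt/scratch/image.tar.gz -C /mnt/source .   # -p keeps permissions
      mkisofs -R -o /mnt/scratch/image.iso /mnt/scratch/image.tar.gz
      cdrecord dev=0,0,0 /mnt/scratch/image.iso              # SCSI id is a guess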

      • Use Mondo Rescue [microwerks.net]. It will backup a linux system and restore to different size partitions etc if needed.
      • I am thinking of setting up a machine, using a utility that works like "Norton Ghost" (if there is such a utility that runs under Linux, that would be super nice!), where I can "copy" the whole content of the hard disk (partitions and all) onto a CD-R.

        Norton Ghost works with Linux. Where I work we have a lab full of dual-boot (Win2k/RH Linux) machines that are installed using Ghost.

        And since this is open source, it wouldn't be too difficult to roll your own CD that boots up, autoconfigures the machine, and installs the OS. It helps if all the machines have the same configuration.
        If virus writers can write apps that automatically destroy any given machine, can't we write apps that automatically create any given machine?

        • I don't know about the normal version, but Symantec's Ghost Enterprise version allows you to install a small console on each machine, allowing you to simply reboot and reimage the machine. The downside is that you have to purchase a license for each client machine you want to run the console on. IMHO the price is reasonable if you are talking about an enterprise-size installation.

          And since Ghost supports Linux, you could use it to reimage your linux boxen as well.

          More on-topic, I just installed apt4rpm [sourceforge.net] the other day, and it is hella cool. I always thought apt was the best feature of Debian, but I have been using RedHat for a while and feel familiar with it. There are server packages available, so you can run your own repository internally. I am preparing to do so for my company. We are primarily a RedHat shop, so this tool should prove invaluable, or at least will save me having to run around with 20 Linux CD-Rs =). I'd love to pay RedHat $20 or whatever a month per machine, but, um, no.
      • If the particular flavor of linux that you are running is Red Hat, their kickstart method of install is one way to achieve this. For a non-networked situation, a custom boot/install CD with the kickstart configuration on it is probably best.
    • don't we all? i wrote a bit of perl that takes a list of sites, sorts them by least ping time, and then updates a tree of redhat updates, everything from redhat 6.2 to 7.3, moves (most of the time correctly) the old RPMs to an unused directory, and then i use another perl script (run from the client side) to step through and update the appropriate RPMs, by arch and number of processors (SMP or not). not a perfect system, and we're looking at using Ximian Corporate Connect (which is pretty slick, even if it's not fully functional), but it's a matter of money. well, gotta go, the cat just tipped over the trashcan i was vomiting into, and is now attempting to lick it up!
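
      for the curious, the server-side half boils down to something like this untested shell sketch (minus the perl; the mirror names are made up and the rsync module path is a guess):

      #!/bin/sh
      # pick the mirror with the lowest average ping time,
      # then rsync one release's updates tree from it
      best=$(
          for m in mirror1.example.com mirror2.example.com mirror3.example.com ; do
              avg=$(ping -c 3 -q "$m" | awk -F/ '/^rtt|^round-trip/ {print $5}')
              [ -n "$avg" ] && echo "$avg $m"    # skip mirrors that don't answer
          done | sort -n | head -1 | awk '{print $2}'
      )
      rsync -av "rsync://$best/redhat/updates/7.3/" /var/ftp/pub/updates/7.3/
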
  • ...under unstable and it has only let me down once or twice. Luckily it never hosed my machine beyond repair.


  • What Ximian does is indeed good.

    I have to configure hundreds of desktops in Linux, and much of the time the configurations are the same.

    It has been a very time-consuming task, and I do hope that there is a better way to do it than the manual procedure I have been using.

    I do know that if the desktops are linked (networked) I can automate the process, but unfortunately, they are NOT linked.

    So I am searching for a way to configure ONE desktop, then copy the ENTIRE hard disk content on CD-R (or similar mass storage devices) and then go to another (empty, not yet configured) desktop, boot up with the CD-R and then automate the process.

    I hope Ximian's new product will do that.

    And there's a BUT.

    The BUT is, Ximian only runs on Gnome. Much of the time the configuration I do is to get Linux running in Chinese, hence I have to use KDE.

    Is there a way to mass configure desktops with KDE?

    • A system imaging tool like Norton Ghost may be what you're looking for.
    • Ximian Red Carpet comes as a statically linked binary, so you don't even need gtk for it. All you need is X. I've been using Red Carpet for a while and it makes security and software upgrades a breeze. It totally gets rid of the dependency hell that comes with RPM. I do not use Ximian's Gnome, so I'm only subscribed to the Redhat 7.3 channel.
      With rc and rcd, you will be able to do automated updates from the command line. Just what Red Hat and Red Hat-like distributions needed as an answer to apt-get.
    • Would this work? Install, then configure on one machine. Pull the hard drive out and put it in another box with a CDR. Make a bootable CD where init is replaced with a small C/C++/perl/python/whatever app. The app runs parted then mkfs to partition/format hda. Then it simply copies the image onto the drive, mounts it, then runs lilo. If you only have a few types of video cards you can even try to get X working using lspci and a bit of grep magic.

      I haven't tried it, but I have copied my / from one HD to the next three times now and it doesn't seem to care.

      On another note... Doesn't RH have an unattended install mode? I'm a debian guy myself, but I seem to remember RH had some tools to do auto-installs.
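
      Back to the boot-CD idea: the replacement init would not need to be much more than this (untested sketch; device names, partition sizes and paths are all assumptions):

      #!/bin/sh
      # partition and format the target disk (one big root partition here)
      parted -s /dev/hda mklabel msdos
      parted -s /dev/hda mkpart primary ext2 0 4000   # size in MB, adjust to taste
      mke2fs /dev/hda1
      # unpack the image shipped on the boot CD, then make the new root bootable
      mkdir -p /mnt/target
      mount /dev/hda1 /mnt/target
      tar -xzpf /mnt/cdrom/image.tar.gz -C /mnt/target
      lilo -r /mnt/target                             # lilo chrooted into the new root
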
    • Yes, this can be done, and it's not even hard - we (computerbank, http://www.computerbank.org.au) do this with some custom scripts we wrote to automagically partition, untar, detect hardware, and configure X and audio. It works remarkably well for large rollouts... I'd imagine that there is a better solution than ours, but send me email if you are interested (jaymz@dspvideo.com, but audio instead of video :-p)
    • I have to configure hundreds of desktops in Linux, and much of the time the configurations are the same.

      You should checkout radmind [radmind.org], it's a combination tripwire/software update tool. It's being used all over the place to deploy large Mac OS X clusters. It runs on Linux, Solaris, and *BSD.

      :w

    • What you're looking for is mondo [microwerks.net]. We set up machines in a lab on campus using it (29 machines in all). We basically create a standard machine, and then have it generate some CDs. You drop those CDs in the new machine, make sure that the partitions it creates are all correct, and have it restore the image from the old machine. Make changes as necessary to the new machine (hostname, address, etc.) and you're done. It sure beats dd, gzip and NFS, which we did before we found mondo :)
      -Aaron
  • Sounds like an invitation to find the vulnerability.
    • Sounds like a clueless poster.
      • Not so. The system only has to be compromised on the rc side to have a network-wide effect.

        They are leaving a hole in the system that says to the world "Your wish is my command". *Any* rc client will be able to access *all* rcd servers on the network.

        You may believe the system's airtight, but people who hold that belief are often proved wrong.
      • by marm ( 144733 ) on Saturday August 31, 2002 @10:36AM (#4176192)

        Sounds like a clueless poster.

        No, not at all. This is a very genuine concern. Personally, I think having a separate daemon to do this job is a very dumb idea. Existing, well tested tools like ssh and cron could do this, and the less new, untested code that runs on the network, the better for security.

        For a start, it's going to have a port open on the network in order for a master computer to contact it and tell it to update. This in itself is a major security risk - any open port is. Now also remember that, because it will be updating packages system-wide, part of the update process is going to have to run as root - I hope at least the network-facing daemon doesn't. If it does - instant remote root when the first stack-smashing or format string exploit comes along - and it will, have no doubt about that. Even if the daemon itself has limited privileges, it is going to have to talk to something setuid root in order to perform the package upgrades, so a remote root shell is only two exploits away, one for the daemon and then another for the setuid program that does the updating.

        Remember, this is new code, untested in the wild for any length of time, unexamined yet by anyone external. ssh would do the job fine instead, and, although ssh has had security problems, it has had a lot of pounding on it for a long time now. The Red Carpet daemon - hasn't.

        In short, wtf aren't Ximian using ssh instead of their own potentially hokey code?

        Second, there is a big problem with automatic updating generally. If I can get root on a machine within a network - or in fact, just plug my laptop into this network - then with a bit of spoofing trickery I can convince any other machine within that network that I am the update server, and next time they update, they will download packages from me, which I could easily trojan - and then I've got control of every single box on the network, and almost all the work was done for me. Signed packages are supposed to alleviate this problem, but past incidents with both OpenSSL and ssh suggest that certificate checking is not always up to scratch, and there may still be other ways to convince the Red Carpet daemon to install unsigned packages. If you have an insecure wireless network attached, then you're going to have even larger problems as an attacker who wants to get in this way doesn't even have to be physically connected to your network.

        This sounds like a very convenient way to automatically update software - although nothing that ssh/apt doesn't already offer - but it also sounds like a potentially gaping security hole that will bite people hard in the future.
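
        For the record, the ssh-and-cron alternative is about one crontab entry on a trusted admin box (a sketch only - the hostnames are made up, and you'd obviously want locked-down ssh keys behind it):

        0 4 * * * for h in ws01 ws02 ws03; do ssh root@$h 'apt-get -qq update && apt-get -qq -y upgrade'; done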

        • No, not at all. This is a very genuine concern. Personally, I think having a separate daemon to do this job is a very dumb idea. Existing, well tested tools like ssh and cron could do this, and the less new, untested code that runs on the network, the better for security.

          It would be better to write code that is as small as possible and written with current security practices in mind. Most of the exploits which have plagued UN*X have come from old code like sendmail. That said, Ximian doesn't have the best security record. Their installation script consisted of running code downloaded over HTTP (!!) through a root shell.

    • It's trivial to turn off listening to the remote port in rcd. So if you want to be totally safe, just shut it off, like any other service. You can still use it locally, and still have it pull down updates from the server (automatically, even!)
  • by wowbagger ( 69688 ) on Saturday August 31, 2002 @09:58AM (#4176064) Homepage Journal
    I'm not sure I'd trust Ximian to auto-update my system - while they try pretty hard, I've had just too many dependency conflicts updating RPMs from them to feel really warm and fuzzy about having it happen automatically.

    Also, one thing I like about RedHat's up2date vs. Red Carpet is that I can tell up2date to leave my damn X server alone! Neither RedHat 7.2 nor Ximian have XFree 4.2, but at least I can tell up2date "hands off any package with XFree in the title" and not worry about it downgrading me to 4.1. Every time I run Red Carpet I have to tell it "No, I DON'T want you updating my X server; yes, I know this is a 'security release', but I don't need it!"

    Unless redcarpetd has the ability to prevent upgrades on selected packages I wouldn't trust it.

    And until the packages get vetted better for conflicts I would be careful. That's what ALL RPM-based distros need - a standard base of packages and libraries that released packages are not allowed to deviate from. Any RPM that calls for "foo-1.4.2-unreleased-unstable-pl1.4-thursday.rpm" should be unceremoniously bounced from any stable release. That's one area I will give the Debian folks credit - they maintain their packages.
    • RCD, contrary to the story blurb, does not currently allow centralized updating of machines. It does provide a tool, rc, that allows updating of packages from the command line.

      In order to update packages the 'rc' command must be run with the proper options. If it is going to install/remove packages, it will detail what actions it is about to take and wait for user input to approve the actions.

      So no package will get changed unless explicit approval is given. This was true with Red Carpet GUI version, and is still true with RCD.
    • I'm not sure I'd trust Ximian to auto-update my system - while they try pretty hard, I've had just too many dependency conflicts updating RPMs from them to feel really warm and fuzzy about having it happen automatically.

      The automatic updating is totally optional, and it will never "force" an update, so dependency problems can be resolved by the user.

      Unless redcarpetd has the ability to prevent upgrades on selected packages I wouldn't trust it.

      It does, through an .rcexclude file. It's not ideal (yet), but it's a start.

      And until the packages get vetted better for conflicts I would be careful.

      If you see dependency problems in our packages, it's almost certainly a bug. File it: http://bugzilla.ximian.com [ximian.com].

    • I have one machine still on up2date and one on Red Carpet. The Red Carpet updater has been going for all of this year and in the beginning, it was dependency hell (reminded me of DLLs under Win). However, since about Easter, it has been very stable. The only issue is if I trigger so many dependencies that /var is filled up with incoming RPMs.

      If you don't want beta, just don't subscribe to the beta releases. The other stuff seems fine. This particular system is an RH7.1ish 2.4.19 kernel with Ximian Gnome.

  • I have been using the Debian cron-apt package for some time now.

    Knud
  • I really liked the concept of the Ximian desktop and their easy installer and whatnot. It really appealed to me because I was using the *Solaris* distro that Ximian generates.

    However, after a few magic rides on the Red Carpet, I decided that I wasn't all that trusting of full service. Everything worked great until I started doing the red carpet updates. Then Red Carpet would break. The icons on my desktop would break. The Evolution mailer would break.

    I stopped doing updates in order to preserve something which passes as a workstation. Mind you, my case probably is extreme (but only because I tried to use Ximian for a reliable Solaris desktop), but I hope it illustrates a point.

    Care to be responsible for a slew of desktops when you don't do your own quality control and bless updates which are placed onto systems you support?
    • RCD gives the administrator full control over their system. It does not require Ximian Desktop to be installed, nor will it auto-update the Desktop. RCD can be used to simply install vendor updates on to servers, if that is what you are interested in.

      Granted, some operating systems handle updates differently than others, but, using Red Hat Linux 7.3 as an example, this month alone there have been 58 packages released as errata. RCD will tell you which of these apply to your system, and can, optionally, install them for you. However, it will never install something unless a user directly tells it to.

      As the original post says, "Can you be responsible for a slew of desktops when you don't ... bless updates which are placed on to systems?" Of course not, and this is exactly why RCD requires explicit direction to make changes to the system software.
    • Everything worked great until I started doing the red carpet updates. Then Red Carpet would break. The icons on my desktop would break. The Evolution mailer would break.

      I have to somewhat agree.

      Occasionally, due to a bug, or more often due to me running out of HD space, the install for a core RPM like 'red-carpet' or 'rpm' would die. Then I'd be stuck without red-carpet or rpm, and would have to restore these programs by grabbing a bootstrap install from Ximian or elsewhere.

      But there is a reason for this: Ximian only has one person in charge of repackaging/testing the Solaris RPMs. This is in large part due to the fact that Solaris users make up a very small percentage of the Ximian and Gnome market (heck, less than 1% of the visitors to Gnomedesktop.com use Solaris [gnomedesktop.com])

      This may improve as Gnome2.0 matures, after Ximian reduces their support for Gnome1.4 in favor of Gnome2.x, and after Sun releases their Gnome2.0 distro.
  • Don't cron apt! (Score:1, Informative)

    by Anonymous Coward
    Hopefully this works out better than the time I cronned apt-get upgrade under Debian's unstable tree. Whoops.

    Debian has three trees: stable, testing, and unstable.

    When using the stable tree, instead of using cron, subscribe to debian-security-announce and only update when a package with a security problem needs updating.

    Update scripts also often need to ask you questions and cron doesn't allow that - and testing and unstable sometimes break on update, because they are not, well, as stable - they need to be watched.

    Offhand comments like the above make Debian seem flaky when it is far easier to maintain, and more stable, than Red Hat, because Debian is built robustly from the ground up.

    How hard is it to check mail and apt-get update; apt-get upgrade when needed?
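
    For the stable tree the whole routine is just this, once sources.list points at the security archive (a sketch, from memory):

    # /etc/apt/sources.list -- track the security archive for stable
    deb http://security.debian.org/ stable/updates main

    # then, when debian-security-announce lists a package you actually run:
    apt-get update && apt-get upgrade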

    Anyway ... :)
  • The best of both worlds: apt, with the huge set of packages available as RPMs. http://apt-rpm.tuxfamily.org/ I have been using this for a while to keep about 50 machines up to date. I also have it set up with an "extras" hierarchy so that I can run newer versions of stuff like Mozilla...
  • Hopefully this works out better than the time I cronned apt-get upgrade under Debian's unstable tree

    Yeah, no shit. When I FIRST started using Debian, I did pretty much the same thing, because I didn't have cable yet and wanted the downloads to go off while I was out (out being sporadic, I had a script that I'd fire off as I left).

    One time I came home and had no X, no e-mail, about half of the programming tools I needed for class, and no cache of packages (disks were smaller then), so I also was SOL on any quick way to reinstall it.
  • rc? Come on, folks! We already have two completely different programs by that name.

    Couldn't you have been just a little more creative in coming up with a name? Geez. Now we get:

    ---So how do i do this Red Carpet update thing again?

    ---rc channels to list the available channels.
    [pause]
    ---It says channels: No such file or directory
    ---Huh? That's weird...
    Hurrah for Xidiot.
  • what does this red-carpet thing do that something
    like the following doesn't do:

    for i in host1 host2 host3 ; do
    ssh $i "apt-get update ; apt-get install [package...]"
    done

    a useful variant (for more complicated upgrades)
    is to write a sh script to do the upgrade, scp it
    to the remote machines and run it with ssh. this
    script can install/upgrade the packages, run perl
    or whatever to customise config files, and do
    anything else that is needed to ensure that the
    upgrade goes smoothly.

    i've used variations of the above script to
    install or upgrade single packages and even full
    system upgrades on dozens of remotely-located
    debian boxes in one go (mostly internet gateways,
    firewalls, proxy servers etc).

    for rpm-based systems it would be trivial to
    modify the script so that it used scp to copy the
    required .rpm packages to the machine and then used
    ssh to run rpm for the install.
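
    i.e. something along these lines (untested,
    hostnames are placeholders):

    for i in host1 host2 host3 ; do
        scp *.rpm $i:/tmp/ && \
        ssh $i 'rpm -Uvh /tmp/*.rpm ; rm -f /tmp/*.rpm'
    done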

    all i see is another unnecessary daemon which
    gives remote root privileges which hasn't had
    anywhere near the security testing of ssh.

    IMO, anyone who isn't capable of writing trivial
    scripts like the above has no business pretending
    to be a sysadmin and shouldn't be upgrading even
    one machine, let alone batches of them.

    • > for i in host1 host2 host3 ; do
      > ssh $i "apt-get update ; apt-get install [package...]"
      > done

      Been there, done that.
      You are badly mistaken if you think a simple script like this is enough to keep a large site up to date. Imagine that you have nearly 300 hosts. Imagine that although you're trying to keep the host database up-to-date, it will never fully correspond to reality. Finally, for this command to complete, all of those hosts have to be up. What if a machine crashed? What if a user shut it down? What if a machine is down for whatever reason? And how long will you have to wait until this command completes? Pushing updates does not scale well beyond a couple of dozen boxes. No matter what tools you use for system administration, it is much better to use the pull model (where clients request updates and other configuration changes on their own from the server) instead of trying to run some command on all of them.
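
      With the pull model, each box just runs something like this from cron on its own schedule (a sketch only; whatever actually does the update - apt-get, autoupdate, rc - is interchangeable here):

      # /etc/cron.d/pull-updates -- each client pulls on its own schedule
      30 3 * * *   root   apt-get -qq update && apt-get -qq -y upgrade >> /var/log/pull-update 2>&1
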
      • > Been there, done that.

        so have i. it works.

        > You are badly mistaken if you think a simple
        > script like this is enough to keep a large site
        > up to date.

        no, i'm not. this isn't just theoretical, this
        is what i do to maintain a large network of
        (currently) dozens of machines. in the past, i
        have used similar techniques to maintain networks
        of hundreds of machines. it works.

        > Imagine that although you're trying to keep the
        > host database up-to-date, it will never fully
        > correspond to reality.

        if i was so slack that i couldn't even maintain
        a simple database like that then i'd deserve to
        be sacked.

        if nothing else, i'd be maintaining the DNS
        records that point to all those machines.

        > Finally, for this command to complete, all of
        > those hosts have to be up. What if a machine crashed?

        you use a semaphore of some sort (e.g. touch a
        file where the filename = hostname) to indicate
        whether the upgrade has completed or not. then
        you just run the script again when you've got the
        crashed machine(s) back up again. no problem.

        and since we're talking about dozens or hundreds
        of machines here, tee the output of the script
        so that you can leave it running overnight and
        review the log in the morning. stuff like this
        should be obvious.
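
        fwiw, the stamp-file trick is about this much
        shell (untested, and the host list is made up):

        #!/bin/sh
        # re-runnable: skip hosts already marked done
        mkdir -p done
        for i in `cat hosts.txt` ; do
            test -f done/$i && continue
            ssh $i 'apt-get update && apt-get -y upgrade' \
                && touch done/$i
        done 2>&1 | tee -a upgrade.log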

        frankly, you don't know what you're talking about.
