
How Do You Manage a Product Based on Linux? 72

Ryan writes "Following my advice, my company has decided to base its new appliance on Linux. So far, it's worked out great. Linux gave us a huge jumpstart on development because of its open nature and the information we've garnered from public mailing lists. We've added software, modified startup files, and built our own kernel. Now the question is: how do you manage it all? Do you put it all in CVS or Subversion? Do you use the distro's packaging system (we're using Debian)? What does your build system look like?"


  • by Anonymous MadCoe ( 613739 ) <> on Friday September 08, 2006 @09:00AM (#16065481) Homepage
    You really should stop and look at what you are doing. How you want to manage it should be part of the strategy, and should actually have been part of the decision to use Linux (not in detail, but strategically).

    So my advice: hold on, sit down, and look at what you expect to produce and what you would need to get there. From there you can work out what you need.

    You will probably run into some issues, but that's just what happens, there is no ideal situation.

    • Insightful my foot! (Score:2, Interesting)

      by Monchanger ( 637670 )

      So my advice, hold on, sit down and look at what you expect to produce and what you would need to get there ... You will probably run into some issues...

      What the hell kind of advice is that?

      Why didn't you just say "STFU and RTFM!!!!!!!!1" ('1' intended) and get back to your <sarcasm>thrilling</sarcasm> life? People come here, ask a serious question that's troubling them, and once they make it past the editorial interest filter, they get this bull. This isn't just one more stupid forum, thi

      • Re: (Score:2, Funny)

        by Anonymous Coward
        This isn't just one more stupid forum

        That's right! This is THE stupid forum!

      • Re: (Score:3, Funny)

        by Wudbaer ( 48473 )
        The problem is that the original asker is trying to understand the process when he's already halfway through, without apparently having spent much thought on the basics of what he or she is doing. "I am trying to travel from London to New York. Now, sitting here at a crossroads in Hong Kong, I would like to know whether I should have turned left in Dover or right. Do you think I should acquire a map?"

        So trouble seems likely, I'm afraid.
      • It is actually the only good advice...

        A strategic choice has been made (we are going to use Linux). This means a lot of other things (among them the development process) are impacted.

        If you make a strategic choice you need to be aware of the overall impact (think about processes, legal, financial, implied limitations, etc.). Once someone finds himself in the situation described, the best thing to do is to sit down and redo the strategic analysis.

        This is also why it's funny to see people offering practi
  • by Anonymous Coward on Friday September 08, 2006 @09:08AM (#16065512)
    You're building a whole new appliance, but your software engineers don't know how to manage a development process?

    I mean... I'm not being nasty here, but you're in trouble, and I don't even know where anybody could start to give you advice. It would be one thing if you were looking for guidance on a regular small scale software project, but if you're jumping in feet first with a whole new large scale application and no idea how to guide the process...
    • No points today, and I loathe "mod parent up" postings, but that's the perfect response.

      Nobody is going to be able to provide any reasonable advice, other than perhaps for the submitter to hire a consultant (or employee) that has proven experience in large scale software development projects.

      To submitter Ryan: This is highly non-trivial; you don't seem to have any idea how very much you're missing. If you don't know what you don't know, you need outside help. And because you've already started down th
      • And the funny thing is...
        Lots of people are providing "advice", which makes one wonder: would they have done a better job?
      • by torpor ( 458 ) <<ibisum> <at> <>> on Friday September 08, 2006 @11:41AM (#16066655) Homepage Journal
        What part of "How do You Manage a Product Based on Linux?" do you not understand?

        He's not asking for help .. he's interested in the ways /.'ers are maintaining their Linux-based products, perhaps (naively) hoping that the peanut gallery might provide an interesting result. This does not necessarily mean he wants help with his lame system; read closely, and you might realize that Ryan seems quite happy with his approach so far .. but this is still an interesting topic worth objective attention. It's not the screaming/crying/spoiled-brat cry for help that some of the similarly inclined responses have implied, anyway ..

        Me, I've been building Linux-based systems for my own use since the days of the minix-list (and before that, RISCOS distimages). My current approach is quite simple and old-fashioned, but workable nevertheless. I simply apply the following general guidelines for sysbuilding: complete source control (using SVN/whatever-the-package-maintainer-uses), avoid cross-compiling, build everything on-board, one Makefile to tie together whatever components are required (linux-kernel/base-image/sysbins/libs/my_app), 'cscope -R' at the root tree when something needs to be worked out, and set it all up so that you can just type 'make' and watch the bootable .img form .. Fortunately, the more you do this, the less you need to worry about package maintenance; and of course if the 'final deliverable' is a simple, plain sysimage containing all the software your embedded app needs, then package maintenance isn't such an issue anyway. It's kind of fun to have a "single-image deliverable" too ..
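        A minimal sketch of that "one Makefile" idea, with purely hypothetical component directories (kernel/, base/, app/) and an assumed mkimage.sh assembly script — an illustration of the approach, not torpor's actual build:

```make
# Hypothetical top-level Makefile for a single-image appliance build.
# Each component keeps its own Makefile; staging/ collects their output
# before the bootable image is assembled.
COMPONENTS := kernel base app

.PHONY: all clean $(COMPONENTS)

all: appliance.img

$(COMPONENTS):
	$(MAKE) -C $@ DESTDIR=$(CURDIR)/staging

# Assemble the bootable image from whatever landed in staging/.
appliance.img: $(COMPONENTS)
	./mkimage.sh staging/ $@

clean:
	rm -rf staging appliance.img
```

        The payoff is exactly the one described above: a fresh checkout plus a single `make` reproduces the bootable image.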

        • What part of "How do You Manage a Product Based on Linux?" do you not understand?

          None, actually; I understand it quite well, thank you.

          He's not asking for help .. he's interested in the ways /.'ers are maintaining their linux-based products ...

          Agreed. And I put it to you (and Ryan) that this is a fundamentally flawed approach.

          ... perhaps (naively) hoping that the peanut gallery might provide an interesting result.

          It might indeed be interesting, but it's almost certainly not going to solve his
        • complete source-control (using SVN/whatever-the-package-maintainer-uses),

          Agreed. Use the best SCM possible. We use Perforce and it works great.

          avoid cross-compiling, build everything on-board,

          What? Are you masochistic?? You'd have to be insane to build anything on those underpowered boards. Use crosstool or buildroot, and use distcc or icecream to distribute the build across as many servers as possible. Anything else is just begging to be killed by boredom.

          one Makefile to tie together whatever components are

          • by torpor ( 458 )
            What? Are you masochistic??

            Well .. I know this may seem 'odd', but my point of view is that if you can't put up with building your system on-board, you're not building your system, or exercising your pre-production hardware, strongly enough ..
            You'd have to be insane to build anything on the underpowered boards. Use crosstool or buildroot and use distcc or icecream to distribute the build across as many servers as possible. Anything else is just begging to be killed by boredom.
            .. and I guess I should quali
            • You don't really do embedded work thinking all hardware is perfect, do you? ;)

              God no. We've been suffering here for a year dealing with all the crap our electronics subcontractors produced (not to mention really bad device drivers...). If I hadn't used Qtopia and their qvfb tool, I'd be 9 months behind schedule. As is, I'm pretty happy with the build system we have here. However, I don't like having to drop into the unknown guts of an ARM processor without being able to debug it on the PC first. runnin

    • by arivanov ( 12034 ) on Friday September 08, 2006 @10:54AM (#16066304) Homepage
      A round of applause...

      While I understand your spleen and I even applaud it, there are a few things which you are missing.

      In many settings the initial people with the idea are not capable of software process management (and nearly always incapable of support process management). On top of that, the people they initially hire are usually of the "implement at any cost, breaking all rules" variety. That works great up to a prototype, and sometimes even slightly later. In fact this "break all rules" culture is a model which most successful startups in the industry have followed.

      After that, once the prototype is up and working and the initial euphoria has settled, the company comes to a point where it needs to grow up. The initial idea people, and those of the "implement at any cost" variety, must either move into positions where they do not prolong the company's growing pains, or leave. The company also needs to hire people (or, less likely, find them within its ranks) who are capable of long-term software development process management. These are the people who must become the team leaders and managers in order for the company to be successful.

      Unfortunately, very few companies make it through this stage. Most promote the "implement by any means necessary" or the "initial idea" people into positions where they cannot cope, as their mentality is incompatible with the actual requirements. They become increasingly frustrated with the mere fact that there is a process, break it all the time, and explain to the people who follow it that they are tails wagging the dog. On top of that, they blow long-term planning to hell and gone at every turn, and push the company into an endless spiral of firefighting crisis management. It all becomes a big mess, and sooner or later it all goes to hell.

      Anyway, it is not so uncommon for a successful initial stage startup to have no culture of software development process and especially customer support process. In fact that is to be expected, as to get through the initial hurdles you sometimes need to walk on the dead bodies of the rules and processes that have been broken.

      Frankly, the ask-slashdotter should be applauded for realising that they lack this process. Now, he definitely will not like the answer to his question. It is very simple: hire someone who does have it, and swallow the fact that you will hate him, possibly to the point where some of the veterans may have to leave. Tools like Subversion, ClearCase, CVS, MKS, and Bugzilla do not make a process. They implement a process defined by a human. What is needed is for the person who defines the process to have a long-term view, as well as a view of how the process fits with support, business, and the rest of the company. If he does not, no tool will help. There is nothing worse than short-termism in process definition and tool selection. It will come back to bite you in the arse again, and again, and again.
      • Re: (Score:2, Informative)

        by Gr8Apes ( 679165 )
        Well written.

        To the OP, here's what I've personally seen: a large company with no processes in place doing ad hoc builds on local dev's machines with random tools going straight to production. This was replaced with a build machine and ant with much reluctance, as several developers had their own "job security" babies, as they were mysteriously the only ones that could build certain components, at least until the build machine became the only source of move to production code. However, the business went fro
      • Great insight into the startup environment and the issues with process development. You've obviously worked with startups in your career!
  • If the program has only one developer on it, simply tarring it up every once in a while has been sufficient. However, once you get another developer, something like CVS is very highly recommended, so each programmer can keep in sync with the others' changes.
    • Re: (Score:3, Insightful)

      by LiENUS ( 207736 )
      Actually, even with one developer, SVN or CVS is a huge benefit. What if that developer introduces a bug? With Subversion you can go back through the history, see when the bug was introduced, and roll it back. You'd be hard pressed to do that with just tarballs.
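      With Subversion, that hunt-and-revert cycle looks roughly like the following (the revision number and filename are invented for illustration):

```shell
svn log -v src/daemon.c        # scan the history for the suspicious change
svn diff -c 1234 src/daemon.c  # inspect exactly what r1234 changed
svn merge -c -1234 .           # reverse-merge r1234 out of the working copy
svn commit -m "Back out r1234, which introduced the crash"
```

      This is a sketch, not a recipe: it assumes the offending change landed as a single revision, which is one more argument for small, well-described commits.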
    • by krumms ( 613921 ) on Friday September 08, 2006 @09:34AM (#16065660) Journal

      No no no no no. :)

      Whether you're working in a team or working by yourself: Use Subversion Anyway. Or svk. Or Darcs. Any reputable revision control system will kick the pants out of any ad-hoc solution you come up with. Revision control should be automatic and easy. The value of being able to easily merge changesets alone is reason enough for any non-trivial project. Keeping track of branches for experimental/delicate changes, tagging releases, LOG MESSAGES for all your changes - all of these things, use them, learn to love them. It's a bitch to get in the habit, but when you do it's absolutely worth it.

      It's taken me over seven years to truly learn the worth of version control. These days I'd dare not live without it. It really is that good. Honest!

      • by LiENUS ( 207736 )
        It's taken me over seven years to truly learn the worth of version control. These days I'd dare not live without it. It really is that good. Honest!
        It only took me about 7 months to learn the worth of version control: 10k lines of code with one developer, and I couldn't live without Subversion.
      • Re: (Score:3, Insightful)

        by idontgno ( 624372 )

        What you're really saying is "use version control discipline", independent of the tool. That's the real point, one I've seen made elsewhere in this topic. The submitter is asking for tools, but really I think he's asking for process advice.

        The tool won't make you do the smart things you talk about--tagging, change tracking, etc. Every tool can be circumvented, pencil-whipped, or otherwise reduced to "going through the motions".

        The real advice here: come up with your project management goals

        • Having the process without the software is much, much harder.

          I've been using Mercurial [] to manage my own projects, figuring I can always expand things later if I get another developer. I wanted something simple, lightweight, and hackable, yet still with all the features of, say, SVN or CVS, even making the command line look similar so that people coming from other systems aren't immediately lost. I think this is the best we've got as far as that goes.
      • It's taken me over seven years to truly learn the worth of version control. These days I'd dare not live without it. It really is that good. Honest!

        When I started at my current job, we had 3 coders: an overseas idiot whose changes frequently broke things, my boss, and myself.

        Originally, we had no version control. It was my duty to manually merge any changes any of us made using WinDiff.

        I learned the worth of version control in about 7 minutes ;)

        Being able to see who did what, when, and why is also nice.
  • Absolutely (Score:2, Insightful)

    by Anonymous Coward
    You should definitely put your code under some kind of revision control. We currently use CVS but are looking at switching to git or Mercurial. One thing that's nice about CVS is the import feature, which lets you bring in new copies of the open source programs you've modified and migrate your changes forward into the new copy. With regards to packaging, it's definitely worth the effort to put things into the distro's packaging system. We use Debian as well, and it's nice to be able to have a repository tha
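    The import feature mentioned above is CVS vendor branches; a hypothetical upgrade of a locally modified upstream program looks something like this (the module name and tags are made up):

```shell
# Initial import of upstream foo 1.1 onto a vendor branch:
cd foo-1.1 && cvs import -m "Import foo 1.1" foo UPSTREAM FOO_1_1
# When upstream 1.2 appears, import it the same way:
cd ../foo-1.2 && cvs import -m "Import foo 1.2" foo UPSTREAM FOO_1_2
# Then merge your local changes forward between the two releases:
cvs checkout -j FOO_1_1 -j FOO_1_2 foo
```

    The arguments to `cvs import` are the repository module, the vendor tag, and the release tag, in that order.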
  • Version Control (Score:4, Insightful)

    by bigattichouse ( 527527 ) on Friday September 08, 2006 @09:13AM (#16065541) Homepage
    Your build should be in some sort of versioning system (CVS, whatever). SOMETHING that allows you to cover your butt when you `rm` that folder and realize you just tanked the whole thing. Somehow you should be able to rebuild any version of your project back to day 1.
  • Reproducibility (Score:5, Informative)

    by danpat ( 119101 ) on Friday September 08, 2006 @09:23AM (#16065598) Homepage
    If you're releasing a product to the public, the one word you need to keep in the back of your mind at all times is "reproducibility".
    Can you, at any point in the future, reproduce whatever version it is that customer X is having trouble with?

    There are many ways to do this, ranging from taking complete snapshots of each "build" (requires lots of space, but fast to reproduce), to keeping a short list of the Debian packages installed (not much space, but slower to reproduce). It's a classic space-vs-time tradeoff.
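    Whichever end of the tradeoff you pick, stamp every build with an identifier and record what went into it. A minimal sketch (the build-ID scheme and filenames are assumptions; on a real Debian appliance you would also save the output of `dpkg --get-selections`):

```shell
BUILD_ID="$(date +%Y%m%d)-r1234"   # r1234: the VCS revision shipped (assumed)
mkdir -p builds/"$BUILD_ID"

# Stand-in for the real image your build produces:
echo "appliance image contents" > builds/"$BUILD_ID"/appliance.img

# A checksum lets you match a customer's unit to an exact build later:
md5sum builds/"$BUILD_ID"/appliance.img > builds/"$BUILD_ID"/MD5SUMS
cat builds/"$BUILD_ID"/MD5SUMS
```

    The point is not the particular files but the habit: every artifact that leaves the building carries an identifier you can trace back to sources.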

    I'd suggest you automate the system build as much as you can. Use virtualization tools like VMware to help perform "builds" of your OS images. Most Linux distros have automated install processes: Red Hat has "kickstart", Debian has FAI. At the minimum, you should version control the script you use to build your VMware images, and the configuration script for FAI/kickstart. This should let you re-build everything at some later stage.

    When it comes to customising Debian systems, customised Debian packages are the way to go. If you're adding new files, package them up and deploy them as part of the automated install. If you're customising existing packages, edit their source and rebuild them with customised version numbers, and list those versions of the packages in your FAI script. You'll need to go through the whole version control process with each customised package too (i.e. check its source into a version control tool, tag it, apply your changes, tag it again, then build your .deb file). You can provide "answers" files for debconf so that no questions are asked during installation, and you can tweak various settings as you go along. If you've taken the VMware approach, you can always log in to the image and make final adjustments (just make sure they're scripted and version controlled) after the Debian install is complete.
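    The edit-and-rebuild cycle for one such customised package goes roughly like this (the package name and local version suffix are illustrative):

```shell
apt-get source openssh-server            # fetch the Debian source package
cd openssh-*/
# ...apply your changes, then record a distinct local version:
dch --local +mycompany "Disable password authentication by default"
dpkg-buildpackage -rfakeroot -us -uc     # build the customised .deb
```

    The distinct version suffix matters: it keeps your rebuilt package from being silently replaced by the stock Debian one on upgrade.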

    Do a search for "customized debian", there are quite a few people doing similar things already.

    Basically, make sure that the end product requires nothing more than a button push to produce. Anything less and you'll introduce the risk of someone forgetting to perform a step, or doing it wrong. That'll create a support nightmare down the track.

    If you can reproduce easily what your customer has, you can also easily make a minimally invasive fix for them. That'll make them happy :-)

    If you're looking for resources on this stuff in general, "configuration management" is what you want to search for. Librarianship for software systems: kind of dry and boring, but oh-so-necessary.
    • Re: (Score:3, Interesting)

      by Mr. Jaggers ( 167308 )
      (to the parent)
      Agreed, reproducibility is key. Not only for your customers, but for releases in general (your QA guys will love you if you can deliver builds to them and actually know which build you gave them by the time they're done testing it). Also, having stuff wrapped in .deb (or, I suppose, .rpm) packages is really nice too, as it makes field upgrades a snap (even on embedded appliances, provided you have a delivery mechanism), and again, your QA guys will love you if they don't have to wait for a whole
  • by KillerBob ( 217953 ) on Friday September 08, 2006 @09:25AM (#16065612)
    For distribution to the end user, you will definitely want a package of some kind. I'm assuming that your end users won't be able to log in with a prompt, but may have some kind of web-based management, right? If you distribute your upgrades/patches as .deb packages (maybe renamed to .bin, since that's what users have been conditioned to expect), then it makes things a whole lot easier... among other things, it would facilitate downloading the upgrade from a location other than where the product is: not all users have Internet connections at home, even in this day and age. You may also want to look into something like Slackware packages, since they ignore dependencies. (They're basically just a tarball... you can install them manually by untarring the file from / and then looking in the /install directory and manually executing any scripts there.)

    For actual development... you're gonna *need* to use Subversion or CVS. Cover your ass. Also, not having it makes managing a project a royal pain in the ass.
    • I think he was creating an appliance, which may mean rolling your own distro, which realistically means forking someone else's. But even with someone else's distro, you're going to want packages for distribution to the end user. Only use version control for things you actually need to tweak, but as soon as you start to tweak them, try to use the same version control as upstream and pull from their repository. And of course, for your own product it doesn't much matter, but you want version control for yourse
  • by Ramses0 ( 63476 ) on Friday September 08, 2006 @09:26AM (#16065619)
    ...but if you're using Debian, I would highly recommend that you spend a quality week or two *READING* the wonderful documentation debian has [] and read / ask a few questions on their mailing lists.

    Once you understand the package-management system of the SOFTWARE YOU ARE BASING YOUR BUSINESS OFF OF, the answer to your question will become clear... nay- simple.

      - MyCompanySoftware-1_0.deb, MyCompanyKernel-1_0.deb, MyCompanyOtherStuff-1_0.deb

      - Generous use of depends, requires, conflicts, provides, etc. (or maybe up-rev eg: kernel-image-2.6.8-1.deb to kernel-image-2.6.8-1-MyCompany-1.deb, these are the things you can ask for advice on Debian / Ubuntu lists).

      - Source control all files used in any of those *.deb packages, and make an automated build process that can take your source-control tree and generate your packages at any time of the day or night.

      - Set up internal repositories, i.e.: [] .../testing/, .../nightly/, etc., and integrate that with your testing/deployment infrastructure.

    But most of all, please READ the documentation that Debian has put together. In few words: it allows mostly volunteers, working in their spare time, to do exactly what you are trying to do, with a high degree of reliability. The Debian Policy manual is the first stop (and most likely the last) for almost anything you are trying to do. When you see the types of bug reports that are filed against packages that violate policy (i.e. incorrect depends, provides, etc.) you will see what types of mistakes are possible, and you should seriously consider how to check your own work so that it doesn't attract the same types of bugs.
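    As a sketch of the "generous use of depends/provides" point, a hypothetical debian/control stanza for a top-level appliance metapackage might read (all names invented for illustration):

```
Package: mycompany-appliance
Depends: mycompany-kernel (>= 1.0), mycompany-software (= 1.0), apache2
Conflicts: mycompany-appliance-beta
Provides: mycompany-base
Description: MyCompany appliance (metapackage)
 Pulls in everything a production appliance needs, at pinned versions.
```

    With something like that in place, a field upgrade reduces to pointing apt at your internal repository and upgrading one metapackage.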

    • There are several reasons why that by itself won't work...

      First, you're going to need to remove any and all debconf questions at install/update time. Additionally, if there are any packages left that don't use debconf, those will also need to be removed (I don't think there are any more, but I don't know for sure).

      Secondly, you're assuming that the configuration files for all the packages are perfect for the appliance. I doubt it.

      Thirdly, if I were a small startup company, I might want to think long and har
      • by Ramses0 ( 63476 )
        > There are several reasons why by itself won't work...

        Of course, IANAEDDM, and a slashbox is not enough space to fully explain good development practices.

        > ...Q's regarding configuration options...

        Or run debian in "no questions, defaults only" mode, or FAI [] or debconf answers, etc.
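        For reference, "no questions" mode is mostly one environment variable plus pre-seeded debconf answers; a hedged sketch (the selection shown is just an example, not a complete set):

```shell
# Feed debconf the answers it would otherwise ask for interactively:
debconf-set-selections <<'EOF'
locales locales/default_environment_locale select en_US.UTF-8
EOF
# Then install with no prompts at all:
DEBIAN_FRONTEND=noninteractive apt-get -y install locales
```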

        > ... configuration files for all the packages is perfect for the appliance.

        Hrm... Appliance... Toaster... All the same... Toaster configurations... Probably not an insurmountable problem.

        > an appliance like this
  • Conary/RPath (Score:3, Interesting)

    by SWroclawski ( 95770 ) <> on Friday September 08, 2006 @09:45AM (#16065761) Homepage
    I don't work for them and haven't used their product much, but there's a company called rPath (founded by early Red Hat employees) that seems designed for "appliances" just like yours.

    The idea is that you build your platform on their system, then add your programs on top. The system merges updates from them and your own work, and places the result onto the target system. The system they've built is called Conary. Conary itself is Free Software, but rPath sells services along with it that seem attractive.

    It looks very well put together, and if I were building an appliance, it's certainly something I'd be considering. []
  • Our situation (Score:5, Interesting)

    by Basje ( 26968 ) <> on Friday September 08, 2006 @09:49AM (#16065795) Homepage
    I work for an ASP. We've got a web application (built with Perl) running on Debian. At the moment we have about 15 servers live (some dedicated to one large customer, some with over 50 customers), 4 full-time developers on the product, 15 people in total, and we're quite successful in our niche.

    This is the short version of how we do things.

    * We looked for an ISP from which we rent the servers. They administer the servers (Debian stable) and install the Perl modules, Apache, etc. We don't have (or want) root access to these machines: the ISP is responsible for stability, and they do a good job. We ask them for changes/additional Perl modules when needed. We've had less than satisfactory experiences with several ISPs, so make sure you find a good one.
    * For a repository we use CVS. It is flexible enough for our needs, and there was some in-house experience with it. If you haven't got any experience with CVS, also take a look at Subversion or Mercurial, as you could benefit from the improvements there.
    * As a CVS client we use Eclipse. Great product, but unfortunately it is Java, and therefore slow. Some of the developers use the Eclipse editor, others use external editors (vi, baby :)
    * Our work environment is mixed. We all have Windows workstations, but for actual development we have a server with a dedicated Debian VPS for each developer. We connect to the VPS (which is hosted on our LAN and not accessible externally) through ssh, Samba and X. The VPSes are UML-based, but if we were setting things up today we'd probably use Xen. The advantage of VPSes is that it's easy to set up a clean development/test environment.
    * Have a release cycle, and try to stick to it. Most bugs are introduced when improperly tested code is deployed on live servers. Never edit directly on a live machine.

    Our current shortcomings (i.e. pitfalls):
    * Hardly any automated testing, and no formal testing procedures. Testing the application takes a lot of work, so it is often skimped on. This is a risk, and introduced bugs are occasionally missed.
    * The release policy is not always honored, due to deadlines. This puts a strain on the organisation because, as noted above, everything needs to be tested manually. This is when testing is most often skipped, and when most bugs are introduced. It's a commercial tradeoff: let a customer wait, or take the risk. Depending on who you ask, and when, you get different answers.
    • Hardly any automated testing [...]

      I understand that if you're creating mostly screens where people enter data. For other code, unit testing in Perl is dead simple. Create a subdirectory "tests" and create the test files in there. It's customary that they end with the extension ".t". The test could look like the following:

      #!/usr/bin/perl -w

      use Test::More tests => 2; # Increase the number of tests here

      use_ok('mymodule'); # The module we're going to test, can it be used?
      ok(1 == 1, "Test OK");

    • * As a cvs client we use eclipse. Great product, but unfortunately it is Java, and therefore slow. Some of the developers use the editor of eclipse, others use external editors (vi baby :)

      If you don't need all the Java editing, compiling, debugging etc. goodies of Eclipse, it is definitely overkill to use it just as a CVS client. Since you mention all developers are using Microsoft Windows, I would suggest the free TortoiseCVS as an extremely nice CVS client. Just check it out and you will almost certainly l

  • use debian packaging (Score:3, Informative)

    by coyote-san ( 38515 ) on Friday September 08, 2006 @09:54AM (#16065839)
    I would definitely put everything you do into Debian packages -- nothing should be done on testing and production systems by hand, and the package manager provides a known-good framework. There's a bit of a learning curve to producing Debian packages, but I believe there are some 'hardening' packages that can be used as models for the kind of sysadmin tasks you're looking at.

    You're using make-kpkg to build your kernel, of course, so it's already kicking out packages for your locally-built kernels. ... you are using make-kpkg, right?
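    For anyone following along, a make-kpkg run that stamps the kernel with a company revision looks roughly like this (the version strings are examples, not a required scheme):

```shell
cd /usr/src/linux
make-kpkg clean
make-kpkg --revision=mycompany.1.0 --append-to-version=-appliance kernel_image
# then install the resulting ../kernel-image-*.deb on the target with dpkg -i
```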

    I have to agree with the others that the fact that you're asking about version control tools is scary. That's something that should have been decided a long time ago.
  • Don't forget to build tests for each bug you fix (or at least those where it's feasible) and run them as regression tests, which should be part of every build process.

    Not to mention installation testing on all supported platforms, and certifying your product against the versions of the various Linux distros you've tested.
  • by anomaly ( 15035 ) <> on Friday September 08, 2006 @10:31AM (#16066088)
    One important point not yet raised is that you need to control the platform for your appliance. Every customer should be on a release of your code, tested and deployed on a release of the platform - hardware as well as OS configuration.

    Security and feature patches are helpful, but can also bring complexity and confusion when it comes to troubleshooting.

    Your best bet is to tweak a build of the OS (shut down unneeded services, automatic updates, etc) then GHOST (or equivalent) the disk so that EVERY customer gets the same thing.
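    The GHOST equivalent on Linux is just dd plus compression. A sketch of the round trip, using a scratch file in place of the real disk (on the appliance, the input would be the block device, e.g. /dev/sda):

```shell
# Scratch file standing in for the appliance disk:
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null

gzip -c disk.img > golden-master.img.gz      # capture the master image once
gzip -dc golden-master.img.gz > restored.img # restore it verbatim on each unit

cmp disk.img restored.img && echo "images identical"
```

    The byte-for-byte restore is the whole point: every unit in the field starts from an identical, known state.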

    If you choose to rev the hardware or the OS, how will you make sure that your installed base has the same stuff? I can't emphasize enough how important this is to long-term support.

    You'll need to consider how to slipstream patches in (connection to your website, flash drive, CD, etc) for both the OS and your code.

    You'll need to design it so that you can upgrade the OS install without affecting your application (perhaps a separate filesystem?) What will happen after your customer has installed and used your application for 12 months and you decide to upgrade your code? Do they have customizations? Will your upgrade work?

    Hope these ideas help.

    • Your best bet is to tweak a build of the OS (shut down unneeded services, automatic updates, etc) then GHOST (or equivalent) the disk so that EVERY customer gets the same thing.
      We use a PXE boot process to do this at the company I work for. Since you're using Linux, it's probably a good free alternative to GHOST, so long as your hardware supports it.
      • PXE (Score:3, Interesting)

        by anomaly ( 15035 )
        Does your PXE process automatically partition/format the disk with the OS?

        I used PXE boot on Linux a few years ago with great success, but when I was considering doing an appliance-type solution, I created a customized system rescue CD [] which included a .tar.gz of each filesystem.

        This would have allowed me to script the partitioning process, as well as the extraction of the filesystems to end up with a bootable CD which would create an appliance hands-free. (At least that is what I was on target to complet
        • Precisely. The PXE image is just a small image that looks at the Ethernet address of the client node, and uses that to download the correct partition table information, partitions the 4 disks, creates raid devices, and downloads/installs images of the partitions. I wasn't the one who set it up, but have had to make tweaks here or there and the whole process works really well. I'd say we PXE a few hundred computers a day. The process takes about 15 minutes to do 32 computers at once on 320 GB disks (
          • by pe1chl ( 90186 )
            I'm looking for a kind of "PXE bootselector". We use PXE for PC Windows installs and use a 3com tool (IMGEDIT) which creates a menu and can create .img files that basically are floppy images that you can select and boot. This can be used to start a Windows install, a manufacturer diagnostic disk, an old partition magic floppy, etc.

            However, because of memory management issues it seems to be impossible to use certain programs, including a Linux installer, from there.
            When I want to experiment with using a P
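  The MAC-address-driven flow described above could be sketched like this. It is a dry run: it only prints the commands a real imaging client would execute, and the profile names, device paths, and image server are all hypothetical:

```shell
# Map a client's MAC address to a provisioning profile, then show the
# partition/RAID/imaging steps a real PXE imaging client would run.

SERVER=http://imaging.example.com     # hypothetical image server

profile_for_mac() {
    case "$1" in
        00:16:3e:*) echo "appliance-v2" ;;   # newer hardware batch
        *)          echo "appliance-v1" ;;
    esac
}

provision() {
    mac="$1"
    profile=$(profile_for_mac "$mac")
    echo "provisioning $mac with profile $profile"
    # A real client would execute these; the sketch just prints them.
    echo "sfdisk /dev/sda < $profile.sfdisk"
    echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1"
    echo "curl -s $SERVER/$profile.img.gz | gunzip | dd of=/dev/md0 bs=64k"
}

provision "00:16:3e:aa:bb:cc"
```

  Keying the profile off the MAC address means a bare machine plugged into the network comes up fully imaged with zero keyboard input.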
  • I would recommend looking at what other people in your situation are doing. Roaring Penguin uses Debian for their appliances and pushes updates out to all systems. You could either open a dialog with their dev team (great people) or buy a low-end unit and look at the guts yourself. They give users complete control over the appliance, which is nice.
  • Yes, I'm a corporate shill, but rPath's rBuilder tools are specifically designed to do what the poster is asking: manage appliances based on Linux: []
  • We [] are developing the port of Linux to the Nintendo DS. The project is based on uClinux. We have inherited uClinux' build system and CVS organisation.

    Just like in uClinux, our CVS repository [] contains everything (Linux kernel, uClibc C library, uClinux userland). It is very, very large (almost 1GB). It has multiple branches to keep imports of third party sources organised. I've written a page on our wiki [] that explains how we set things up in the repository.

    Not everyone is really happy with this. While I

  • That's pretty much it.
    Maintain your code in an svk repository.
    Generate .debs of all packages you modify in the system (including a metapackage that depends on your whole system).
    Use cmake to build your binaries.
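    The "one package that depends on your whole system" trick is a standard Debian metapackage; a sketch of its `debian/control` (every package name here is made up for illustration):

```
Source: acme-appliance
Section: admin
Priority: optional
Maintainer: Appliance Team <team@example.com>
Standards-Version: 3.7.2

Package: acme-appliance
Architecture: all
Depends: acme-kernel, acme-webui, acme-watchdog, ntpdate
Description: metapackage pulling in the complete appliance system
 Installing or upgrading this one package brings every component of
 the appliance to a known, consistent set of versions.
```

    Upgrading the fleet then reduces to bumping the metapackage's dependencies and letting apt resolve the rest.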
  • I tend to do basically what MPlayer [] does nowadays, and that includes using Subversion [] for SCM, Bugzilla [] for bugs, Mailman [] for mailing lists, and Apache HTTPD [] for serving it all. I like to maintain a Debian Sid package (you can't directly upload to stable or testing anyhow, so the Debian maintainers will take care of adapting the package for those distros), and if someone else on the team knows anything about RPM, we also maintain the .spec file as well. I also like to keep an emerge script in there as wel
  • Some ideas.... (Score:2, Informative)

    by charlesnw ( 843045 )
    I have written extensively on this problem at my Blog []. I use Morphix, which is a system for building live CDs. They provide a core and you build on top of it. I have lightly modified the core (with a custom kernel and custom modules). Then you create a main module (which is just an XML file of Debian packages). Morphix tools work out all the dependencies etc. I do all of my development in VMware as it gives me a separate process space/machine to do all my work in. I will be presenting on it this Saturday at
  • "Linux gave us a huge jumpstart on development because of it's open nature and the information we've garnered from public mailing lists."

    Sure, because everyone knows that non-OSS operating systems don't have any documentation that allows programmers to easily develop for them, and certainly not anything as comprehensive, concise, and organized as what you find on a public mailing list.
  • Dooooomed (Score:1, Flamebait)

    by MeanMF ( 631837 )
    I love when people hype up "free" software without thinking about all of the other things you need to have in place to get it working. Choosing a platform without looking at critical components like build tools and version control is inexcusable - find another job now before this project is completely doomed and you have no choice.
  • You would manage it the same way you manage any other project. Your application names may vary, but the methodology will be pretty much the same.

  • Say, something like LinuxLink [] from TimeSys. You can sign up for a free trial with LinuxLink, or if even that's too much for you, you can take a look at some free (as in beer) tools and FC5-based distributions built using LinuxLink on the TimeSys crossdev [] site.

    Of course, I'm a bit biased, since I'm a former TimeSys employee and helped build a lot of what they're offering :-). Having been through all the pain of building a few hundred Linux SDKs over the past five years, I'd really, really, really recomme

  • You may not have to do a thing.

    As far as I know, most native Linux games (Unreal, Doom, etc.) simply distribute an install script, which is essentially the Linux equivalent of a self-extracting zipfile. Distros are free to repackage it, and Gentoo does. Just work with the existing distro maintainers.

    You may have other problems, like making everything consistent across distros. You can either go the typical gaming route -- statically link everything you can, and include a couple of .so
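  The "self-extracting zipfile" style of installer is easy to roll by hand: concatenate a stub script and a tarball, and have the stub extract everything after a marker line. A minimal sketch (real installers such as makeself add checksums, options, and embedded setup hooks):

```shell
# Build and exercise a self-extracting installer: stub script + tar payload.

# 1. Some payload to ship.
mkdir -p payload
echo "hello from the appliance" > payload/README

# 2. The stub: extracts everything after the __ARCHIVE__ marker.
cat > installer.sh <<'EOF'
#!/bin/sh
line=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit 0}' "$0")
tail -n +"$line" "$0" | tar xzf -
echo "installed."
exit 0
__ARCHIVE__
EOF

# 3. Append the compressed payload and make the result runnable.
tar czf - payload >> installer.sh
chmod +x installer.sh

# 4. Run it somewhere else to "install".
mkdir -p target && cd target && ../installer.sh && cd ..
cat target/payload/README    # -> hello from the appliance
```

  The `exit 0` before the marker is what keeps the shell from ever reading the binary archive data as commands.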
  • You didn't decide that before you started pushing this out the door????


    That said, you can still salvage the situation. It would depend on what you're doing with the box.

    Personally, I'm not a big fan of the Debian distros because they don't update their packages often enough to suit me. That's really my only criticism of them, though. The whole packaging system that they use is pretty powerful, and I'm sure that you can bend it to your will to update whatever it is that you're rolling out.

    There are a
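  Bending Debian's packaging system to your will usually means hosting a private apt repository for the packages you roll out; the client side of that is a single extra line (the host and suite names here are hypothetical):

```
# Added to /etc/apt/sources.list: pull appliance updates from your own
# repository alongside the stock Debian archives.
deb http://updates.example.com/debian stable main
```

  With that in place, pushing an update to the fleet is just uploading new .debs to the repository and letting each appliance run its normal apt upgrade.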
  • 1) Immediately go here [] and read the whole thing. Then keep it near you throughout the entire process of developing your product. Although not as strictly necessary, reading this [] site definitely won't hurt you either.

    2) Do not go near rpm.
    3) Do not go near dpkg/apt.

    I can safely say that there is no packaging system in existence for Linux that I am completely happy with on all fronts. They all have egregious problems, and what is even worse, re-inventing the wheel tends to get virtual tomatoes thrown
