So here you go. Far too much conceptual information about a process I suspect almost no one here knows beyond the few that already mentioned it. Enjoy.
So the best I can do is tell you how I do it for about 400 Macs, and the tools I use. I basically use two OS-X 10.6 servers that host NetBoot images and Radmind, and then Apple Remote Desktop (ARD) on a client to control events on all the clients, whether they're booted locally or NetBooted.
I'll also be up front: if you are not computer savvy, and don't want to be, do not touch Radmind with the idea of using it to deploy anything beyond software to an already existing deployment. Stick with an image-based package. If, however, you are computer savvy, can get around a command line, and need to support an unlimited number of *nix machines, especially in a lab, Radmind is an incredibly strong tool.
I use Radmind exclusively for both OS deployment and software updates because it's a delta-based package and tripwire system that you don't need to rebuild over time, unless an administrator makes horrible mistakes without a backup. If I really needed an image, I would have Radmind generate that build for me and then use 10.5/10.6's NetBoot/NetInstall creation tool on the results.
I do not use NetRestore, NetInstall, or any other deployment tools for OS-X. It is a waste of time to constantly rebuild and maintain various images over time versus a delta-based deployment system, especially when I'm the only one supporting the image. It may take *slightly* longer to deploy than a sector-based image, but the long-term effort placed on the administrator significantly decreases. Sure, learning Radmind might take a whole lot of time and effort, but the more random and variously configured the machines you need to support are, the more attractive it becomes to spend time learning how to use it beyond a software package deployment tool. Heck, the right people behind it could probably support thousands of *nix servers without much of any effort.
You can also reverse the use of Radmind over time to maintain just software packages by making a negative transcript targeting just ".". If you do that, and make sure clients don't see the overall OS-level packages, you can update software without touching the OS at its core.
So Radmind comes with a set of tools, and I'm only going to mention the most critical of them. One scans a computer for changes. Two other tools take that scan and either use it to upload data to a server, or use the knowledge on the server to 'cause' changes on the client. Another downloads the command lists from the server; those command lists have knowledge of all the "package" transcripts that define almost every file on the computer. Using them all in combination, in scripts written by someone who knows how to manipulate the results, is what makes Radmind powerful.
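To put names on those: the scanner is fsdiff, the uploader is lcreate, the applier is lapply, and ktcheck is what pulls the command lists down. A rough sketch of the two basic cycles, with the flags from memory (check the man pages before trusting me, and I've left out the -h <server> flag that ktcheck/lapply/lcreate take to point at your Radmind server):

  # Maintenance/update cycle on a client
  ktcheck -c sha1                       # pull the current command file and transcripts from the server
  fsdiff -A -c sha1 -o changes.T /      # compare the disk against the transcripts, output an apply-able transcript
  lapply -c sha1 changes.T              # pull down whatever is needed and force the disk back into compliance

  # Capture cycle after installing something new
  fsdiff -C -c sha1 -o New_Package.T /  # output a create-able transcript of what changed
  lcreate -c sha1 New_Package.T         # upload that transcript and its files to the server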
Up front there are negatives and positives about Radmind:
Negatives:
It can be very complicated.
A lot of the documentation is poor, though it's better today than when I started using it.
Simple mistakes in a transcript can suddenly prevent the client-side app from functioning, and discovering why can sometimes be very difficult (especially if it's a nested command file issue that only gives you "Input/Output error" when lapply crashes).
It only supports network compression, which frankly is worthless. No file-based compression during capture.
Almost any error in a delta file will break the process of updating/deploying machines. It really requires having someone learn it inside and out.
The default method of deploying images to massive numbers of machines that may need different builds is unwieldy. There are ways around some of this.
Once you have several hundred transcripts, the GUI console on OS-X is annoying to use, and creating and using subfolders for transcripts or command files will seriously screw your deployment life up.
It has no GUI on anything except OS-X, so your master Radmind server is best run on OS-X.
Little about the way I use Radmind is publicly documented; I took what Radmind comes with and scripted a deployment system around it.
Unless you script it, doing single-application deployments is tedious.
A full maintenance run will undo all end-user changes unless you are explicitly ignoring that area of the disk.
You have to script it so the computer renames itself properly after maintenance, best done with a MAC address client list. Or you have to ignore the file that contains the workstation name.
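For the renaming, a minimal sketch of what I mean; the table file, its location, and its column layout here are made up for the example, but scutil is genuinely how you set names on OS-X:

  # clients.txt (hypothetical): MAC <tab> name <tab> department <tab> room
  MAC=$(ifconfig en0 | awk '/ether/ {print $2}')
  NAME=$(awk -F'\t' -v m="$MAC" '$1 == m {print $2}' /var/radmind/clients.txt)
  if [ -n "$NAME" ]; then
      scutil --set ComputerName "$NAME"
      scutil --set LocalHostName "$NAME"
      scutil --set HostName "$NAME"
  fi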
Positives:
It's open source; BSD licensed.
You can manage any variant of Linux/Unix/OS-X with Radmind. There is also a Windows version, but differences in how Windows functions versus *nix make that variant much less valuable.
It's powerful, and in the hands of someone who can script around it, it's possibly the most powerful deployment tool out there.
There is practically no environment Radmind couldn't deploy to, so long as we're talking *nix and the client tools are compiled for it. The primary limitation of Radmind is the knowledge of the administrators using it.
The OS-X server version at least has a GUI which makes configuring machines more object oriented. It's far easier doing .K files with the server GUI than it is with a text editor.
There is enough public documentation for you to learn the basics of how to use it, but any serious use of it as a deployment system requires scripting.
The OS-X client GUI for capturing images is good for learning how the terminal commands work, but over time you'll completely abandon it and just use scripts to build new delta transcripts. Once you start using multiple command files to nest complex builds, the client GUI will completely fail to function.
It's easy to make any client a replica Radmind server and clone all the delta transcripts/files to it. Technically almost any system could be a replica of your Radmind configuration. It helps to separate your 'staging' environment from production; just sync production with staging whenever you consider new deltas final.
If you know bash scripting, you can make your delta deployments fully emulate the idea of software groups or deployment groups. You deploy knowledge of every build to your clients, and then your scripts filter out what's unneeded.
It supports sha1 hashing for validating files sent to the server or client. With that you can be sure your delta images on the server are not corrupt. It can also confirm that every file on the client is 100% what it should be, making this tool akin to a version of tripwire that can also repair the system.
You don't need deployment protection tools such as DeepFreeze. Radmind acts like tripwire in detecting changes, except it can undo them.
A full maintenance run will undo all end-user changes unless you are explicitly ignoring that area of the disk. This makes updated systems as good as if they were newly deployed.
You can update targeted directories vs. the entire OS if you want. Good for updating public user accounts every time they log out.
It has some limited built-in ability to deploy unique builds to systems based on IP/subnet and some other system-side information. I do not use this part of Radmind, however; I push out a base command.K file that contains information about everything, which a script then filters down.
As an example, this is how I use Radmind, with some limited terminology.
Command (.K) files are the list of what to do to a machine. Transcript (.T) files are the actual "packages" being deployed. "Positive" transcripts are changes you make to a machine; "negative" transcripts are areas of the computer to ignore.
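Roughly, a .K file is just a list of pointers (k = another command file, p = positive transcript, n = negative transcript), and a .T file is a list of filesystem objects. The names and the exact field order below are from memory and for illustration only, so check the documentation:

  # command file (.K)
  k os.K
  n Standard_Negative.T
  p Firefox_3.5.T

  # transcript (.T): type, path, mode, owner, group (files also get mtime, size, and a checksum)
  d ./Applications                                   0775  0  80
  f ./Applications/Firefox.app/Contents/Info.plist   0644  0  80  <mtime>  <size>  <checksum>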
Radmind starts out with a base.K file. I create a bunch of new ones, which I should mention now: as I recall, doing this will instantly prevent the GUI client-side deployment/capture tool from functioning.
os.K
os^OS-X_10.4_Client^PPC.K
os^OS-X_10.4_Client^Intel.K
os^OS-X_10.5_Client.K
os^OS-X_10.5_Client^Avid_and_ProTools.K
os^OS-X_10.6_Client.K
department.K
department_name1.K
department_name2.K
software_level_1.K
software_level_2.K
room.K
room^room_number.K
computer.K
computer^name1.K
computer^name2.K
In that list above, os.K contains all the .K's that start with os^. The first level of room.K, computer.K, and department.K works the same way. Software level 1 is for software that everyone gets. Level 2 is the video production/3D applications that only certain areas need (as these packages eat up far more disk space than Adobe CS4 and web-based apps). The more specific .K files are the ones I actually put software packages in; I *never* put software packages in the first and second levels of nested .K files, but you will probably use it far differently than me at first.
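So os.K itself contains nothing but pointers to the more specific OS command files, roughly:

  k os^OS-X_10.4_Client^PPC.K
  k os^OS-X_10.4_Client^Intel.K
  k os^OS-X_10.5_Client.K
  k os^OS-X_10.5_Client^Avid_and_ProTools.K
  k os^OS-X_10.6_Client.K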
So I have a name table list that contains the MAC addresses of every computer in my organization that I control, for both en0/en1 (typically physical/wireless). Inside that file, I list the department, room number, name, etc. of every computer. I script it so deployment uses this information to *remove* excess information from command files.
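A sketch of the filtering idea; the table layout, file names, and paths are all just illustrative, nothing like my actual scripts, just the shape of the idea:

  MAC=$(ifconfig en0 | awk '/ether/ {print $2}')
  # this machine's row: MAC <tab> name <tab> department <tab> room
  NAME=$(awk -F'\t' -v m="$MAC" '$1 == m {print $2}' clients.txt)
  ROOM=$(awk -F'\t' -v m="$MAC" '$1 == m {print $4}' clients.txt)

  # keep the generic entries, drop room^/computer^ entries that belong to other machines
  awk -v keeproom="room^${ROOM}.K" -v keepcomp="computer^${NAME}.K" '
      $1 == "k" && $2 ~ /^(room|computer)\^/ { if ($2 == keeproom || $2 == keepcomp) print; next }
      { print }
  ' command_full.K > /var/radmind/client/command.K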
Why I do it all this way is difficult to explain in one go, but basically it lets me do far more flexible deployments where the default Radmind deployment methods are extremely rigid. This is more of an example of how it can be used, not how anyone would use it on the first attempt, primarily because the GUI tools for OS-X will really help you understand how to use the command line.
Let's see... I also recommend using "./" in all your paths in Radmind, and making that your default when you first use the GUI client. Using Radmind as your only deployment system is impossible without being able to use relative paths to target the root of a drive you're not booted from. You can always change it down the road, but I regretted having to do that once I realized it could be used for deployment instead of NetInstall.
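For example, when a client is NetBooted, relative paths are what let you point the whole process at the internal drive instead of the boot volume; as I recall the tools resolve "./" against the directory you run them from (the volume name below is just an example, and again, check the man pages):

  cd "/Volumes/Macintosh HD"             # the machine's internal drive, not the NetBoot volume
  ktcheck -c sha1
  fsdiff -A -c sha1 -o /tmp/changes.T .  # "." plus ./ paths in the transcripts = target the internal drive
  lapply -c sha1 /tmp/changes.T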
Plus, when you capture an OS for the first time, you normally use negative transcripts to prevent capturing data that is in use. You also must do a second pass with a negative transcript that ignores only an item of no overall deployment value (like ./private/tmp). I name this second pass exactly the same as the first, except with "^Offline.T" appended. I always capture system-level transcripts in two delta stages; for example, "OS-X^10.5.8^Update.T" and "OS-X^10.5.8^Update^Offline.T". The "^Offline.T" file contains all the data not captured during the first pass. My scripts know when a machine is NetBooted versus booted off the local disk, and "^Offline.T" files, which contain items that are normally in use (such as the user account database), are only applied when NetBooted.
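In practice the two captures look something like this; the two capture command files named here are inventions for the example, each just pointing at a different negative transcript:

  # first pass: normal negatives, skips whatever is in use
  fsdiff -C -c sha1 -K capture^Online.K -o "OS-X^10.5.8^Update.T" /
  lcreate -c sha1 "OS-X^10.5.8^Update.T"

  # second pass: a negative that ignores only ./private/tmp, so it sweeps up what the first pass couldn't
  fsdiff -C -c sha1 -K capture^Offline.K -o "OS-X^10.5.8^Update^Offline.T" /
  lcreate -c sha1 "OS-X^10.5.8^Update^Offline.T"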
Also, for OS-X clients, avoid case-sensitivity support in both your file system and in Radmind unless you really require it. I recommend going into your Radmind server settings and making it case-insensitive by default. Vendors are very inconsistent about the case of directory names between product versions, and case sensitivity may eventually render applications (or the entirety of your Radmind set) useless and force you to rebuild from scratch. It gave me nothing but headaches. Anyway, that was my experience with it during the 10.2 days; I stopped doing that by the time 10.3 came out.
Phew. That's all I care to say right now. Hope it helps.