and...what do you use to push out all the Docker images to all the machines?
"What if I want to KickStart a Desktop machine and don't want it to be a live image?"
Use Server. The Server network install image is the canonical thing to use for non-live installs of any kind; basically, use it just as you'd have used the netinst.iso in previous releases.
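As a sketch of what that looks like in practice, here's a minimal kickstart for a non-live desktop deployment from the Server netinst. The mirrorlist URL, partitioning choices, and environment group id are illustrative assumptions, not canon; check the repo's comps (or `yum grouplist`) for the exact group ids on your release:

```
# Illustrative kickstart: non-live desktop install from the Server netinst.
# URL, partitioning, and group id below are assumptions; adjust for your site.
url --mirrorlist="https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-21&arch=x86_64"
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --lock
bootloader --location=mbr
clearpart --all --initlabel
autopart

%packages
@^workstation-product-environment
%end

reboot
```

You'd point the netinst at it with the usual `inst.ks=` boot parameter.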
We're aware this sounds a bit weird, sorry about that. I can give you the *extremely* long version if you like, but the short version is that when it came to actually *implementing* the Product stuff there were the kinds of 'oh, so that doesn't quite work the way we thought it would' moments you'd expect in making such a significant change to an existing distro with existing release engineering tooling.
The upshot of one of them was that having Product-ish network install images turned out to be basically impossible to do, and after a while of banging our heads on trying to fix it we figured, you know what, we don't really need them anyway. Given how the practical implementation of the Products turned out for F21 at least, we can just have a single network install that can deploy anything, just like we did before.
Unfortunately, by that point it wasn't really practical to set up some kind of new/old tree to build it out of and give it generic branding, so the story for F21 is: for anything like that, use Server. Use the 'Server' network install image for doing any kind of non-live deployment. The only 'Server' things about it are the visual branding and the fact that it *defaults* to the Server package set; you can successfully deploy any Product or non-Product package set from it, and it's functionally little different from the F20 generic network install image.
The Server/ tree on the mirrors is also the canonical source of things like the PXE boot kernel/initramfs, and the fedup upgrade initramfs.
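For illustration, a small shell sketch of where those PXE files live under the Server tree on a mirror. The mirror hostname here is a placeholder; substitute a real Fedora mirror, but the path layout under the release tree is the standard one:

```shell
# Sketch: locating the PXE boot kernel/initramfs in the F21 Server tree.
MIRROR="http://download.example.org/fedora"   # hypothetical mirror root
TREE="$MIRROR/releases/21/Server/x86_64/os"

# The pxeboot kernel and initramfs sit under images/pxeboot/ in that tree:
echo "$TREE/images/pxeboot/vmlinuz"
echo "$TREE/images/pxeboot/initrd.img"
```

These are the paths you'd feed to your TFTP/PXE setup's kernel and initrd entries.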
Again, this obviously isn't optimal design, it's just kinda how things worked out in the F21 timeframe (there are some really boring release engineering considerations behind it all that I can explain if you're having trouble sleeping). For F22, all being well, it'll be cleaned up.
The 'Fedora' DVD wasn't actually an 'Everything' DVD, for the record. The repo tree called Everything has literally every package in it but is not 'composed', i.e. it doesn't have installer images and we can't build release media out of it. It still exists for F21. The Fedora repo tree in previous releases (it doesn't exist in F21) was what the DVD and netinst images were built out of. It didn't contain all packages; it contained the set of packages chosen to go on the DVD media, substantially fewer than are in the Everything tree.
The 'Fedora' generic DVD image was dropped as part of the whole Product-ization approach, the basic idea being that there's a Product image or live spin for most use cases, and install via the Server netinst covers the rest. The specific case of 'I want to do an offline install with a custom package set that's covered by the old Fedora DVD package set but not the new Server DVD package set' is lost with this change, yep, and we're sorry about that. Ultimately, to make a change as significant as Products, *something* had to be lost, and that's one of the things that was. The Fedora/ tree in the repos doesn't exist any more because its purpose was to build the Fedora DVD image.
Lennart doesn't have anything to do with firewalld, FWIW.
Choosing the Server product on upgrade will install the Server packages, including its firewall configuration and Cockpit, because...that's Server. If you just want to keep the existing packages you have installed, choose 'nonproduct'. You can remove Cockpit if you don't want it.
Correct, it's not considered to be one of the Products. It's just Fedora.
There isn't one, the Cinnamon maintainer doesn't want to make one, for some reason. You can install from the 'Server' netinst and choose the Cinnamon package set, though, that'll work fine.
it doesn't 'clean' anything. it switches to a new file and, whenever you read the journal, reads as much data as it can from the corrupted file.
of course it doesn't. it has sensible maximum file sizes (both in terms of absolute size and percentage of the filesystem they're on).
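For reference, those limits are tunable in /etc/systemd/journald.conf. The values below are examples to show the knobs, not the shipped defaults (by default journald caps itself at a percentage of the filesystem the journal lives on):

```ini
# /etc/systemd/journald.conf (example values, not the defaults)
[Journal]
# Cap total disk usage of the persistent journal:
SystemMaxUse=500M
# Cap the size of any individual journal file:
SystemMaxFileSize=64M
# The same knob exists for the volatile journal under /run:
RuntimeMaxUse=100M
```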
"This means it wont find and grubify existing OS installs, including windows. "
No, it doesn't. We test that. It works fine. It uses os-prober, see
Fedora Release Day Drinking Game: At least two idiotic comments about systemd in the first five on
A Fedora community member releases periodic respins of Fedora stable releases; they're not official releases and they don't go through QA but FWIW I'd trust the guy if I needed a respun image in a pinch. http://jbwillia.wordpress.com/ is his site, you can find the spins at https://alt.fedoraproject.org/...
well, ultimately the init system launches *everything* if you take a broad enough view of things, but that doesn't mean the init system is somehow responsible for your desktop environment's display configuration.
I mean, if you're really determined to, you can 'configure' things by modifying their init scripts, sure, but it's usually not the right way to do it. X has a perfectly good configuration system already. So does GNOME. If you want to change the DPI at one or other of those levels, go configure it through their configuration systems. That's how it's supposed to work.
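To make that concrete, here's a sketch of doing it at the right layer. The Xresources line is written to a temp file so the example is side-effect free; the gsettings command is shown as a comment because it needs a running GNOME session, and the specific values are just examples:

```shell
# Setting DPI through the proper configuration systems, not init scripts.

# Plain X sessions: Xft DPI belongs in ~/.Xresources (temp copy used here):
cat > /tmp/Xresources.example <<'EOF'
Xft.dpi: 144
EOF
# It gets loaded at session start with: xrdb -merge ~/.Xresources

# GNOME: use GNOME's own configuration system instead, e.g.
#   gsettings set org.gnome.desktop.interface text-scaling-factor 1.5

cat /tmp/Xresources.example
```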
That's far too reductionist. For a start, there are many sysv-compatible init implementations that do parallel boot; upstart does it, Mandriva's pinit does it. There's a whole subset of LSB that exists exclusively to provide a way for sysv initscripts to represent dependencies *precisely in order to enable parallel init* - see https://wiki.debian.org/LSBIni... for a good write-up of that.
Secondly, insofar as systemd is intended to improve boot speeds, it wasn't actually just about implementing simple parallelization of sysv-style services using dependencies. If you read http://0pointer.de/blog/projec... it talks a lot about parallelization, but it's actually talking about making *more* parallelization possible, not just *implementing* parallelization. The big idea Lennart had back then was that you don't actually have to completely start up a service in order to start another service that 'requires' it: if you can create the socket it listens on before it's ready, you can queue up any requests and pass them on to the service once it's actually done starting up. Lennart was clearly really excited about this idea at the time, but if you look at systemd these days, it's a really pretty small corner of all the things it does.
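As a concrete sketch of that socket activation idea: systemd splits a daemon into a socket unit and a service unit. systemd creates and listens on the socket from early boot, and only starts the daemon when the first connection arrives, queueing traffic in the meantime. The unit name 'foo', the socket path, and the binary here are all hypothetical:

```ini
# foo.socket -- systemd owns the listening socket from early boot
# ('foo', the socket path, and the binary are hypothetical)
[Socket]
ListenStream=/run/foo.sock

[Install]
WantedBy=sockets.target

# foo.service (a separate file) -- started on the first connection;
# a daemon using sd_listen_fds() inherits the socket fd from systemd
[Service]
ExecStart=/usr/sbin/food
```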
All the way through the first part of that first post, Lennart is really talking about making more parallelization possible, he's not simply talking about implementing inter-service dependencies.
These days systemd does an awful lot more, and it really isn't just about making boot faster any more. Even in the very first post, once you get past the first half, it starts talking about improved capabilities. I find startup speed the least interesting thing about systemd, really; I'm much more interested in the improved capabilities for units, and especially in the improved logging journald provides.
"Since RedHat's obviously the largest major proponent"
For the record, there's absolutely nothing 'obvious' about that. People tend to assume that since Lennart was @redhat.com when he wrote systemd it's 'obviously' a Red Hat project, but it really isn't, and never was. It's a Lennart project: he came up with the idea and he wrote it. Red Hat didn't ask for it, didn't actually have any idea it was coming.
The very first instance of all these battles that get fought every six weeks in some distro or on
"Today's robots are very primitive, capable of understanding only a few simple instructions such as 'go left', 'go right', and 'build car'." --John Sladek