Comment Re:Incremental approach (Score 1) 455
I stand corrected on the autologin issue, thanks.
Another thing the new GDM cannot do is to specify the arguments of the X server command line (e.g. for running two X servers in a multiseat setup).
I think it is not because of GNOME's incremental approach, but rather because of their decremental approach.
Things like replacing GDM with a rewrite that still does not match the original GDM feature-wise (for a long time it could not even do XDMCP, and it still cannot do auto-login on single-user systems), replacing Sawfish with Metacity, or replacing Galeon with Epiphany, which - even with the epiphany-extensions package - still cannot match Galeon (despite the fact that Galeon's development has been dormant for several years now), etc.
I guess the next decremental step would be kicking out Ekiga in favour of Empathy.
How good are LVM snapshots at rolling back a hosed OS?
About the same as VM snapshots. For example, Solaris uses ZFS snapshots for OS upgrades by default.
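The snapshot-before-upgrade workflow mentioned above can be sketched with LVM2. This is a minimal sketch, not a full procedure: the volume group name `vg0`, logical volume name `root`, and the 5G snapshot size are assumptions you would adapt to your own layout, and both commands require root.

```shell
# Take a snapshot of the root LV before applying an OS upgrade
# (vg0/root and the 5G copy-on-write size are illustrative):
lvcreate --size 5G --snapshot --name pre_upgrade /dev/vg0/root

# If the upgrade hoses the OS, merge the snapshot back into the origin,
# restoring its pre-upgrade state. For a mounted origin the merge is
# deferred until the next activation, i.e. it completes after a reboot:
lvconvert --merge /dev/vg0/pre_upgrade
```

If the upgrade succeeds, you simply `lvremove` the snapshot instead of merging it, freeing the copy-on-write space.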
How easy is it to:
Use the same hardware to run a test environment for that case without any virtualization products like OpenVZ/Virtuozzo or Xen/VMware etc?
It depends on whether your application has any hardcoded paths in it, whether it can use e.g. different TCP ports, whether it is easy to impose ulimits on it, etc. Some applications can do it, some cannot. Still, it is much easier to have (say) two VMs - one for testing and one for production use - but definitely not one per application.
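The point about ulimits can be made concrete: the kernel already lets you fence off one instance of an application from another without any hypervisor. A minimal sketch, where the limit values are purely illustrative (a distinct TCP port would additionally be passed via the application's own config or environment):

```shell
# Run one instance inside a subshell with its own kernel-enforced
# resource limits - no virtualization involved. Values are illustrative.
(
  ulimit -n 256        # cap open file descriptors for this instance
  ulimit -v 1048576    # cap virtual memory at ~1 GiB (value is in KiB)
  ulimit -n            # everything started in this subshell inherits the caps
)
# prints 256
```

Lowering a soft limit like this needs no special privileges; raising one back above the hard limit does.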
Just a minor nitpick: for rolling back a SW upgrade, you can use LVM/FS/storage snapshots, which are as good as the virtual machine snapshots.
And of course, many HW problems affect the whole system; the virtual machines have no means of magically escaping from this.
Don't get me wrong, I also use virtual machines for many purposes. I just wanted to point out that a "one application = one virtual machine" approach is quite insane. We already have a means of isolating applications from each other - it is called "an operating system kernel". Virtualization just adds unnecessary overhead in this case. OTOH, running a virtual machine with _many_ applications for the purpose of live migration and whatnot - this is a right use of virtualization.
What do you do when you have to go down to the physical machine to patch firmware/bios? You lose all your applications, right?
Well, I patch the firmware probably once or at most two or three times in the server HW lifetime. OTOH, patching the OS kernel is way more frequent in my part of timespace. And the new kernel means server reboot, be it virtualized or not.
The point of virtualization is mostly to isolate the applications which require different operating systems or OS versions (with a minor added bonus of faster reboot time and live migration). But a separate virtual host per application is simply insane. After all, it is the operating system kernel which has been designed to provide a more-or-less "virtualized" view of the hardware for the applications. One OS image can more often than not run multiple applications without a problem.
Paul Vixie, president of the Internet Systems Consortium, described the fault bluntly: "It can be exploited by any greedy Estonian teenager with a $300 Linux machine." The specification, known as the Type 0 Routing Header (RH0), allows computers to tell IPv6 routers to send data by a specific route. Originally envisioned as a way to let mobile users retain a single IP for their devices... RH0 support allows attackers to amplify denial-of-service attacks on IPv6 infrastructure by a factor of at least 80.
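Until routers and hosts stop processing RH0 (the header was later deprecated outright by RFC 5095), the usual mitigation is simply to drop such packets at the border. A sketch using the ip6tables `rt` match; the chain choice is an assumption about your firewall layout, and the commands require root:

```shell
# Drop IPv6 packets carrying a Type 0 Routing Header, both those
# addressed to this box and those it would forward:
ip6tables -A INPUT   -m rt --rt-type 0 -j DROP
ip6tables -A FORWARD -m rt --rt-type 0 -j DROP
```

Dropping only type 0 leaves the Type 2 Routing Header used by Mobile IPv6 unaffected.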
May Euell Gibbons eat your only copy of the manual!