<quote><blockquote><div><p>"The company was <b>set up to make a fantastic Linux distribution</b> and other tools around it and get it out there and get people using it. That was the focus." That's now changing at Canonical, as the emphasis shifts to generating revenue.</p></div></blockquote><p>We're fine with moving priority to the new objective as soon as you've completed the former. ;-)
Ubuntu 10.04 <a href="http://it.slashdot.org/story/10/04/21/2021247/Ubuntu-LTS-Experiences-Xorg-Memory-Leak">presumably</a> is not it just yet.</p></quote>
eapache writes: "My desktop system is currently set up as a dual-boot between Linux and Windows. I was booted into Linux the other day when I realized that I wanted to do something quick in Windows, then continue my work in Linux. While shutting down Linux, booting Windows, doing the task, shutting down Windows, and rebooting Linux, I decided that the amount of work the computer was doing and the amount of time it took for all of this rebooting was a bit absurd. I pondered solutions, and the one that immediately came to mind was virtualization: if I had both operating systems as virtual machine guests, I could then have them both running and switch back and forth easily.
It isn't as simple as it sounds, though. Firstly, both OSes are already installed natively, and as far as I know there isn't an easy way to insert a useful hypervisor underneath an already-installed OS. Secondly, I need practically native performance on both OSes, which rules out reinstalling Linux in VirtualBox on Windows. I gave the problem a little more thought, and came up with the following idea for a bare-metal, kernel-level hypervisor.
In my hypothetical hypervisor, there is no host 'above' any of the guests. Instead, the entire hypervisor runs as a kernel-level process inside each of the guests. One would install all of the OSes natively, then install the hypervisor into each of them individually. When the computer was turned on and a boot menu such as GRUB was used to choose an OS, the chosen OS would load and run 100% natively. The only overhead would be that of an extra kernel module, which wouldn't do anything 99% of the time.
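For the boot-menu half of this, a stock GRUB 2 setup already covers the multi-boot part; something like the following sketch would do (partition numbers, device names, and kernel paths here are assumptions, not taken from the submitter's system):

```
menuentry "Linux (hypervisor module installed)" {
    set root=(hd0,1)
    linux /boot/vmlinuz root=/dev/sda1 ro
    initrd /boot/initrd.img
}

menuentry "Windows (hypervisor driver installed)" {
    set root=(hd0,2)
    chainloader +1
}
```

Either entry boots its OS natively, exactly as today; the hypothetical hypervisor component would only sit idle inside each installed OS until invoked.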
The interesting part comes when the user decides to start the second OS in parallel with the first. This command would be issued to the hypervisor in the first OS, which would then put the rest of the first OS into a pseudo-suspended state via some complicated power-management hooks. The first OS is now lying entirely dormant in memory, and control has passed to a temporary process, much more like a traditional bare-metal hypervisor, spawned by the kernel module in the first OS. This process would then act as a boot loader and load the second OS into a sufficiently large block of free memory, passing a kernel option to the hypervisor in the second OS telling it about the first. The extra process outside both OSes is no longer needed and can be destroyed, and control passes to the second OS, running entirely natively. All it knows is that a significant chunk of memory (the first OS) is permanently in use. When a user wants to switch between two already-loaded OSes, the kernel module puts the currently active OS into a pseudo-suspended state and passes control to the kernel module of the sleeping OS, which then wakes up and begins to operate normally.
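The handoff protocol above boils down to a small state machine: at most one OS is ever active, and every transition suspends the current owner before waking or loading the next. A conceptual sketch in Python (this is a model of the proposed transitions only, not kernel code; all class and method names are invented for illustration):

```python
from enum import Enum

class GuestState(Enum):
    NOT_LOADED = "not loaded"   # OS not yet booted into RAM
    SUSPENDED = "suspended"     # dormant in memory, pseudo-suspended
    ACTIVE = "active"           # running natively on the bare metal

class PeerHypervisor:
    """Models the proposed scheme: peer OSes, at most one ACTIVE at a time."""

    def __init__(self, first_os):
        # The OS chosen at the boot menu boots natively and starts active.
        self.state = {first_os: GuestState.ACTIVE}
        self.active = first_os

    def start_parallel(self, new_os):
        """Pseudo-suspend the active OS, then load a second OS alongside it."""
        self.state[self.active] = GuestState.SUSPENDED
        # In the real design, a short-lived loader process (spawned by the
        # kernel module) would place new_os in free memory and hand over.
        self.state[new_os] = GuestState.ACTIVE
        self.active = new_os

    def switch_to(self, target_os):
        """Swap control between two already-loaded OSes."""
        if self.state.get(target_os) != GuestState.SUSPENDED:
            raise ValueError(f"{target_os} is not loaded and suspended")
        self.state[self.active] = GuestState.SUSPENDED
        self.state[target_os] = GuestState.ACTIVE
        self.active = target_os
```

Usage mirrors the scenario in the submission: boot Linux, start Windows in parallel (Linux goes dormant), then switch back, with each transition suspending exactly one OS and waking exactly one other.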
The main disadvantage of this technique is that only one OS may be active at a time. However, with sufficient RAM it allows switching between both OSes in seconds, and lets them run with absolutely nothing between them and the bare metal. Additionally, it allows the conversion of a normal, non-virtualized multi-boot system into one where every OS is a guest. Keeping all this in mind, my questions to the Slashdot crowd are "How feasible is this?" and "Has it already been done?" There is nothing here that I believe is technically impossible, and a quick Google search turns up nothing related, but I'm sure there is something I'm missing."