Xen has been ported to ARM64 as well, in addition to the existing Xen port to ARM32 / ARM Cortex-A15!
LarsKurth writes: "Xen 4.2 has just been released: the culmination of 18 months, almost 2,900 commits, and almost 300K lines of code of development effort by 124 individuals from 43 organizations (including AWS, AMD, Devise, Fujitsu, and many others). Topline features are: a new toolstack, support for larger systems, improved security (developed by the NSA), more documentation, and better usability."
And just to confuse people even more, there's the concept of Xen "driver domains", which disaggregates device drivers from dom0 into multiple VMs. The driver-domain model lets you run the device driver for a piece of hardware in a VM: use Xen PCI passthrough to assign a PCI device to some VM, run the driver for that hardware in the VM, and also run the Xen backend driver in the VM, so you can provide a virtual disk or virtual network from the "driver domain" instead of from dom0. Btw, this Xen "driver domains" concept allows, for example, running Solaris+ZFS in a Xen VM and providing ZFS-backed virtual disks for the other VMs.
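As a minimal sketch, a storage driver domain like the one described above could be defined with an xl/xm-style guest config (these configs use Python assignment syntax). The PCI address, kernel path, and all names below are placeholders, not taken from any real setup:

```python
# Hypothetical xl/xm-style guest config for a storage "driver domain".
# PCI BDF, paths, and names are placeholders.
name   = "storage-driver-domain"
memory = 1024
vcpus  = 2
kernel = "/boot/vmlinuz-domU"            # PV kernel for the driver domain

# Hand the disk controller to this VM via Xen PCI passthrough;
# the native driver for it then runs here instead of in dom0.
pci    = ['0000:03:00.0']

# The driver domain's own root disk is still served from dom0.
disk   = ['phy:/dev/vg0/driverdom-root,xvda,w']
vif    = ['bridge=xenbr0']
```

With a config like this, the VM owns the passed-through controller and can run the Xen block backend itself, exporting virtual disks to other guests without dom0 in the data path.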
XVM (Solaris Xen dom0) definitely works in OpenSolaris build 134, released in March 2010. If I remember correctly, people have been using it on later builds as well; feel free to prove me wrong if that's not the case. Actually, it seems XVM is still included in the Oracle Solaris 11 Express 2010.11 release. It could be removed from the final upcoming Solaris 11 release, but it's still there in 2010.11. The latest XVM hypervisor version is based on Xen 3.4.2, and afaik it has some not-upstreamed patches to the hypervisor. What I wrote in my earlier post was: someone should upstream the not-yet-upstreamed patches from the XVM *hypervisor*, and at *that* point (after upstreaming) you can use the upstream xen.org *hypervisor*, be it 4.1 or whatever version. A separate issue is the Solaris kernel patches needed to run as Xen dom0. Some developer should obviously maintain them, and to begin with make sure they work with the latest Illumos/OpenIndiana kernel. Like I said, if there are developers interested in this, it is possible to make it work in the latest builds too. It did work one year ago.
The OpenSolaris kernel source code from autumn 2010 (when it was still available from Oracle) does contain Xen dom0 code, and some people are using it. I think OpenIndiana ships the XVM/Xen code. There are some known bugs in it, so someone interested in a *Solaris (OpenSolaris/OpenIndiana/Illumos) dom0 would have to spend some time fixing the bugs and making sure any not-yet-upstreamed patches (if any) from the Solaris XVM hypervisor are applied to the upstream Xen hypervisor; at that point you can just use the upstream xen.org hypervisor with the *Solaris kernel as dom0.
Remember the Xen hypervisor (xen.gz) is a separate piece of code; kernel (Linux, BSD, Solaris) support for Xen dom0 is another thing. So if someone wants, they could run a *Solaris dom0 kernel on Xen 4.1 even if Oracle does not ship it. It might require some work to get the *Solaris dom0 kernel working properly with a newer Xen version. Oracle itself has focused on Oracle VM, their Xen-based virtualization product, which uses Linux as dom0. You can run Solaris VMs on Oracle VM/Xen, of course.
Most users end up using "upstream kernels" when their distros release new versions with new kernels. Distros like Ubuntu and Fedora release twice a year, so they don't want to do custom porting of patches; they want all the features to be in the upstream Linux kernel. Linux distributors like Red Hat have an "upstream first" policy, which means features need to be in the upstream Linux kernel before they can be included (backported) into products. So now that Xen dom0 support is in the upstream Linux kernel, most distros can actually ship it easily!
Xen is a secure bare-metal hypervisor (xen.gz), around 2 MB in size, and it's the first thing that boots on your computer from GRUB. After the Xen hypervisor has started, it boots the "management console" VM, called "Xen dom0", which is most often Linux, but can also be BSD or Solaris. Upstream Linux kernel v3.0 can run as Xen dom0 without additional patches. Xen dom0 has some special privileges, like direct access to hardware, so you can run device drivers in dom0 (i.e. use native Linux kernel device drivers for disk, network, etc.), and dom0 then provides virtual networks and virtual disks for the other VMs through the Xen hypervisor. Xen also has the concept of "driver domains", where you can dedicate a piece of hardware to some VM (with Xen PCI passthrough) and run the driver for that hardware in the VM instead of in dom0, adding further separation and security to the system. Xen "driver domain" VMs can provide virtual network and virtual disk backends for other VMs.

KVM, on the other hand, is a loadable module for the Linux kernel which turns the Linux kernel into a hypervisor. The difference is that in KVM all the processes (sshd, apache, etc.) running on the host Linux and the VMs share the same memory address space, so KVM has less separation between the host and the VMs, by design. VMs in KVM are processes on the host Linux, not "true" separated VMs.
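The boot order described above (GRUB loads xen.gz first, which then starts the dom0 kernel) looks roughly like this in a GRUB2 config; the file paths, kernel version, and options are placeholders for illustration:

```
# Hypothetical GRUB2 menuentry: the Xen hypervisor (xen.gz) is the
# multiboot kernel, and the dom0 kernel + initrd are loaded as modules.
menuentry "Xen hypervisor with Linux dom0" {
    multiboot /boot/xen.gz dom0_mem=1024M
    module    /boot/vmlinuz-3.0 console=hvc0 root=/dev/sda1 ro
    module    /boot/initrd-3.0.img
}
```

The key point is that the dom0 kernel is not booted directly by GRUB: it is handed off as a module to xen.gz, which starts it as the first (privileged) VM.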
Remember the Xen hypervisor is open source (GPL), just like the Linux kernel, so all the Oracle and Citrix code in the hypervisor and in the kernel is open source. Citrix uses XenServer as a platform to run their other products, and obviously Xen is the best platform to run those Citrix "Windows products". Novell ships Xen in SUSE Linux Enterprise (SLES) 10 and 11. Debian ships Xen in their current version. I heard Ubuntu is going to add Xen back now that the kernel components are included in upstream Linux. Fedora ships Xen as well. Not to mention the majority of the cloud (Amazon EC2, Rackspace, etc.) is running Xen.
Actually the designs are pretty different. Take a look at these slides: http://www.slideshare.net/xen_com_mgr/why-xen-slides . That should explain the differences. Xen is also multi-OS, i.e. you can use BSD/Solaris in addition to Linux as a Xen host, while KVM is Linux-only as a host.
At the Xen Hackathon event in March there was some discussion/updates about Xen+FreeBSD. I can't remember the details, but you might want to ask on the xen-devel mailing list.
Xen has features that KVM doesn't have (by design), for example Xen "stubdomains" and "driver domains", full memory address space separation between domains, etc. And of course it's good to have multiple open source virtualization platforms; competition is a good thing!
Actually you have been able to run newer kernels on EC2 for a long time! Xen domU (guest VM) support has been in the upstream Linux kernel since version 2.6.24. Now the upcoming Linux kernel 3.0 adds Xen dom0 support, which is the *host* support, i.e. Linux kernel 3.0 can run on the Xen hypervisor (xen.gz) as the "management console", providing the various backends (virtual networks, virtual disks) that allow you to launch Xen VMs.
It's mentioned on the Xen 4.0 wiki page since that's when the feature was developed: http://wiki.xen.org/xenwiki/Xen4.0 . I think it was originally submitted by Novell developers, who wrote it for SLES11. Later Jeremy (the pvops kernel maintainer) submitted it for upstream kernel inclusion, and it got included in upstream Linux in 2.6.36. It's also included in the xen.git xen/stable-2.6.32.x branch. I've tested it myself, and all you need to do is resize the LVM volume in dom0, and domU will detect the size change.
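As a sketch, the dom0 side of the resize described above is just growing the backing LVM volume (the volume group and LV names here are placeholders):

```
# In dom0: grow the LVM volume backing the guest's virtual disk.
lvextend -L +10G /dev/vg0/domU-disk

# With the online-resize support described above, the guest's
# blkfront driver notices the new size; no guest reboot needed.
```

The guest still needs to grow its own filesystem on top of the enlarged virtual disk, as with any block-device resize.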
If you're using Xen 4.1.0, then 2.6.39-rc1 or newer as the dom0 kernel will allow you to run VMs, since 2.6.39 includes the xen-netback backend driver, and Xen 4.1 includes a userspace/qemu-based blkback backend implementation. There's an ongoing process to upstream the remaining bits; based on earlier experiences, upstreaming everything at once clearly doesn't work, so it has to be done in small incremental steps.
Meanwhile, there is a 2.6.32-based long-term maintained tree with full dom0 support available in xen.git, in the xen/stable-2.6.32.x branch.