Xen: How Does It Work?

Typically, operating systems running in paravirtual mode enjoy better performance than those requiring full virtualization mode. The following graphic depicts a virtual machine host with four virtual machines. The Xen hypervisor is shown as running directly on the physical hardware platform.

Note that the controlling domain is itself just a virtual machine, although it has several additional management tasks compared to the other virtual machines. The two virtual machines shown in the middle are running paravirtualized operating systems.

The virtual machine on the right shows a fully virtual machine running an unmodified operating system, such as Windows Server or Windows XP. After you install the virtualization components and reboot the computer, the GRUB boot loader menu displays a Xen menu option. The terminals of VM Guest systems are displayed in their own windows inside the controlling Domain0 when opened. Note that PV stands for paravirtualization, while FV stands for full virtualization. Because the vast majority of our customers have already moved to 64-bit Xen hypervisors, we decided to focus development and testing efforts on supporting 64-bit Xen hypervisors only.

This means that only 64-bit x86-based VM hosts are supported. This does not affect VM guests—both 32-bit and 64-bit flavors are supported.

Please consider that the VM host server needs at least MB of memory. DomU and Dom0 are described in a later article in this series. The guest OS is aware that it is running on Xen and calls the paravirtualized drivers offered by the hypervisor, but some operations still require the QEMU emulator; specifically, the OS is still booted using hvmloader and firmware that require emulator support. This mode provides the best performance currently possible and calls for fewer resources in the guest OS than pure PV.

The first version of this innovation, PVHv1, did not simplify the operating system; later Xen 4.x releases replaced it with the reworked PVHv2. Figure 2, from the Xen Project website, shows how each aspect of virtualization is handled by these different technologies (Figure 2: Differences between Xen Project technologies). Some sites want to run one VM inside another, in order to test a variety of hypervisors.

Running a hypervisor inside of a virtual machine is called nested virtualization. The main hypervisor that runs on the real hardware is called a level 0 or L0; the hypervisor that runs as a guest on L0 is called level 1 or L1, and finally, a guest that runs on the L1 hypervisor is called a level 2 or L2. This technology has been supported in Xen since version 3.
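In xl terms, nested virtualization is enabled per-guest in the domain configuration. The following is only a sketch for an HVM guest intended to run its own (L1) hypervisor; apart from the standard xl.cfg nestedhvm option, all names and values are illustrative assumptions:

```
type = "hvm"
name = "l1-hypervisor"      # this guest will itself act as an L1 hypervisor
memory = 4096
vcpus = 4
nestedhvm = 1               # expose hardware virtualization extensions to the guest
```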

As explained, Xen is a type-1 hypervisor that runs directly on the hardware and manages all its resources for the guests, including CPU, memory, drivers, timers, and interrupts. After the bootloader, Xen is the first program that runs; Xen then launches each guest. Just as operating systems commonly separate the root user or superuser from other users and give the root user special powers and privileges, Xen distinguishes between the host and the guests by defining domains; each domain has access only to the resources and activities allowed to it.

Certainly all domUs should be shut down first, following the sort order of the rc.d scripts. However, the dom0 sets up state with xenstored and is not notified when xenstored exits, so that state is not recreated when a new xenstored starts. Until there is a mechanism to make this work, one should not expect to be able to restart xenstored (and thus xencommons). There is currently no reason to expect that this will be fixed any time soon.

There are at least two additional differences when running NetBSD as a dom0 kernel compared to running on bare hardware. In NetBSD-current, there is only one set of modules. While this is roughly agreed to be in large part a bug, users should be aware of it and can simply add missing config items if desired.

Updating Xen in a dom0 consists of updating the xenkernel and xentools packages, along with copying the new Xen kernel image into place. If updating within a Xen minor version, the point is that the xentools programs will be replaced, and you will be using "xl" from the new installation to talk to the older programs which are still running.

Problems from this update path should be reported. For added safety, shut down all domUs before updating, to remove the need for the new xl to talk to the old xenstored. Note that Xen does not guarantee stability of internal ABIs.
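The cautious pre-update sequence might look like the following (the domain names are illustrative):

```shell
# shut down every domU cleanly before touching the Xen packages;
# -w waits until the shutdown has actually completed
xl shutdown -w fileserver
xl shutdown -w webserver

# confirm only Domain-0 remains
xl list
```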

When updating across Xen minor versions, internal interfaces may change; therefore, 'make replace' of xentools on a dom0 with running domUs is not recommended. Shutting down all domUs before replacing xentools is likely sufficient; single-user mode is another option. Updating the dom0's NetBSD system itself is just like updating NetBSD on bare hardware, assuming the new version supports the version of Xen you are running.

Note that one should update both the non-Xen kernel typically used for rescue purposes and the dom0 kernel used with Xen. This section describes general concepts about domUs; it does not address specific domU operating systems or how to install them. The domU is provided with disk and network by the dom0, mediated by Xen, and configured in the dom0. Entropy in domUs can be an issue, because the physical disks and network are on the dom0.

The following is an example minimal domain configuration file. The domU serves as a network file server. The domain will have the name given in the name setting.

The vif line causes a network interface to be provided, with a specific MAC address (do not reuse MAC addresses!). Two disks are provided, and both are writable; the bits are stored in files, and Xen attaches them to a vnd(4) device in the dom0 on domain creation. The system treats xbd0 as the boot device without needing explicit configuration. There is no type line; that implicitly defines a PV domU. Otherwise, one sets type to the lower-case version of the domU type in the table above; see later sections.
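A minimal PV domU configuration of the kind described here might look like the following sketch; the domain name, kernel path, image file locations, and MAC address are illustrative assumptions:

```
name = "fileserver"
kernel = "/netbsd-DOMU"           # domU kernel stored in the dom0 filesystem
memory = 1024                     # MB of RAM for the domU
vcpus = 2
vif = [ "mac=00:16:3e:00:00:01,bridge=bridge0" ]   # use a unique MAC per domU
disk = [
    "file:/home/xen/fileserver-root.img,0x0,w",    # first disk, file-backed, writable
    "file:/home/xen/fileserver-data.img,0x1,w",    # second disk, writable
]
```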

Note that "xl create" takes the name of a config file, while other commands take the name of a domain. Shutting down a domain is equivalent to pushing the power button; a NetBSD domU will receive a power-press event and do a clean shutdown. Shutting down the dom0 will trigger controlled shutdowns of all configured domUs. A domain is provided with some number of vcpus; any domain can have up to the number of CPUs seen by the hypervisor.
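A typical lifecycle, assuming a config file named fileserver that defines a domain of the same name (both names illustrative), might look like this:

```shell
# "xl create" takes the config file name
xl create /usr/pkg/etc/xen/fileserver

# list running domains with their vcpu and memory usage
xl list

# subsequent commands take the domain name, not the file name
xl console fileserver     # attach to the domU console
xl shutdown fileserver    # virtual power button; the domU shuts down cleanly
```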

It is normal to overcommit vcpus; a 4-core machine might well provide 4 vcpus to each domU. One might also configure fewer vcpus for a domU. In the straightforward case, the sum of the memory allocated to the dom0 and all domUs must be less than the available memory.

Xen provides a balloon driver, which can be used to let domains use more memory temporarily. For disk backends, common methods are "file:" for a file-backed vnd and "phy:" for something that is already a device, such as an LVM logical volume. The second element of a disk specification is an artifact of how virtual disks are passed to Linux, and a source of confusion with NetBSD Xen usage. Linux domUs are given a device name to associate with the disk, and values like "hda1" or "sda1" are common.

However, xl demands a second argument. With NetBSD as both dom0 and domU, using values of 0x0 for the first disk and 0x1 for the second works fine and avoids this issue.

It is quite possible to have virtualization features in the chipset that cannot be enabled because the motherboard isn't designed for them. Having said all of that, sometimes the easiest or only way to see what is supported is to check the BIOS. Hardware virtualization support is not strictly required, but it is highly recommended so that you have the widest number of options for virtualization modes once you get underway.

Paravirtualization will work fine without it, though. It is worthwhile digging around in the BIOS on this a bit; you may even find one extension is enabled by default but the other is not! Consult your motherboard documentation for more assistance in enabling virtualization extensions on your system. Burn the ISO to disc using your computer's standard utilities: Linux has wodim, among others, or use the built-in ISO burning feature in Windows.
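With wodim, burning an image might look like this; the device path and ISO filename are assumptions, so adjust them for your drive and download:

```shell
# burn the installer ISO to the disc in /dev/sr0 (-v shows progress)
wodim -v dev=/dev/sr0 installer.iso
```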

Debian is a simple, stable, and well-supported Linux distribution. It has included Xen Project hypervisor support since Debian 3. Debian uses the Apt package management system, which is both powerful and simple to use. Installing a package is as simple as the following example:
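For instance, installing a package and its dependencies (the package name here is only an illustration):

```shell
apt-get update                  # refresh the package index
apt-get install openssh-server  # install a package plus everything it depends on
```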

Many popular distributions are based on Debian and also use the Apt package manager; if you have used Ubuntu, Linux Mint, or Damn Small Linux, you will feel right at home. Install the system: the Debian installer is very straightforward. Follow the prompts until you reach the disk partitioning section.

Format it as ext3. Create another partition of approximately 1.5 times your RAM for swap. When you reach the package selection stage, install only the base system. If you want to set up a graphical desktop environment in dom0, that's not a problem, but you may want to wait until after you've completed this guide to avoid complicating things.

You can find more details of the Debian installation process in the Debian documentation. If you've got any hardware you're not sure open-source drivers are available for, you may want to install non-free firmware files via:
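On Debian, the usual way to pull in binary-only firmware blobs is the firmware-linux-nonfree package; this assumes the non-free section is enabled in your Apt sources:

```shell
# install common binary-only firmware (requires non-free in sources.list)
apt-get install firmware-linux-nonfree
```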

We've still got a few more steps to complete before we're ready to launch a domU, but let's install the Xen Project software now and use it to check the BIOS settings. All of this can be installed via an Apt meta-package called xen-linux-system. A meta-package is basically a way of installing a group of packages automatically. Apt will of course resolve all dependencies and bring in all the extra libraries we need.
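As root, the install might look like the following; note that on some Debian releases the meta-package carries an architecture suffix such as -amd64:

```shell
# install the hypervisor, a Xen-enabled dom0 kernel, and the xl toolstack
apt-get install xen-linux-system
```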

Now we have a Xen Project hypervisor, a Xen Project kernel and the userland tools installed. When you next boot the system, the boot menu should include entries for starting Debian with the Xen hypervisor. One of them should be highlighted, to start Xen by default. Do that now, logging in as root again.

Next, let's check whether virtualization is enabled in the BIOS. There are a few ways to do that. The most comprehensive is to review the Xen section of the dmesg log created during the boot process; this will be your first use of xl, the very versatile Xen tool, which we will come back to shortly to create and manage domUs. If nothing comes back and you think it should, you may wish to look through the CPU flags yourself:
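The usual checks look like this; the output varies by machine, and an empty result suggests the extensions are absent or disabled:

```shell
# ask the hypervisor what it detected at boot; VMX (Intel) or SVM (AMD)
# lines indicate hardware virtualization support was found
xl dmesg | grep -i "vmx\|svm\|hvm"

# or inspect the CPU feature flags directly from the dom0
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```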

If the virtualization extensions don't appear, take a closer look at the BIOS settings. A few round-trips through the BIOS are often required to get all the bits working right.

LVM (Logical Volume Management) is a technology that allows Linux to manage block devices in a more abstract manner.
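A typical setup for backing domU disks might look like the following sketch, assuming a spare partition at /dev/sda3 (the device and volume names are illustrative):

```shell
# mark the partition as an LVM physical volume
pvcreate /dev/sda3

# collect it into a volume group named "vg0"
vgcreate vg0 /dev/sda3

# carve out a 10 GB logical volume to serve as a guest disk
lvcreate -n guest-disk -L 10G vg0

# the result can then be handed to a guest as  phy:/dev/vg0/guest-disk
```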


