Getting started with Xen Virtualization On CentOS 7.x

Welcome to Getting started with Xen Virtualization On CentOS 7.x. Xen is an open-source bare-metal hypervisor which allows you to run different operating systems in parallel on a single host machine. This type of hypervisor is normally referred to as a type 1 hypervisor in the virtualization world.

Xen is used as a basis for server virtualization, desktop virtualization, Infrastructure as a Service (IaaS), and embedded/hardware appliances. The ability of a physical host system to run multiple guest VMs can vastly improve the utilization of the underlying hardware.

Cutting-edge features of Xen hypervisor

Xen is operating system agnostic – the main control stack (Domain 0) can run on Linux, NetBSD, OpenSolaris, etc.
Driver isolation – Xen can run the main device drivers for a system inside a virtual machine; if a driver fails or crashes, the VM containing it can be rebooted without affecting the rest of the system.
Paravirtualization support – allows paravirtualized guests to run much faster than fully virtualized guests that rely on hardware virtualization extensions (HVM).
Small footprint and interface – the Xen hypervisor uses a microkernel design, resulting in a footprint of around 1 MB. This small memory footprint and limited interface to the guest make Xen more robust and secure than other hypervisors.

The Xen Project packages consist of:

  • Xen Project-enabled Linux kernel
  • Xen hypervisor itself
  • Modified version of QEMU – support for HVM
  • Set of userland tools

Xen Components

The Xen Project hypervisor runs directly on the hardware and is responsible for handling CPU, memory, and interrupts. It runs immediately after the bootloader exits. A domain (or guest) is a running instance of a virtual machine.

Below is a list of Xen Project Components:

  1. Xen Project hypervisor: It runs directly on the hardware. The hypervisor is responsible for managing memory, CPU, and interrupts. It has no knowledge of I/O functions such as networking and storage.
  2. The control domain (Domain 0): Domain 0 is a special domain that contains the drivers for all the devices in the host system, as well as the control stack used to manage the virtual machine lifecycle – creation, destruction, and configuration.
  3. Guest Domains/Virtual Machines: A guest is an operating system running in a virtualized environment. The Xen hypervisor supports two modes of virtualization:
    1. Paravirtualization (PV)
    2. Hardware-assisted or Full Virtualization (HVM)
      Both of the above guest types can be used at the same time on a single hypervisor. Paravirtualization techniques can also be used in an HVM guest (PV on HVM) – essentially creating a continuum between PV and HVM.
      The Guest VMs are called Unprivileged domain (or DomU) since they have no privileged access to hardware or I/O functionality. In other words, they are totally isolated from the hardware.
  4. Toolstack and Console: The toolstack is a control stack within Domain 0 that allows a user to manage virtual machine creation, configuration, and destruction. It exposes an interface that can be driven from a command-line console, from a graphical interface, or by a cloud orchestration stack such as OpenStack or CloudStack. The console is the interface to the outside world.
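
To make this concrete, once Xen is up the default xl toolstack in Domain 0 can be driven from the command line as sketched below (testvm and its config path are placeholder names):

sudo xl list                          # list running domains – Domain-0 is always present
sudo xl create /etc/xen/testvm.cfg    # start a guest from its config file
sudo xl console testvm                # attach to the guest console
sudo xl destroy testvm                # hard-stop the guest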

PV vs HVM
Paravirtualization (PV)

  • Efficient and lightweight virtualization technique, originally introduced by the Xen Project.
  • The hypervisor provides an API that is used by the guest VM's operating system.
  • The guest OS needs to be modified to use this API.
  • Does not require virtualization extensions from the host CPU.
  • PV guests and control domains require a PV-enabled kernel and PV drivers; this makes the guests aware of the hypervisor, letting them run efficiently without emulation or virtual emulated hardware (a quick way to check for such a kernel is shown after the next list).

Functionalities implemented by Paravirtualization include:

  • Interrupts and timers
  • Disk and network drivers
  • Emulated motherboard and legacy boot
  • Privileged instructions and page tables
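
Since PV guests and Domain 0 need a PV-enabled kernel, a quick sanity check is to inspect the kernel build configuration for Xen support (assuming the stock CentOS config file under /boot):

grep CONFIG_XEN /boot/config-$(uname -r)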

Hardware-assisted virtualization (HVM) – Full Virtualization

  • Uses virtualization extensions of the host CPU to handle guest requests.
  • Requires Intel VT or AMD-V hardware extensions (a quick check for these is shown after this list).
  • Fully virtualized guests do not require any kernel support, so operating systems such as Windows can run as Xen Project HVM guests.
  • The Xen Project software uses QEMU to emulate PC hardware, including the BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc.
  • The performance of the emulation is boosted using the hardware extensions.
  • In terms of performance, fully virtualized guests are usually slower than paravirtualized guests because of the required emulation.
  • Note that it is possible to use PV drivers for I/O to speed up HVM guests.
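
Before relying on HVM, you can confirm that the host CPU advertises these extensions (vmx for Intel VT, svm for AMD-V); a count greater than zero means HVM guests are possible:

grep -cE 'vmx|svm' /proc/cpuinfo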

PVHVM – PV-on-HVM drivers

  • PVHVM combines the best elements of HVM and PV.
  • Allows hardware-virtualized guests to use PV disk and network I/O drivers.
  • Requires no modifications to the guest OS.
  • HVM guests use optimized PV drivers to boost performance – bypassing the emulation of disk and network I/O yields better performance on HVM systems.
  • Gives optimal performance on guest operating systems such as Windows.
  • PVHVM drivers are only required for HVM (fully virtualized) guest VMs.
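
For illustration only, an xl guest configuration for such a guest might contain the (hypothetical, abridged) lines below; xen_platform_pci exposes the Xen platform PCI device that in-guest PV drivers attach to, and is on by default for HVM guests:

builder = "hvm"                 # fully virtualized guest
name = "winguest"               # hypothetical guest name
xen_platform_pci = 1            # expose the Xen platform device for PV drivers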

Installing Xen on CentOS 7.x

Follow these steps to install the Xen hypervisor environment:

1. Enable CentOS Xen Repository

sudo yum -y install centos-release-xen

2. Update the kernel and install Xen:

sudo yum -y update kernel && sudo yum -y install xen
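
You can confirm that the Xen package landed before touching the boot configuration:

rpm -q xen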

3. Configure GRUB to start Xen Project

Because the hypervisor starts before your operating system, we need to change how the system boot process is set up:

sudo vi /etc/default/grub

Set the Domain 0 memory limits to match the memory you want to allocate. In the example below, dom0_mem=2048M,max:4096M gives Domain 0 an initial 2048 MB and allows it to balloon up to at most 4096 MB:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:4096M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all"

4. Run the grub-bootxen.sh script to make sure GRUB's /boot/grub2/grub.cfg is updated:

bash `which grub-bootxen.sh`

Confirm the values have been modified:

grep dom0_mem /boot/grub2/grub.cfg

5. Reboot your server

sudo systemctl reboot

6. Once you reboot, verify that the new kernel is running with:

# uname -r
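3.18.21-17.el7.x86_64

The exact release will differ per install, but it should be the Xen-enabled kernel pulled in from the CentOS Xen repository (it matches the release field of the xl info output below).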

7. Verify that Xen is running using:

# xl info
host : xen.example.com
release : 3.18.21-17.el7.x86_64
machine : x86_64
nr_cpus : 6
max_cpu_id : 5
nr_nodes : 1
cores_per_socket : 1
threads_per_core : 1
...
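
You can also list the running domains at this point; only Domain 0 should appear, and the memory, VCPU, and time figures below are just from this demo host:

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     6     r-----      41.2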

Deploy first VM

At this point, you should be ready to bring up your first VM. In this demo, I’ll use virt-install to deploy a VM on Xen.

sudo yum --enablerepo=centos-virt-xen -y install libvirt libvirt-daemon-xen virt-install
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
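
Before creating a guest, you can confirm that libvirt can reach the Xen hypervisor through the xen:/// URI used below:

sudo virsh --connect xen:/// version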

The host OS installed under Xen is known as Dom0. Virtual machines (VMs) running on Xen are known as DomUs.

virt-install -d \
--connect xen:/// \
--name testvm \
--os-type linux \
--os-variant rhel7 \
--vcpus=1 \
--paravirt \
--ram 1024 \
--disk /var/lib/libvirt/images/testvm.img,size=10 \
--nographics -l "http://192.168.122.1/centos/7.2/os/x86_64" \
--extra-args="text console=com1 utf8 console=hvc0"
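
Once the installer finishes, the new DomU can be managed through libvirt as well; for example (testvm being the name used above):

sudo virsh --connect xen:/// list --all
sudo virsh --connect xen:/// console testvm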

If you would like to manage DomU VMs using a graphical application, consider installing virt-manager:

sudo yum -y install virt-manager
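
virt-manager can then be pointed at the local Xen hypervisor with the same connection URI:

virt-manager --connect xen:///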

Src:
https://computingforgeeks.com/xen-virtualization-in-linux/
