PolarSPARC

Introduction to Linux Virtualization using KVM


Bhaskar S 12/28/2018


Overview

A Hypervisor (also known as the Virtual Machine Manager) is a supervisory software layer that allows one to create and run one or more Virtual Machines (each running its own operating system). The host on which the hypervisor runs is referred to as the Host machine, while each of the virtual machine(s) is referred to as a Guest machine. The hypervisor gives each virtual machine the illusion that it is running on its own physical hardware by virtualizing the underlying physical hardware resources, such as the disk, network, video, etc.

There are two types of hypervisors:

Type-1 (bare-metal) hypervisor - runs directly on the physical hardware of the host machine

Type-2 (hosted) hypervisor - runs as a software layer on top of the host machine's operating system

KVM (short for Kernel-based Virtual Machine) is an open source Linux kernel module that works only on x86 hardware platforms containing the virtualization extensions (Intel VT or AMD-V) and makes the Linux operating system behave like a Type-1 hypervisor.

QEMU (short for Quick EMUlator) is a generic, open-source, standalone, software-based, full system emulator and virtualizer. As an emulator, it supports emulation of different target computing platforms such as arm, powerpc, sparc, etc. Full target system emulation is performed using an approach called Dynamic Binary Translation, which translates the target processor's opcodes into compatible host processor opcodes. As a virtualizer, it acts as a Type-2 hypervisor, running in the user space of the Linux operating system and performing virtual hardware emulation.

Now, a question may pop up in one's mind - what is the relationship between KVM and QEMU?

As indicated earlier, KVM is a kernel module and resides in the Linux kernel space. It provides an interface to create and run virtual machines. There needs to be some entity in the Linux user space that interacts with KVM for provisioning and managing virtual machines. That is where QEMU, which resides in the Linux user space, comes into the picture.
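As a concrete illustration of this interface (an aside; the exact device details may vary from system to system), the KVM kernel module exposes itself to user space as the character device /dev/kvm, which QEMU opens and drives through ioctl() calls. Its presence can be verified by executing the following command:

$ ls -l /dev/kvm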

Note that QEMU is a standalone piece of software and can provide virtualization on its own (without KVM). However, without KVM it would perform poorly and be slow due to the software emulation.

The following Figure-1 illustrates the high-level architecture view of KVM with QEMU:

QEMU-KVM
Figure-1

Pre-requisites

The installation, setup, and tests will be on an Ubuntu 18.04 (bionic) LTS based Linux desktop.

ATTENTION: AMD Ryzen Users

Ensure virtualization (SVM) is *Enabled* in the BIOS. On the ASRock AB350M Pro4 motherboard, for example, virtualization is disabled by default.

Open a Terminal window as we will be executing all the commands in that window.

Check to ensure that the desktop CPU has the virtualization extensions enabled by executing the following command:

$ egrep '(vmx|svm)' /proc/cpuinfo

The following would be a typical output on an Intel based platform:

Output.1

flags    : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts
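If the verbose flags output above is hard to read, the same check can be reduced to a count of the matching lines (a value greater than zero indicates the virtualization extensions are present):

$ egrep -c '(vmx|svm)' /proc/cpuinfo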

Next, check that the KVM kernel module is loaded by executing the following command:

$ lsmod | grep kvm

The following would be a typical output on an Intel based platform:

Output.2

kvm_intel             204800  0
kvm                   593920  1 kvm_intel
irqbypass              16384  1 kvm

On most modern desktops running Linux, the KVM kernel module should be loaded by default.
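In case the module is not loaded, it can be loaded manually using modprobe - kvm_intel on Intel platforms or kvm_amd on AMD platforms. For example, on an Intel based platform:

$ sudo modprobe kvm_intel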

Installation

In order to create and manage virtual machine(s), we need to install QEMU along with some additional software tools.

To install QEMU and the additional tools, execute the following command:

$ sudo apt-get install qemu-kvm libvirt-bin virt-manager bridge-utils

The qemu-kvm package installs all the necessary system binaries related to QEMU.

The libvirt-bin package installs the software pieces that include a virtualization API library, a daemon (libvirtd), and a command line utility called virsh. The primary goal of libvirt is to provide a unified way to manage the different hypervisors.

The virt-manager package installs a graphical interface tool to create and manage virtual machines on KVM. It is a Python based GUI tool that interacts with libvirt.

The bridge-utils package installs a utility that will be used to create and manage bridge network devices.
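Once the installation completes, one may optionally confirm that the tools are in place by querying their versions (the reported version numbers will vary with the distribution):

$ qemu-system-x86_64 --version
$ virsh --version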

Setup 1

Assume we are logged in as the user polarsparc with the home directory located at /home/polarsparc.

Create a directory called VMs under /home/polarsparc/Downloads by executing the following command:

$ mkdir -p /home/polarsparc/Downloads/VMs

Now, we need to enable the libvirt daemon libvirtd. To do that, execute the following command:

$ sudo systemctl enable libvirtd

The following would be a typical output:

Output.3

Synchronizing state of libvirtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable libvirtd

Next, we need to start the libvirt daemon. To do that, execute the following command:

$ sudo systemctl start libvirtd

To check the status of the libvirt daemon, execute the following command:

$ systemctl status libvirtd

The following would be a typical output:

Output.4

libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-12-27 17:04:58 EST; 3min 24s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 12495 (libvirtd)
    Tasks: 17 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           └─12495 /usr/sbin/libvirtd
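As an optional sanity check, verify that the virsh command line utility can communicate with the libvirt daemon by listing the currently defined virtual machines (the list will be empty at this point; sudo is used to query the system-level connection):

$ sudo virsh list --all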

Hands-on with KVM 1

For our tests, we will download the latest Ubuntu 18.04 desktop ISO image from www.ubuntu.com to the directory /home/polarsparc.

Before we can create a virtual machine in KVM, we need to allocate storage for the virtual guest machine. For our tests, the storage will be on a file (called the disk image) on the local disk of the host machine. The storage disk image can be in one of the following two formats:

raw - a plain, byte-for-byte image of the virtual disk. It is the simplest format and is created as a sparse file, so it only consumes space on the host as the guest writes data

qcow2 - the QEMU Copy-On-Write (version 2) format, which grows on demand and supports additional features such as snapshots and compression

For our tests, we will use the raw format.

To create a raw storage disk image of size 16G named vm-disk-1.raw under the directory /home/polarsparc/Downloads/VMs, execute the following command:

$ qemu-img create -f raw /home/polarsparc/Downloads/VMs/vm-disk-1.raw 16G

The following would be a typical output:

Output.5

Formatting '/home/polarsparc/Downloads/VMs/vm-disk-1.raw', fmt=raw size=17179869184
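Optionally, the newly created disk image can be inspected using the qemu-img info sub-command, which reports the image format, the virtual size, and the actual space consumed on the host (the raw image is created sparse, so the actual usage will initially be close to zero):

$ qemu-img info /home/polarsparc/Downloads/VMs/vm-disk-1.raw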

To create a new virtual guest machine in KVM using the storage disk image just created, execute the following command:

$ sudo qemu-system-x86_64 -cdrom /home/polarsparc/ubuntu-18.04.1-desktop-amd64.iso -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -name vm-ubuntu-1

Since we are creating a virtual guest machine based on the 64-bit x86 architecture, we are using the qemu-system-x86_64 command. There are different qemu-system-* commands for the different CPU architectures. The option -cdrom specifies the full path to the Ubuntu ISO. The option -drive specifies the format and full path to the storage disk image. The option -enable-kvm enables the KVM mode for virtualization. The option -m indicates the amount of memory allocated to the virtual guest machine, which is 2G in our case. Finally, the option -name specifies a name for the virtual guest machine.

The following would be a typical output:

Output.6

qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]

Ignore the warning message. This will launch a new window, prompting the user with various options for completing the Ubuntu installation in the virtual guest machine. Once the installation completes, close the window.

To launch the newly created Ubuntu based virtual guest machine, execute the following command:

$ sudo qemu-system-x86_64 -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -name vm-ubuntu-1

The following would be a typical output:

Output.7

qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]

Ignore the warning message. This will launch a new window and start the Ubuntu based virtual guest machine.

By default, a virtual guest machine is created with user-mode NAT networking. This means the virtual guest machine can communicate with the internet (the outside world), but other virtual guest machines and the host machine cannot initiate connections to it, and the guests cannot communicate with one another. Even the basic ping command will fail, since ICMP is not forwarded by the user-mode NAT networking, as illustrated in Figure-2 below:

Ping Fail
Figure-2

If we desire that the virtual guest machine(s) be able to communicate both internally and externally, we need to set up a bridge network on the host machine.

Shut down the virtual guest machine so that we can re-launch it later.

Setup 2

To check the current contents of the file /etc/network/interfaces, execute the following command:

$ cat /etc/network/interfaces

The following would be a typical output:

Output.8

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

Modify the contents of the file /etc/network/interfaces with the following content:

/etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# Bridge network
auto br0
iface br0 inet dhcp
        bridge_ports enp2s0
        bridge_fd 0
        bridge_stp off
        bridge_maxwait 0

Note that you will need sudo privileges to modify the contents of the system config file /etc/network/interfaces (for example: sudo nano /etc/network/interfaces).

We are setting up a new bridge network called br0 using the wired ethernet interface enp2s0 (the interface name will differ from system to system). One caveat - the bridge network will *NOT* work with a wireless interface.

We need to restart the networking stack since we added a new bridge network. To do that, execute the following command:

$ sudo /etc/init.d/networking restart

The following would be a typical output:

Output.9

[ ok ] Restarting networking (via systemctl): networking.service.

To check the status of the bridge network, execute the following command:

$ brctl show

The following would be a typical output:

Output.10

bridge name  bridge id   STP enabled interfaces
br0   8000.f0761cdb285f no        enp2s0
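Note that brctl is a legacy utility; on systems where it is not available, a similar view of the bridge and its attached interfaces can be obtained using the iproute2 bridge command:

$ bridge link show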

To display the IP link layer information, execute the following command:

$ ip link show

The following would be a typical output:

Output.11

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0:  mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000
    link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff
3: wlp3s0:  mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether 11:22:33:44:55:66 brd ff:ff:ff:ff:ff:ff
6: br0:  mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f0:e1:d2:c3:b4:a5 brd ff:ff:ff:ff:ff:ff

The bridge network has been successfully set up on the host machine.
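To additionally confirm that the bridge interface obtained an IP address via DHCP (it should now hold the address previously assigned to enp2s0), execute the following command:

$ ip addr show br0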

Hands-on with KVM 2

To re-launch the Ubuntu based virtual guest machine (vm-ubuntu-1) with the bridge network, execute the following command:

$ sudo qemu-system-x86_64 -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -name vm-ubuntu-1 -net bridge -net nic,model=virtio,macaddr=00:00:00:00:00:01

The following would be a typical output:

Output.12

failed to parse default acl file `/etc/qemu/bridge.conf'
qemu-system-x86_64: -net bridge: bridge helper failed

Huh? What happened here?

We intentionally skipped a step in Setup 2, after creating the bridge network, to illustrate this error.

We need to create the directory /etc/qemu. To do that, execute the following command:

$ sudo mkdir -p /etc/qemu

Next, we need to create the file /etc/qemu/bridge.conf with the contents allow br0. To do that, execute the following command:

$ echo 'allow br0' | sudo tee -a /etc/qemu/bridge.conf
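To verify the contents of the newly created file, execute the following command:

$ cat /etc/qemu/bridge.conf

It should display the single line allow br0.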

Now we should be able to launch the Ubuntu based virtual guest machine (vm-ubuntu-1) with bridge networking by executing the following command:

$ sudo qemu-system-x86_64 -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -name vm-ubuntu-1 -net bridge -net nic,model=virtio,macaddr=00:00:00:00:00:01

As before, ignore the warning message. This will launch a new window with the Ubuntu based virtual guest machine (vm-ubuntu-1) started.

Log in to the Ubuntu based virtual guest machine, launch a Terminal window, and verify that we are now able to ping both the internal and external networks, as illustrated in Figure-3 below:

Ping Okay
Figure-3

We will notice that the launched virtual guest machine window is of a lower resolution (probably 640x480). To fix this, we should specify the -vga option with the value virtio. In addition, we may want to specify the -daemonize option so that the launched virtual guest machine is detached from the Terminal from which it was launched.

The modified command-line would be as follows:

$ sudo qemu-system-x86_64 -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -name vm-ubuntu-1 -net bridge -net nic,model=virtio,macaddr=00:00:00:00:00:01 -vga virtio -daemonize
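More options can be appended in the same fashion. For example (a hedged variant, not used in this article), adding -smp 2 would allocate two virtual CPUs to the guest:

$ sudo qemu-system-x86_64 -drive format=raw,file=/home/polarsparc/Downloads/VMs/vm-disk-1.raw -enable-kvm -m 2G -smp 2 -name vm-ubuntu-1 -net bridge -net nic,model=virtio,macaddr=00:00:00:00:00:01 -vga virtio -daemonize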

As we can see, the command-line is getting very complicated and messy. This is where the GUI tool virt-manager comes in handy, hiding all the complexities behind the scenes.

Setup 3

Before we start leveraging virt-manager, we have one more setup step to complete.

Execute the following command:

$ sudo apt-get install gir1.2-spiceclientgtk-3.0

This package (and its dependencies) is needed for the GUI tool virt-manager to connect to any launched virtual guest machine(s).

CAUTION

Not installing the spiceclientgtk package will result in the following error from virt-manager:
Error connecting to graphical console: Error opening Spice console, SpiceClientGtk missing

Hands-on with KVM 3

Launch virt-manager by executing the following command:

$ virt-manager

This will launch the virtual machine management GUI tool as illustrated in Figure-4 below:

virt-manager
Figure-4

Select QEMU/KVM and then click on the computer icon (with a sparkle) to create a new virtual guest machine.

Select the first Local install media option as illustrated in Figure-5 below:

New Virtual Machine
Figure-5

Select the second option Use ISO image as well as the Automatically detect option, as illustrated in Figure-6 below:

ISO Image
Figure-6

By default, virt-manager looks for images in the directory /var/lib/libvirt/images. We need to browse to our local directory /home/polarsparc to select our Ubuntu ISO image. Click on Browse local to select the ISO image as illustrated in Figure-7 below:

Browse ISO
Figure-7

Select the Ubuntu ISO image and then click on Open as illustrated in Figure-8 below:

Select ISO
Figure-8

Having selected the Ubuntu ISO image, click on Forward to move to the next step as illustrated in Figure-9 below:

Forward
Figure-9

Change the value of Memory (RAM) to 2048, leave the value of CPUs at 1, and then click on Forward as illustrated in Figure-10 below:

Select Memory CPU
Figure-10

Check the Enable storage option, ensure Select or create custom storage is selected, choose the directory location /home/polarsparc/Downloads/VMs for the storage disk image, enter the storage disk image name vm-disk-3, and then click on Forward as illustrated in Figure-11 below:

Select Storage
Figure-11

Enter the virtual guest machine name of vm-ubuntu-3, select Bridge br0 under Network selection, and then click on Finish as illustrated in Figure-12 below:

Select Network
Figure-12

This will launch a new window, prompting the user with various options for completing the Ubuntu installation in the virtual guest machine. The Ubuntu installation will progress as illustrated in Figure-13 below:

OS Installation
Figure-13

Once the installation completes and the virtual guest machine reboots, one will be able to operate in the virtual guest machine as illustrated in Figure-14 below:

Guest Machine
Figure-14

Shutting down the virtual guest machine vm-ubuntu-3 closes the guest machine window, and virt-manager will reflect the updated status of vm-ubuntu-3 as illustrated in Figure-15 below:

VM Shutdown
Figure-15

As is obvious, working with virt-manager to create and manage virtual guest machines using KVM is a breeze.
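Since virt-manager provisions the guests through libvirt, the same virtual guest machines can also be managed from the command line using virsh. For example (assuming the system-level libvirt connection), to list, start, and gracefully shut down the guest vm-ubuntu-3:

$ sudo virsh list --all
$ sudo virsh start vm-ubuntu-3
$ sudo virsh shutdown vm-ubuntu-3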

References

KVM/Installation

QEMU version 2.11.1 User Documentation

libvirt Documentation



© PolarSPARC