Understanding Hardware-Assisted Virtualization
Chip manufacturers designed the x86 CPU architecture around the concept of a single operating system per server. Even multi-CPU systems were oriented toward boosting the performance and efficiency of that single-OS-per-server model. In the mid-1990s, however, this attitude began to change with the pioneering work of Kevin Lawton on his Bochs project. His deep dive into the x86 architecture and the open-standards release of Bochs allowed others to build on the concept of multiple operating systems per server. The Bochs project includes emulation of the Intel x86 CPU, common I/O devices, and a custom BIOS. From this project’s original research and design came what we now know and enjoy as contemporary x86 virtualization.

However, the software-only virtualization solutions used in these early x86 virtualization systems did not provide the level of performance we would expect today for production systems. As early as 1974, computer scientists Gerald Popek and Robert Goldberg realized that hardware-assisted virtualization was the key to leveraging a single hardware collection (server) as the basis for robust, business-capable virtual systems. They formalized a set of requirements for a computer architecture to support system virtualization in their article, Formal Requirements for Virtualizable Third Generation Architectures. This landmark paper defined three properties for a virtual machine monitor (VMM):
- Efficiency — All innocuous instructions are executed by the hardware directly, with no intervention at all on the part of the control program.
- Resource Control — It must be impossible for that arbitrary program to affect the system resources, i.e. memory, available to it; the allocator of the control program is to be invoked upon any attempt.
- Equivalence — Any program executing with a control program resident, with two possible exceptions, performs in a manner indistinguishable from the case when the control program did not exist, with whatever freedom of access to privileged instructions that the programmer had intended.
Until the mid-2000s, the x86 architecture did not meet these requirements. Today, both Intel and AMD provide chips that come closer to this ideal through hardware-assisted virtualization.

The need for hardware-assisted virtualization results from the limitations inherent in software-based virtualization. One of the principal problems with managing virtualization through software alone is that the x86 architecture uses the concept of privilege levels (or privilege rings) for machine instructions. The most privileged operations, which are reserved for the host operating system, have a privilege level of 0. A virtual system running on top of the host can’t access privilege level 0 directly, and therefore instructions passed down to the host must undergo a time-consuming conversion known as ring deprivileging. Although some ingenious techniques have been developed through the years for passing privileged instructions to the host, even in the best case, this approach incurs significant system overhead.

Paravirtualization emerged as a technique for minimizing this overhead by providing an API in the hypervisor that the guest can use for privileged operations, but paravirtualization adds complexity by requiring modifications to the guest system, either within the actual source code or on the fly at the binary level.

Hardware virtualization reduces the involvement of the host system in managing privilege and address-space translation issues. Intel’s VT-x virtualization extensions provide better performance and a fuller range of hardware-based functions without modification of the guest system or other complications. The VT-i extensions provide similar functionality for Intel Itanium systems, and AMD’s AMD-V technology brings a similar range of features to AMD chips. Support for hardware-assisted virtualization allows simpler and smaller hypervisor code and near-native performance for virtual machines.
Hardware-assisted virtualization provides three key performance enhancements over software-based solutions:
- Faster transfer of platform control between guest OSs and the VMM.
- Secure assignment of specific I/O devices to guest OSs.
- An optimized network for virtualization with adapter-based acceleration.
These enhancements result in lower CPU utilization, reduced system latency, and improved networking and I/O throughput. Although VT and AMD-V technologies have been around for several years now, they are not universally supported on all PC chips. Intel’s “VT Technology List” provides a summary of which Intel chips support VT virtualization (http://ark.intel.com/VTList.aspx). As of now, AMD does not appear to provide a similar list of virtualization-ready chips, but you can download a free utility to check whether your current system will support hardware virtualization. If you are shopping for a new x86 system, and think you might have a need to virtualize, check the specs to ensure that the processor supports Intel VT or AMD-V.
From an in-the-trenches perspective, how does all of this history and theory translate into the data center? The system administrator must verify that the server hardware fully supports virtualization. To fully support virtualization, the hardware must provide all four of the following capabilities:
- 64-bit Multi-core CPUs.
- Intel VT or AMD-V CPU virtualization extensions.
- No eXecute (NX)/eXecute Disable (XD).
- Full BIOS support for hardware virtualization.
You’ll find these options in the system’s BIOS, but unfortunately, the locations of the configurable options vary depending on the motherboard manufacturer. Prior to purchasing a system as a hypervisor candidate, check with the motherboard manufacturer for compliance.
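Before opening the BIOS, you can pre-check the first three capabilities from a running Linux system. The commands below are a sketch assuming a stock Ubuntu shell; the cpu-checker package and its kvm-ok tool are Ubuntu conventions, and kvm-ok can also report when the virtualization extensions are present but disabled in the BIOS:

```shell
# 64-bit support: the "lm" (long mode) flag must be present
grep -cw lm /proc/cpuinfo
# NX/XD support: the "nx" flag must be present
grep -cw nx /proc/cpuinfo
# kvm-ok (from the cpu-checker package) also reports when Intel VT or
# AMD-V is present but disabled in the BIOS:
sudo apt-get install cpu-checker
sudo kvm-ok
```

A non-zero count from each grep indicates the corresponding CPU capability; only kvm-ok (or the BIOS setup itself) can confirm that the firmware has virtualization enabled.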
Installing the Hypervisor
If your hardware meets all of the prerequisites for use as a hypervisor, you have to select a hypervisor type. For an all-Linux and non-proprietary solution, Xen, Proxmox (an OpenVZ and KVM combination), or KVM are your best choices for enterprise-level hypervisors. Because of its inclusion in the Linux kernel and its strong support in Ubuntu, CentOS, and Red Hat Enterprise Linux, KVM is the hypervisor of choice for this discussion. You have three choices for installing the KVM hypervisor as a virtual machine host.
- Install virtualization as part of the OS installation (Bare metal installation).
- Install KVM and supporting packages on an existing Linux system.
- Install a preconfigured hypervisor solution (Bare metal installation).
The preferred method is to use one of the bare metal installations to ensure that your host system is clean and single-purposed as a hypervisor. For an existing Ubuntu installation, use the following command to check for the existence of CPU virtualization extensions:
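A common way to run this check, assuming a stock Ubuntu shell, is to count the occurrences of the Intel (vmx) or AMD (svm) CPU flags in /proc/cpuinfo:

```shell
# Count CPUs advertising Intel VT (vmx) or AMD-V (svm); a result of 0
# means no hardware virtualization support is exposed to the kernel:
egrep -c '(vmx|svm)' /proc/cpuinfo
```

A result of 2, as on the system described below, indicates two CPUs that report the extensions.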
The processors on this system do support hardware virtualization. There are two entries, one for each CPU. Install KVM and supporting packages with the command:
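The exact package set varies by Ubuntu release; on releases of the era this article describes, a typical invocation (verify the names with apt-cache search on your system) was:

```shell
# Hypervisor, libvirt management layer, guest image builder, and
# bridged networking support:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
```

On newer Ubuntu releases, the libvirt-bin package has been split into libvirt-daemon-system and libvirt-clients.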
Or, depending on your Ubuntu version, you may optionally use:
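Older Ubuntu releases shipped the hypervisor under the kvm package name instead of qemu-kvm; on such a release, the equivalent command (an assumption about your version) would be:

```shell
# Older releases used the "kvm" metapackage:
sudo apt-get install kvm libvirt-bin
```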
Apt responds by listing the extra packages and the NEW packages that will be installed along with KVM; confirm the selection to proceed with the installation.
QEMU is part of the KVM virtualization package. The QEMU PC System emulator simulates the following peripherals:
- i440FX host PCI bridge and PIIX3 PCI to ISA bridge
- Cirrus CLGD 5446 PCI VGA card or dummy VGA card with Bochs VESA extensions (hardware level, including all non-standard modes).
- PS/2 mouse and keyboard
- 2 PCI IDE interfaces with hard disk and CD-ROM support
- Floppy disk
- PCI/ISA PCI network adapters
- Serial ports
- Creative SoundBlaster 16 sound card
- ENSONIQ AudioPCI ES1370 sound card
- AdLib (OPL2) – Yamaha YM3812 compatible chip
- PCI UHCI USB controller and a virtual USB hub.
- SMP is supported with up to 255 CPUs.
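To exercise this emulated hardware, you can create a disk image and boot an installer ISO under KVM. The file names below are illustrative, and the kvm wrapper script is an Ubuntu convenience; on other systems the same launch is done with qemu-system-x86_64 -enable-kvm:

```shell
# Create a 10 GB copy-on-write disk image for the guest:
qemu-img create -f qcow2 guest.qcow2 10G
# Boot the installer ISO with 1 GB of RAM, booting from CD-ROM first:
kvm -m 1024 -hda guest.qcow2 -cdrom installer.iso -boot d
```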
KVM continues to gain popularity in the world of Linux – so much so that it has become Red Hat’s and Ubuntu’s preferred virtualization solution. In contrast to Xen, setting up KVM involves just a couple of steps, and the guest operating systems can run without special patches.
The Proxmox distribution specializes in virtualization, letting you deploy and manage virtual servers with OpenVZ and KVM at the same time.
With the command-line tool virsh, a part of the libvirt library, you can query virtual machines to discover their state of health, launch or shut down virtual machines, and perform other tasks – all of which can be conveniently scripted.
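For example, a short virsh session might look like the following (the domain name vm1 is hypothetical):

```shell
# Show all guests, running or shut off:
virsh list --all
# Start a guest named "vm1", query its state, then shut it down cleanly:
virsh start vm1
virsh dominfo vm1
virsh shutdown vm1
```

Because each of these is an ordinary command with a meaningful exit status, they drop naturally into shell scripts and cron jobs.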
During the development of Fedora 20, the live snapshot function, which libvirt has long supported, was integrated into the graphical front end. If you prefer to avoid command-line acrobatics with virsh, you can now freeze your virtual KVM and Xen machines in Virtual Machine Manager at the press of a button.