3.1.4 Dealing with Shared Resources in Virtualized Data Centers

This section provides some background on how virtualization environments share the resource capacity of a server between different VMs. This knowledge is an essential prerequisite for understanding how shared resources are modeled in the resource management concept.

Mainly three types of resources are shared between VMs. They are discussed one by one in the following.

CPU Time

Modern servers contain several CPUs, and each CPU typically has more than one core.

All cores of all CPUs form the CPU capacity of a server. Typically, one core of one CPU is reserved for the VMM itself so that the overall CPU capacity provided to VMs is reduced by one core.

VMs consume CPU time. CPU time is a measure that indicates how long a certain VM uses or requires one core of one CPU. In general, CPU time is not used as an absolute value but is related to a fixed measuring interval and, in most cases, expressed in percent³.

³ Sometimes MHz is used as the unit for CPU time. This unit can be regarded as CPU cycles per second. Knowing the duration of one CPU cycle, one can translate this unit into percent as well.
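As a minimal illustration of these two notations, the following sketch converts a measured busy time and a demand given in MHz into percent of one core. It is only a sketch of the arithmetic described above; the interval length and clock rate used below are invented example values.

    def cpu_time_percent(busy_seconds: float, interval_seconds: float) -> float:
        """CPU time of one virtual CPU relative to a fixed measuring interval."""
        return 100.0 * busy_seconds / interval_seconds

    def mhz_to_percent(used_mhz: float, core_clock_mhz: float) -> float:
        """Translate a CPU time demand given in MHz (cycles per second)
        into percent of one core, given that core's clock rate."""
        return 100.0 * used_mhz / core_clock_mhz

    print(cpu_time_percent(busy_seconds=0.3, interval_seconds=1.0))  # 30.0 %
    print(mhz_to_percent(used_mhz=650.0, core_clock_mhz=2600.0))     # 25.0 %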

VMs can provide one or more virtual CPUs to their guest operating system. The number of virtual CPUs of all VMs running on the same server can exceed the number of real hardware cores present. In this case, different virtual cores are mapped onto the same hardware core, and the CPU time is scheduled between them. At most, one virtual core can provide the full computing power of one real hardware core to its guest OS.

Common virtualization environments allow specifying upper and lower limits to control the distribution of CPU time to virtual CPUs that are mapped onto the same core. A lower limit guarantees a minimal CPU time to the respective virtual CPU, while an upper limit defines the maximal CPU time a virtual CPU can provide to its guest OS. Setting such limits helps to isolate different VMs from each other: if a service exceeds its expected CPU time demand because of failures or an attack, none of the other services running on the same server will be affected.
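The following sketch illustrates how such lower and upper limits constrain the CPU time a virtual CPU can obtain. The names and values are purely illustrative and do not correspond to the configuration interface of any particular VMM.

    from dataclasses import dataclass

    @dataclass
    class VCpuLimits:
        reservation: float  # lower limit: guaranteed CPU time in percent of one core
        cap: float          # upper limit: maximal CPU time in percent of one core

    def granted_cpu_time(demand: float, limits: VCpuLimits, free_capacity: float) -> float:
        """CPU time (in percent of one core) granted to a virtual CPU.

        The virtual CPU never receives more than its cap, never less than its
        reservation, and beyond the reservation only what is still free on the core."""
        guaranteed = limits.reservation
        extra = max(0.0, min(demand, limits.cap) - guaranteed)
        return guaranteed + min(extra, max(0.0, free_capacity))

    # A misbehaving VM demanding 100% is still held at its cap of 60%:
    print(granted_cpu_time(demand=100.0, limits=VCpuLimits(20.0, 60.0), free_capacity=80.0))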

The first virtualization environments supported only a fixed assignment of virtual CPUs to real cores that had to be set manually by the administrator. Modern ones such as VMware ESX Server and Citrix XenServer (when the Credit-Based CPU Scheduler [108] is used) integrate load-balancing schedulers that dynamically reassign the virtual CPUs according to current workload conditions. A big advantage of this dynamic assignment is that all cores of all CPUs can be regarded as one big pool of CPU time capacity. All virtual CPUs can individually take CPU time out of this pool as long as the sum of all demands does not exceed the overall capacity. If, for instance, three virtual CPUs require 40%, 80%, and 70% CPU time, this capacity can be provided by two real cores. The scheduler continuously reassigns the three virtual CPUs to the two real cores so that the CPU time provided to the virtual CPUs fits on average.
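A minimal sketch of this pooled view of CPU capacity: whether a set of CPU time demands fits on a server is checked against the sum of all core capacities rather than against individual cores. The figures reproduce the 40%/80%/70% example above; the function name is only illustrative.

    def fits_into_pool(demands_percent: list[float], num_cores: int) -> bool:
        """True if the summed CPU time demands (percent of one core each)
        do not exceed the pooled capacity of all cores."""
        pool_capacity = 100.0 * num_cores
        return sum(demands_percent) <= pool_capacity

    # Three virtual CPUs demanding 40%, 80%, and 70% fit onto two real cores,
    # because 190% <= 200%:
    print(fits_into_pool([40.0, 80.0, 70.0], num_cores=2))  # True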

Of course, the accuracy with which CPU time is scheduled to the virtual CPUs depends on the time interval that is regarded. Conventional schedulers reschedule the virtual CPUs with periods below 50 ms (XenServer: 30 ms). Hence, the actually used CPU time should no longer deviate significantly from the specified values after a few multiples of this period.

Running applications on virtual servers that require provisioning on smaller time scales is a big challenge in general, because any kind of resource scheduling must be able to deal with these small deadlines. This issue will not be addressed any deeper in this thesis; it must mainly be addressed by the virtualization environment and the underlying schedulers.

RAM

Memory capacity is allocated to a VM in different ways depending on the virtualization environment used. Citrix XenServer, for instance, allocates the complete memory that is assigned to a VM directly when the VM is started [109]. The amount of assigned memory can be changed by the VMM at runtime using a technique called ballooning. But the VMM is not aware of the amount of memory that is actually used by the guest OS in a VM. Hence, automatically adjusting the provided memory capacity to the demand of a VM is not possible without additional communication with the operating system. As a consequence, the resource management must allocate a fixed amount of memory capacity that satisfies the maximal demand of the VM ever expected in the future.

Full virtualization environments such as VMware ESX Server provide special hardware drivers for the simulated hardware components, as mentioned in Section 3.1.2. These drivers are installed in the guest OS and hence enable a kind of special communication between the guest OS and the VMM. The VMM knows the amount of memory capacity actually required by the guest OS at any time. Hence, it does not have to allocate the whole amount of memory assigned to a VM directly when the VM is started. It can allocate memory when it is needed. Especially memory that is no longer needed by the guest OS can be released at runtime and used by other VMs placed on the same server.
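The following sketch contrasts the two allocation models described in the previous paragraphs. It is a conceptual model only, not the actual memory management interface of XenServer or ESX Server; all names and figures are illustrative.

    def static_allocation(expected_demands_mb: list[int]) -> int:
        """Xen-like model: a fixed amount covering the maximal demand ever
        expected in the future must be allocated when the VM is started."""
        return max(expected_demands_mb)

    def demand_driven_allocation(current_demand_mb: int, assigned_mb: int) -> int:
        """ESX-like model: with guest drivers reporting actual usage, the VMM
        only backs the memory currently required (never more than assigned);
        the rest stays available for other VMs on the same server."""
        return min(current_demand_mb, assigned_mb)

    # A VM whose memory demand varies between 1 and 4 GB over time:
    forecast = [1024, 2048, 4096, 1536]
    print(static_allocation(forecast))                        # 4096 MB reserved up front
    print(demand_driven_allocation(1536, assigned_mb=4096))   # 1536 MB currently backed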

In principle, both kinds of virtualization techniques allow overbooking the memory: the virtual memory capacity provided to all VMs can exceed the capacity of the server. Swap files are used to compensate for the missing memory capacity. But the possible performance losses caused when memory pages must be reloaded from the swap file are hardly predictable [109]. Hence, the resource management concept presented in this thesis will not draw on this technique.
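A simple admission check capturing this design decision might look as follows: placements that would overbook the physical memory of a server are rejected, so the unpredictable swapping penalty is avoided. The function is a hypothetical sketch, not part of any VMM API.

    def placement_overbooks_memory(assigned_mb_per_vm: list[int], host_memory_mb: int) -> bool:
        """True if the memory assigned to all VMs on a server exceeds its
        physical capacity, i.e. the server would have to rely on swapping."""
        return sum(assigned_mb_per_vm) > host_memory_mb

    # Three VMs with 8, 16, and 12 GB on a 32 GB server would be overbooked:
    print(placement_overbooks_memory([8192, 16384, 12288], host_memory_mb=32768))  # True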

Network

The concept of virtual network adapters is widespread in most common server virtualization environments. One or more virtual network adapters can be assigned to a VM. Each of them has its own MAC address; hence, they look like physical ones from an outside view.

The direct assignment of physical network adapters to VMs is not supported in most cases. Instead, the VMM provides methods to connect virtual network adapters with physical ones. Citrix XenServer introduced the bridging concept [110] already known from the Linux operating system for this purpose. Different virtual and physical network adapters can be freely connected to each other, which allows a very flexible configuration. A similar approach is followed by VMware ESX Server, which introduced virtual switches (called vNetwork Standard Switches in VMware vSphere 4.X) that are a software implementation of a hardware switch [120]. Virtual as well as physical network adapters can be connected to these virtual switches, and several of them can be instantiated in one VMM.
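A conceptual model of this wiring is sketched below. The class and attribute names are illustrative only and do not correspond to the Xen bridge or vSphere switch APIs.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualSwitch:
        """Conceptual model of a virtual switch (or Linux-style bridge):
        virtual adapters of VMs and physical adapters of the server can be
        freely attached to it."""
        name: str
        virtual_adapters: set[str] = field(default_factory=set)   # vNIC MAC addresses
        physical_adapters: set[str] = field(default_factory=set)  # pNIC device names

        def attach_vnic(self, mac: str) -> None:
            self.virtual_adapters.add(mac)

        def attach_pnic(self, device: str) -> None:
            self.physical_adapters.add(device)

    # Two VMs and one physical uplink share the same virtual switch:
    vswitch = VirtualSwitch("vSwitch0")
    vswitch.attach_vnic("00:16:3e:aa:bb:01")
    vswitch.attach_vnic("00:16:3e:aa:bb:02")
    vswitch.attach_pnic("eth0")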

The capacity (throughput) of a virtual network adapter is typically much higher than that of a physical one. It is mainly limited by the CPU capacity provided to the VM and to the underlying VMM. But the virtual adapter is not the limiting resource in most cases. Different physical components contained in the server and especially the network infrastructure behind it can form the bottleneck, depending on the destination of the network traffic. Hence, the network capacity provided to a VM depends on all components of the network infrastructure (physical and virtual) but also on the destination of the traffic itself.
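The effective capacity toward a given destination can therefore be thought of as the minimum over all components on the path, as the following sketch illustrates. The component names and throughput figures are invented for illustration.

    def effective_throughput_mbits(path_capacities_mbits: dict[str, float]) -> float:
        """Effective network capacity toward one destination: the slowest
        component on the path (virtual or physical) forms the bottleneck."""
        return min(path_capacities_mbits.values())

    # The virtual adapter itself is rarely the limit; here the uplink is:
    path = {
        "virtual adapter":   10000.0,
        "physical NIC":       1000.0,
        "data center uplink":  600.0,
    }
    print(effective_throughput_mbits(path))  # 600.0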

A further challenge in virtualization-based data centers arises when live migration of VMs must be supported. Older server virtualization environments (e.g. VMware ESX 3.5 and XenServer) require that the same virtual bridges and virtual switches are instantiated on all possible destination servers of a VM. This way, the virtual network adapters can simply be connected to them on each of the servers. All of these duplicated virtual switches must be connected to the same underlying physical network. Moving a virtual adapter from one switch to another is not a problem for open connections due to the individual MAC and IP address: the respective network packets are simply routed to their new position by the switches.
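A pre-migration check in the spirit of this requirement could be sketched as follows. The host and switch names are hypothetical; real environments perform such compatibility checks internally.

    def migration_network_ready(required_switches: set[str],
                                switches_per_host: dict[str, set[str]],
                                destination_host: str) -> bool:
        """True if the destination server provides all virtual switches/bridges
        that the VM's virtual adapters are connected to on the source."""
        return required_switches <= switches_per_host.get(destination_host, set())

    switches_per_host = {
        "host-a": {"vSwitch0", "vSwitch1"},
        "host-b": {"vSwitch0"},
    }
    # A VM attached to vSwitch0 and vSwitch1 can move to host-a but not to host-b:
    print(migration_network_ready({"vSwitch0", "vSwitch1"}, switches_per_host, "host-a"))  # True
    print(migration_network_ready({"vSwitch0", "vSwitch1"}, switches_per_host, "host-b"))  # False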

VMware recently introduced a more holistic network concept into its newest server virtualization environment, VMware vSphere 4.X. So-called vNetwork Distributed Switches [121] have been introduced in addition to the vNetwork Standard Switches. They function as a single switch across different servers and, compared to the local switches, better support VM migration and load balancing in the whole network.