3.3.2 Limited Resources

The resource demand of VMs has been regarded in a rather abstract way so far. This section takes a closer look at the different resource types to determine which of them need to be considered by the resource management concept. In general, the limited resources for services in a data center are CPU time, network bandwidth, memory (RAM), and disk space.

In the following, they are discussed one by one.

CPU Time

CPU time provided to VMs is a resource that is generally limited by the capacity of the server on which the VMs are running. All VMs placed on the same server must share this capacity, as discussed in Section 3.1.4. Hence, CPU time must be considered by the resource management concept.

It has been further detailed in Section 3.1.4 that the capacities of the different CPUs contained in one server can be treated as one big resource pool of CPU time. This means that a single value can describe the CPU capacity of a whole server w.r.t. all contained CPUs and cores.

The CPU time consumed by a VM varies over time and can be split between the different virtual CPUs assigned to the VM. At a maximum, one virtual CPU can completely consume the time of one core of a hardware CPU. Due to the dynamic mapping of virtual CPUs to cores of real CPUs, the CPU time consumed by all virtual CPUs of one VM can be summed up to obtain the overall CPU time taken from the server's capacity.

A VM takes exactly the amount of CPU time it needs as long as enough CPU time capacity is provided. This amount of consumed CPU time is denoted as the CPU time demand of the VM within this thesis. A service will typically not fail completely when this demand exceeds the CPU time actually provided by the virtualization environment; it will only get slower. Hence, the provided CPU time can be adjusted to the demand of a VM to trade off service performance against resource capacity.
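For illustration, this relation between demand and provided CPU time can be expressed as a minimal Python sketch (hypothetical names and values, not part of the resource management concept itself):

    # CPU time demand of a VM: the sum over its virtual CPUs, which is
    # valid due to the dynamic mapping of virtual CPUs to real cores.
    def vm_cpu_demand(vcpu_demands):
        return sum(vcpu_demands)

    # A factor below 1.0 means the service gets slower but does not fail.
    def slowdown_factor(demand, provided):
        return 1.0 if demand <= provided else provided / demand

    # Example: two vCPUs demanding 1200 and 800 MHz, only 1500 MHz provided.
    demand = vm_cpu_demand([1200, 800])     # 2000 MHz
    print(slowdown_factor(demand, 1500))    # 0.75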

The CPU time required to meet a performance goal of a service can vary depending on the CPU type and the main board architecture of the server. Hence, not only the resource demand but also the server on which the VM is currently placed decides about the CPU time needed to meet a certain SLO. Such heterogeneities are not addressed within this thesis. It is assumed that providing a fixed amount of CPU time to a VM leads to the same performance independent of the server it is deployed on.

Network Bandwidth

An overview of common network concepts that support server virtualization has been given in Section 3.1.4. It has been pointed out that distributing VMs to servers while considering only their required bandwidth will fail in many cases. Additional information about the underlying network (virtual as well as physical) and the source or destination of the traffic itself must be taken into account.

In general, one would not place two VMs that each require a bandwidth of 80 MBit/s together on a server that is physically connected to the network via a 100 MBit/s network adapter. But if the whole traffic is exclusively caused by communication between these two VMs, placing them together on one server might be the best solution of all. The traffic is then routed directly through the local virtual net of the host, which is considerably faster and less expensive in terms of required resources. Hence, a more detailed view into the services implemented in the VMs and knowledge about the underlying network concept are required.
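This example can be expressed as a small calculation (a Python sketch using the hypothetical adapter capacity and traffic values from above):

    # Traffic that must pass the physical adapter when two VMs share a
    # server; traffic exchanged between them stays in the host's virtual net.
    ADAPTER_CAPACITY = 100.0  # MBit/s

    def external_bandwidth(vm_a_total, vm_b_total, internal_traffic):
        return (vm_a_total - internal_traffic) + (vm_b_total - internal_traffic)

    # Two VMs requiring 80 MBit/s each overload a 100 MBit/s adapter ...
    print(external_bandwidth(80, 80, 0) <= ADAPTER_CAPACITY)   # False (160 MBit/s)
    # ... unless their traffic consists solely of communication between them.
    print(external_bandwidth(80, 80, 80) <= ADAPTER_CAPACITY)  # True (0 MBit/s)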

It is expected that the administrator has fully worked out the network concept for the VMs controlled by the LPM system. This concept defines restrictions for the resource management concerning VMs that must, can, or must not be placed together on the same server. Furthermore, for each VM a list of invalid hosts is defined on which it must not be placed at all. In this way, the administrator can at least prevent two traffic-intensive VMs from using the same physical network adapter. Servers that do not provide appropriate network connections to a VM can be excluded as well. It is assumed that no SLA violations are caused by the network system when these restrictions are respected by the resource management.
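Such restrictions can be checked mechanically. The following minimal Python sketch (hypothetical data structures, not the actual LPM implementation) illustrates the idea:

    # Checks whether placing a VM on a server violates the restrictions
    # derived from the administrator's network concept.
    def placement_valid(vm, server, assignment, must_together, must_apart, invalid_hosts):
        if server in invalid_hosts.get(vm, set()):
            return False
        for other, host in assignment.items():  # already placed VMs
            pair = frozenset((vm, other))       # unordered VM pair
            if pair in must_together and host != server:
                return False  # VMs that must share a server are separated
            if pair in must_apart and host == server:
                return False  # VMs that must not share a server collide
        return True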

Additional restrictions concerning the actual network traffic caused by the VMs might be needed for resource management decisions as well. VMs that are connected to their clients via a physical network adapter must share its bandwidth, which should be considered, too. But a possible bottleneck might not be the adapter itself but the network infrastructure behind it.

Hence, the whole infrastructure must be regarded, which requires modeling the behavior of all components involved as well as the traffic of the services implemented in the VMs.

This complex research area is not worked out further within this thesis; it must be part of future work. The outlook section of the conclusion chapter will shortly outline how appropriate network models can be integrated into the resource management concept presented in this thesis. An interface can connect a holistic concept for network management (such as the one provided by VMware vSphere 4.X) with the LPM component to also consider network aspects.

Memory (RAM)

Comparable to CPU time, memory capacity is limited per server. Different servers can have different memory capacities. All VMs placed on the same server must share its memory capacity. Hence, memory must be considered by the resource management concept as well.

But in contrast to CPU time, the service implemented in a VM will completely fail when the memory demand exceeds the provided capacity⁴. Furthermore, the memory demand of a VM can either vary over time or be a fixed value at runtime, depending on the virtualization environment, as pointed out in Section 3.1.4.
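In contrast to the CPU time trade-off sketched above, memory thus imposes a hard feasibility constraint, as the following minimal Python sketch (hypothetical values) illustrates:

    # A placement is only feasible if the summed memory demand of all VMs
    # on a server stays within its capacity; swapping is ruled out here.
    def memory_feasible(vm_memory_demands, server_capacity):
        return sum(vm_memory_demands) <= server_capacity

    print(memory_feasible([4096, 8192, 2048], 16384))  # True
    print(memory_feasible([4096, 8192, 8192], 16384))  # False: service failure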

Disk Space and Disk I/O Operations

The storage system is centralized, as discussed in the previous section. Hence, the disk space provided to each VM is independent of the distribution of VMs. The number of active servers does not influence disk space either. Therefore, this resource type does not have to be considered by the resource management concept as a limited resource.

⁴As discussed in Section 3.1.4, swapping techniques allow exceeding the capacity by swapping out parts of the memory to disk. Because of the hardly predictable performance loss when memory content must be restored, the concept presented in this thesis will not draw on this technique.

The maximum throughput of storage I/O operations is typically limited by the storage system itself, not by the connection between the storage and the server. Hence, the distribution of VMs to servers does not influence the response time and throughput of storage I/O operations.

Modern virtualization environments typically support storage I/O prioritization that handles the I/O requests of different VMs based on shares and limits [126]. This storage access management works cluster-wide and hence is independent of the positions of VMs and the number of VMs placed on the same server.
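The principle of share-based prioritization with limits can be sketched as follows (a minimal Python sketch with hypothetical names; not the actual algorithm of [126]):

    # Distributes the available I/O bandwidth proportionally to the VMs'
    # shares; a VM never receives more than its individual limit, and the
    # surplus of capped VMs is redistributed among the remaining ones.
    def allocate_io(total_bandwidth, shares, limits):
        alloc = {vm: 0.0 for vm in shares}
        active = set(shares)
        remaining = total_bandwidth
        while active and remaining > 1e-9:
            total_shares = sum(shares[vm] for vm in active)
            capped = {vm for vm in active
                      if alloc[vm] + remaining * shares[vm] / total_shares >= limits[vm]}
            if not capped:
                for vm in active:
                    alloc[vm] += remaining * shares[vm] / total_shares
                break
            for vm in capped:
                remaining -= limits[vm] - alloc[vm]
                alloc[vm] = limits[vm]
            active -= capped
        return alloc

    # Example: 1000 IOPS split with shares 2:1:1, VM "a" limited to 300 IOPS.
    print(allocate_io(1000, {"a": 2, "b": 1, "c": 1}, {"a": 300, "b": 1000, "c": 1000}))
    # {'a': 300, 'b': 350.0, 'c': 350.0}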