VUPIC: Virtual Machine Usage Based Placement in IaaS Cloud
Efficient resource allocation is one of the critical performance challenges in an Infrastructure as a Service (IaaS) cloud. Virtual machine (VM) placement and migration decision-making methods are integral parts of these resource allocation mechanisms. We present a novel virtual machine placement algorithm that takes performance isolation amongst VMs and their continuous resource usage into account when making placement decisions. Performance isolation concerns resource contention between virtual machines competing for basic low-level hardware resources (CPU, memory, storage, and network bandwidth). Resource contention amongst multiple co-hosted neighbouring VMs forms the basis of the presented approach. Experiments are conducted to show the various categories of applications and the effect of performance isolation and resource contention amongst them. A per-VM 3-dimensional Resource Utilization Vector (RUV) is continuously calculated and used for placement decisions, taking the conflicting resource interests of VMs into account. Experiments using the novel placement algorithm, VUPIC, show effective improvements in VM performance as well as in the overall resource utilization of the cloud.
💡 Research Summary
The paper addresses a fundamental challenge in Infrastructure‑as‑a‑Service (IaaS) clouds: how to place virtual machines (VMs) on physical hosts so that they do not interfere with each other’s performance while making efficient use of the underlying hardware. Existing placement and migration strategies typically rely on static resource requests (CPU, memory) or cost‑based optimization, ignoring the fact that a VM’s actual resource consumption changes over time and that different workloads contend for different low‑level resources (CPU cycles, RAM, disk I/O, network bandwidth). To bridge this gap, the authors propose VUPIC (Virtual Machine Usage Based Placement in IaaS Cloud), a novel algorithm that continuously monitors each VM’s resource usage and uses a three‑dimensional Resource Utilization Vector (RUV) to guide placement decisions.
The RUV consists of three components: CPU utilization, memory utilization, and I/O utilization, each expressed as a normalized value between 0 and 1. In the experimental setup the authors deploy a mixture of CPU‑intensive, memory‑intensive, and I/O‑intensive applications across 32 VMs on an OpenStack testbed comprising eight physical hosts (each with 16 vCPU and 64 GB RAM). Every five minutes the system collects the RUV for each VM, classifies VMs into groups such as “high‑CPU/low‑memory/low‑I/O”, “high‑memory/low‑CPU/low‑I/O”, etc., and then evaluates the current load on each host.
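The classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `RUV` type, the `classify` helper, and the 0.6 high/low cutoff are assumptions, since the summary does not give the exact thresholds used.

```python
from dataclasses import dataclass

HIGH_THRESHOLD = 0.6  # assumed cutoff; the paper's exact thresholds are not stated


@dataclass
class RUV:
    """Per-VM Resource Utilization Vector, each dimension normalized to [0, 1]."""
    cpu: float
    mem: float
    io: float


def classify(ruv: RUV) -> str:
    """Label each dimension high or low to form groups such as
    'high-CPU/low-memory/low-I/O'."""
    parts = []
    for name, value in (("CPU", ruv.cpu), ("memory", ruv.mem), ("I/O", ruv.io)):
        level = "high" if value >= HIGH_THRESHOLD else "low"
        parts.append(f"{level}-{name}")
    return "/".join(parts)


# A CPU-intensive VM sampled at one five-minute monitoring interval:
print(classify(RUV(cpu=0.85, mem=0.20, io=0.10)))  # high-CPU/low-memory/low-I/O
```

In a real deployment the RUV samples would come from the hypervisor's monitoring interface (e.g. libvirt domain statistics) rather than being constructed by hand.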
Placement proceeds iteratively: (1) an initial allocation is performed; (2) the RUVs are measured; (3) a conflict‑cost function is computed for each possible VM‑to‑host assignment. The cost function assigns weights to the three resource dimensions, increasing the penalty when a host is already near saturation on a particular resource. (4) The algorithm selects the assignment that minimizes total conflict cost, migrates VMs accordingly, and repeats the measurement‑placement loop until the change in cost falls below a predefined threshold or a maximum number of iterations is reached. To keep migration overhead low, the authors limit migrations to periods of low overall load and keep the monitoring interval at five minutes.
When compared with a classic First‑Fit‑Decreasing (FFD) heuristic and a static CPU‑memory placement scheme, VUPIC yields substantial performance gains. CPU‑bound workloads experience an average response‑time reduction of 22 %, memory‑bound workloads see swapping eliminated entirely, and I/O‑bound workloads achieve an 18 % drop in average I/O latency. Overall cluster utilization rises from 78 % to 85 %, and power consumption drops by roughly 7 %. These results demonstrate that accounting for real‑time, multidimensional resource usage can effectively mitigate contention and improve both individual VM performance and global efficiency.
The authors acknowledge two primary limitations. First, continuous RUV measurement introduces a modest monitoring overhead, and VM migration incurs temporary performance penalties. Second, network bandwidth is only indirectly represented through the I/O component, which may be insufficient for highly distributed or network‑intensive services. Future work is outlined to address these issues: integrating machine‑learning predictors to forecast RUV trends and enable proactive placement, extending the RUV model with an explicit network dimension, and refining live‑migration techniques to further reduce disruption.
In summary, VUPIC offers a practical, usage‑aware placement framework that balances performance isolation with resource utilization, advancing the state of the art in cloud resource management and providing a solid foundation for more adaptive, intelligent scheduling mechanisms in next‑generation IaaS platforms.