A Method for Reducing Total Power Consumption in Cloud Computing Environments
The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network, and the power network in order to reduce the total power consumption of all ICT equipment in cloud computing environments. Five fundamental collaboration policies are proposed, and an algorithm to realize each policy is outlined. Next, the paper proposes possible signaling sequences for exchanging power-consumption information between the network and servers, in order to realize the proposed collaboration policies. Finally, to reduce the power consumed by the network, the paper proposes a method that simply estimates the power consumption of all network devices and assigns it to individual users.
💡 Research Summary
The paper addresses the rapidly growing power consumption of information and communication technology (ICT) equipment in cloud computing environments, emphasizing that isolated optimization of servers or network devices is insufficient for achieving substantial energy savings. Instead, the authors propose a holistic collaboration framework that integrates three critical layers: the computing servers, the communication network, and the electric power grid.
Five fundamental collaboration policies are introduced:
- Server‑Network Load Coupling Optimization – Aligns active servers with nearby network switches and routers, allowing idle network segments to enter low‑power or sleep modes.
- Power‑Grid‑Aware Dynamic Resource Allocation – Continuously monitors real‑time grid capacity and reallocates workloads to power‑efficient server pools when the grid is constrained, while exploiting surplus power zones for high‑performance processing.
- Traffic‑Prediction‑Based Power Scheduling – Utilizes machine‑learning models to forecast short‑term traffic (5–10 minutes ahead) and proactively powers on or off line‑cards and ports according to the predicted load.
- Integrated Power Monitoring and Feedback Loop – Aggregates instantaneous power usage from all devices into a central controller, which then drives real‑time load redistribution and power‑saving actions.
- User‑Level Power Cost Transparency – Maps the power consumption of each network element to the traffic volume and path weight of individual users, enabling per‑user billing and incentive mechanisms for energy‑conscious behavior.
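The first policy above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's algorithm): workloads are greedily consolidated onto as few servers as possible, and any switch left serving only idle servers becomes a candidate for sleep mode. The function and variable names, the first-fit-decreasing strategy, and the capacity model are all illustrative assumptions.

```python
def consolidate(loads, capacity, server_to_switch):
    """Greedy first-fit-decreasing consolidation (illustrative sketch).

    loads: dict server -> current load
    capacity: maximum load a single server can absorb
    server_to_switch: dict server -> the switch that serves it
    Returns (placement, sleepable_switches).
    """
    placement = {}  # server -> assigned load after consolidation
    # Pack the largest loads first onto already-active servers.
    for server, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for target, used in placement.items():
            if used + load <= capacity:
                placement[target] = used + load
                break
        else:
            placement[server] = load  # no room: activate a new server
    # A switch with no active server attached can be put to sleep.
    active_switches = {server_to_switch[s] for s in placement}
    sleepable = set(server_to_switch.values()) - active_switches
    return placement, sleepable
```

With three servers loaded at 40, 30, and 20 units against a capacity of 100, all work fits on one server, and the switch serving the other two can sleep.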
To realize these policies, the authors formulate a multi‑objective optimization problem that simultaneously minimizes total power, respects service‑level agreement (SLA) latency constraints, and satisfies grid‑capacity limits. The problem is expressed as a mixed integer linear program (MILP) with variables for server activation, task placement, routing decisions, and device power states. Because exact MILP solutions are impractical at data‑center scale, a hybrid heuristic combining genetic algorithms with simulated annealing is proposed, allowing near‑optimal solutions within seconds.
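To make the heuristic concrete, here is a minimal sketch of the simulated-annealing component of such a hybrid, applied to a toy version of the problem: choosing which servers to keep powered on. The cost function (100 units of power per active server, plus a large penalty if fewer than two servers are on, standing in for an SLA constraint) is a deliberately simplified stand-in for the full MILP objective; all numeric values are hypothetical.

```python
import math
import random

def anneal(cost, initial, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Generic simulated annealing: minimize cost(state)."""
    rng = random.Random(seed)
    state, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        # Accept improvements always; accept worsening moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling
    return best

def cost(on):
    """Toy objective: power of active servers plus a crude SLA penalty."""
    active = sum(on)
    power = 100 * active              # hypothetical watts per server
    sla_penalty = 1e6 if active < 2 else 0  # hypothetical SLA floor
    return power + sla_penalty

def neighbor(on, rng):
    """Flip one server's on/off state at random."""
    i = rng.randrange(len(on))
    flipped = list(on)
    flipped[i] = 1 - flipped[i]
    return tuple(flipped)

best = anneal(cost, initial=(1, 1, 1, 1, 1), neighbor=neighbor)
```

Starting with all five servers on, the annealer converges to the two-server minimum that satisfies the toy SLA floor. In the paper's setting, a genetic algorithm would supply diverse candidate solutions for this refinement step.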
A four‑step signaling protocol is defined to exchange power‑related information between servers and network devices. The steps are: (1) power‑state reporting, (2) load‑prediction transmission, (3) collaboration‑policy request, and (4) acknowledgment of applied actions. This protocol is designed to be carried over existing Software‑Defined Networking (SDN) control channels (e.g., OpenFlow) by appending custom TLVs, ensuring backward compatibility.
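The paper specifies the four signaling steps but not a wire format; the sketch below shows one plausible TLV (type-length-value) encoding that could be appended to an SDN control message. The numeric type codes and the 1-byte-type / 2-byte-length layout are assumptions for illustration only.

```python
import struct

# Hypothetical type codes for the four signaling steps; the paper does
# not define these values.
POWER_STATE_REPORT = 0x01
LOAD_PREDICTION    = 0x02
POLICY_REQUEST     = 0x03
ACTION_ACK         = 0x04

def encode_tlv(tlv_type: int, payload: bytes) -> bytes:
    """Pack one TLV: 1-byte type, 2-byte big-endian length, then value."""
    return struct.pack("!BH", tlv_type, len(payload)) + payload

def decode_tlvs(buf: bytes):
    """Yield (type, payload) pairs from a concatenated TLV buffer."""
    offset = 0
    while offset < len(buf):
        tlv_type, length = struct.unpack_from("!BH", buf, offset)
        offset += 3
        yield tlv_type, buf[offset:offset + length]
        offset += length
```

Because the TLVs are self-describing, a controller that does not understand a given type can skip it, which is what makes the scheme backward compatible with existing control channels.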
For the network‑side power allocation, the paper suggests a two‑phase method. First, each port and line‑card is profiled to obtain a static power‑per‑bit metric. Second, real‑time traffic matrices are multiplied by these metrics along the selected paths, yielding a per‑user power consumption estimate without requiring detailed device‑level power modeling.
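The second phase of this estimate reduces to a simple sum along the user's path. The sketch below assumes per-device power-per-bit profiles are already available from the first phase; the device names and profile numbers are purely illustrative.

```python
def per_user_power(traffic_bits, path, power_per_bit):
    """Estimate the energy attributable to one user: traffic volume
    multiplied by the profiled power-per-bit of each device on the path
    (the two-phase idea from the summary; names are illustrative)."""
    return traffic_bits * sum(power_per_bit[dev] for dev in path)

# Hypothetical phase-1 profile: joules per bit for each device type.
profile = {"edge_sw": 2e-9, "agg_sw": 3e-9, "core_rtr": 5e-9}

# A user pushing 1 Gbit across edge -> aggregation -> core:
# 1e9 bits * (2 + 3 + 5)e-9 J/bit = 10 J attributed to this user.
energy = per_user_power(1e9, ["edge_sw", "agg_sw", "core_rtr"], profile)
```

Because only a traffic matrix and a static profile are needed, the estimate avoids per-device power instrumentation, at the cost of ignoring load-dependent effects such as idle power shared across users.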
Simulation experiments using a realistic data‑center topology, synthetic workloads, and a time‑varying grid supply model demonstrate an average total power reduction of 18 % compared with conventional server‑only or network‑only optimization approaches. SLA violation rates remain below 0.3 %, confirming that performance is not sacrificed. The user‑level cost transparency mechanism further encourages traffic shaping that aligns with energy‑saving goals.
In conclusion, the study provides a concrete, implementable roadmap for achieving system‑wide energy efficiency in cloud environments through coordinated actions across computing, networking, and power infrastructures. The proposed policies, algorithms, signaling scheme, and power‑allocation technique are compatible with current commercial data‑center and carrier equipment, and they can be extended to future grids with higher renewable‑energy penetration. Future work is suggested in three areas: (i) refining traffic prediction accuracy with deep‑learning models, (ii) incorporating long‑term grid variability into scheduling decisions, and (iii) designing market‑based incentive structures that reward users for reducing their allocated power footprint.