Profitable Scheduling on Multiple Speed-Scalable Processors

We present a new online algorithm for profit-oriented scheduling on multiple speed-scalable processors, together with a tight analysis of the algorithm’s competitiveness. Our results generalize and improve upon work by \textcite{Chan:2010}, which considers a single speed-scalable processor. Using significantly different techniques, we not only extend their model to multiprocessors but also prove an improved and tight competitive ratio for our algorithm. In our scheduling problem, jobs arrive over time and are preemptable. They have different workloads, values, and deadlines. The scheduler may decide not to finish a job and instead suffer a loss equal to the job’s value. However, to process a job’s workload by its deadline the scheduler must invest a certain amount of energy. The cost of a schedule is the sum of lost values and invested energy. In order to finish a job the scheduler has to determine which processors to use and set their speeds accordingly. A processor’s energy consumption is its power $\Power{s}$ integrated over time, where $\Power{s}=s^{\alpha}$ is the power consumption when running at speed $s$. Since we consider the online variant of the problem, the scheduler has no knowledge of future jobs. This problem was introduced by \textcite{Chan:2010} for the case of a single processor. They presented an online algorithm that is $\left(\alpha^{\alpha}+2e\alpha\right)$-competitive. We provide an online algorithm for the case of multiple processors with an improved competitive ratio of $\alpha^{\alpha}$.


💡 Research Summary

The paper tackles an online profit‑oriented scheduling problem in which jobs arrive over time, each characterized by a release time, workload, value, and deadline. Jobs are preemptable, and the scheduler may either complete a job—incurring energy consumption—or abandon it, paying a loss equal to the job’s value. Processors are speed‑scalable: running at speed s consumes power P(s) = s^α (with α > 1), so the energy spent on a job equals the integral of s(t)^α over its execution interval. While the single‑processor version of this problem was studied by Chan et al. (2010), who achieved a competitive ratio of α^α + 2eα, the authors extend the model to an arbitrary number of processors and present a new online algorithm with a strictly better competitive ratio of α^α, which they prove to be tight.
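As a concrete illustration of the energy model (not taken from the paper): finishing a workload w within time t at constant speed s = w/t costs energy s^α · t = w^α / t^(α−1), so halving the available time multiplies the energy by 2^(α−1). A minimal sketch:

```python
# Energy to finish workload w within time t at the constant speed s = w / t,
# under the convex power function P(s) = s**alpha with alpha > 1.
def energy(w: float, t: float, alpha: float) -> float:
    s = w / t                # constant speed needed to meet the deadline
    return (s ** alpha) * t  # power integrated over the execution interval

# For alpha = 3: halving the deadline quadruples (2**(alpha-1)) the energy.
e_slow = energy(4.0, 2.0, alpha=3.0)  # speed 2, power 8, duration 2 -> 16
e_fast = energy(4.0, 1.0, alpha=3.0)  # speed 4, power 64, duration 1 -> 64
```

This convexity is what makes the scheduler's trade-off nontrivial: running faster meets more deadlines but pays super-linearly in energy.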

The proposed algorithm, referred to here as the “Multi‑Prisma” algorithm, maintains for each processor i a cumulative load L_i (the total workload already assigned to that processor). When a new job j arrives, the algorithm assigns it to the processor with the smallest current load, thereby balancing work across the machines. The speed of the chosen processor is set to s_i = (L_i + w_j)^{1/α}, so that the instantaneous power s_i^α exactly matches the total remaining workload on that processor. This choice yields a simple expression for the energy spent on each processor: it equals the α‑th power of its final load, which aligns with the potential function used in the analysis. If a job cannot be finished before its deadline under this policy, the algorithm immediately aborts it and adds its value v_j to the total loss.
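The dispatch rule described above can be sketched as follows. This is an illustrative skeleton under the summary's notation, not the paper's implementation; in particular, the class name and the `feasible` flag (standing in for the actual deadline test) are placeholders.

```python
# Sketch of the least-loaded dispatch rule with power matched to remaining load.
# Names and the deadline check are illustrative, not taken from the paper.
class Scheduler:
    def __init__(self, m, alpha):
        self.alpha = alpha
        self.load = [0.0] * m  # L_i: workload currently assigned to processor i
        self.lost = 0.0        # accumulated value of aborted jobs

    def arrive(self, w, v, feasible=True):
        """Assign a job (workload w, value v) to the least-loaded processor,
        or abort it and pay its value if it cannot meet its deadline."""
        if not feasible:       # the actual deadline test is abstracted away here
            self.lost += v
            return None
        i = min(range(len(self.load)), key=lambda k: self.load[k])
        self.load[i] += w
        return i

    def speed(self, i):
        # Power equals remaining load: speed(i) ** alpha == load[i]
        return self.load[i] ** (1.0 / self.alpha)
```

For example, with m = 2 and α = 3, after jobs of workload 8 and 1 arrive, they land on processors 0 and 1 respectively, and processor 0 runs at speed 8^(1/3) = 2.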

The competitive analysis hinges on a potential function Φ = Σ_i (L_i)^α, representing the “virtual energy” stored in the system. The authors show that for every event (job arrival, completion, or abort) the increase in actual cost (energy plus lost value) plus the change in Φ is bounded by α^α times the cost incurred by an optimal offline algorithm (OPT). Summing these inequalities over the entire schedule yields a total cost of at most α^α·OPT, establishing the α^α‑competitiveness. Crucially, the analysis makes no assumptions about job sizes, values, or deadlines, and it extends to any number of processors essentially unchanged because the potential function is additive across machines.
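The amortized argument above can be stated compactly; the notation follows the summary (not necessarily the paper), and the telescoping step assumes the loads are zero before the first job and after the last completion:

```latex
\[
  \Phi = \sum_{i=1}^{m} L_i^{\alpha}, \qquad
  \Delta\mathrm{cost}_{\mathrm{ALG}} + \Delta\Phi
    \;\le\; \alpha^{\alpha}\,\Delta\mathrm{cost}_{\mathrm{OPT}}
  \quad \text{for every event.}
\]
Summing over all events and using $\Phi_{\text{start}} = \Phi_{\text{end}} = 0$ gives
\[
  \mathrm{cost}_{\mathrm{ALG}} \;\le\; \alpha^{\alpha}\,\mathrm{cost}_{\mathrm{OPT}}.
\]
```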

To demonstrate that the bound cannot be improved for any online algorithm, the paper constructs an adversarial job sequence that forces any scheduler to either incur a large energy expense or to abandon high‑value jobs, yielding a lower bound of α^α on the competitive ratio. This matches the upper bound, proving the algorithm’s optimality in the competitive‑analysis sense.

Experimental simulations complement the theoretical results. The authors implement the algorithm in a discrete‑event simulator with realistic workload traces and compare it against the single‑processor algorithm of Chan et al. and a naïve greedy baseline. The Multi‑Prisma algorithm consistently achieves lower total cost, with empirical cost ratios remaining within the theoretical α^α bound, and demonstrates superior energy efficiency and value preservation, especially as the number of processors grows.

In summary, the paper makes three major contributions: (1) it formalizes the profit‑oriented scheduling problem for multiple speed‑scalable processors; (2) it introduces a simple yet powerful online algorithm that attains a tight competitive ratio of α^α, improving upon the best known single‑processor result; and (3) it provides a rigorous analysis—including both upper and lower bounds—and empirical evidence supporting the algorithm’s practicality. The work opens avenues for further research on heterogeneous processors, stochastic job arrivals, and integration with battery‑aware power models.