Partitioned Scheduling for DAG Tasks Considering Probabilistic Execution Time


Autonomous driving systems, being safety-critical, require real-time guarantees and can be modeled as DAGs. Hardware acceleration features such as caches and pipelining mean that actual execution times often fall well below the worst case. Thus, a probabilistic approach that ensures constraint satisfaction within a probability threshold is more suitable for these systems than worst-case guarantees. This paper provides probabilistic guarantees for DAG tasks by leveraging probabilistic schedulability results for single processors, which are more mature than those for multi-core processors. It proposes a task-set partitioning method that guarantees schedulability under partitioned scheduling. Evaluation on randomly generated DAG task sets demonstrates that the proposed method schedules more task sets with a smaller mean analysis time than existing probabilistic schedulability analyses for DAGs. The evaluation also compares four bin-packing heuristics, revealing that Item-Centric Worst-Fit-Decreasing schedules the most task sets.


💡 Research Summary

The paper addresses the challenge of providing real‑time guarantees for autonomous driving systems, whose computational workloads are naturally represented as directed acyclic graphs (DAGs). Modern automotive hardware (e.g., deep caches, aggressive pipelining) often executes tasks considerably faster than the worst‑case execution time (WCET) traditionally used in hard‑real‑time analysis, leading to overly pessimistic response‑time bounds. To mitigate this, the authors adopt a probabilistic execution‑time model, where each node (subtask) of a DAG is characterized by a probabilistic WCET (pWCET) distribution.

First, the system model is defined: a set of homogeneous cores runs a task set Γ = {τ₁,…,τₙ}. Each DAG task τᵢ = (Gᵢ, Tᵢ, Dᵢ, ρᵢ) consists of a graph Gᵢ = (Vᵢ, Eᵢ), a period Tᵢ, a relative deadline Dᵢ (with Dᵢ ≤ Tᵢ), and a deadline‑failure probability threshold ρᵢ. Nodes vᵢ,ⱼ ∈ Vᵢ have independent discrete pWCET distributions Cᵢ,ⱼ. By convolving all node distributions (operator ⊗), the total execution‑time distribution Cᵢ for the whole DAG is obtained.
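The convolution step above can be sketched in code. The following is a minimal illustration (not the paper's implementation), representing each discrete pWCET distribution as a dict mapping execution time to probability; the function and variable names are our own.

```python
def convolve(a, b):
    """Convolution (the paper's ⊗ operator) of two independent
    discrete execution-time distributions, given as
    {execution_time: probability} dicts."""
    out = {}
    for ta, pa in a.items():
        for tb, pb in b.items():
            out[ta + tb] = out.get(ta + tb, 0.0) + pa * pb
    return out

def total_distribution(node_dists):
    """Total execution-time distribution C_i of a DAG task:
    the convolution of all node distributions C_{i,j}."""
    total = {0: 1.0}  # neutral element: zero time with probability 1
    for dist in node_dists:
        total = convolve(total, dist)
    return total

# Example: two nodes, each taking time 1 with prob. 0.9 and 2 with prob. 0.1.
C = total_distribution([{1: 0.9, 2: 0.1}, {1: 0.9, 2: 0.1}])
# → probabilities ≈ {2: 0.81, 3: 0.18, 4: 0.01}
```

Note that repeated convolution grows the support of the distribution, which is why analysis time matters for large DAGs.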

Because jobs may be aborted when they miss their deadline, the authors introduce an "adjusted" total execution-time distribution C′ᵢ that caps any execution time exceeding Dᵢ at the deadline value. From C′ᵢ they compute an adjusted mean utilization Ū′ᵢ = E[C′ᵢ] / Tᵢ, which serves as the load measure for the bin-packing partitioning step.
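The capping and utilization steps, followed by a Worst-Fit-Decreasing assignment, could look as follows. This is a generic sketch under our own naming, and the heuristic shown is plain WFD, not necessarily the paper's exact Item-Centric variant.

```python
def adjust(dist, deadline):
    """Adjusted distribution C'_i: all probability mass above D_i is
    moved to D_i, modeling jobs aborted at the deadline."""
    out = {}
    for t, p in dist.items():
        key = min(t, deadline)
        out[key] = out.get(key, 0.0) + p
    return out

def mean_utilization(dist, period):
    """Adjusted mean utilization: E[C'_i] / T_i."""
    return sum(t * p for t, p in dist.items()) / period

def wfd_partition(utils, n_cores):
    """Worst-Fit-Decreasing: take tasks in order of decreasing
    utilization and place each on the currently least-loaded core."""
    loads = [0.0] * n_cores
    assignment = {}
    for task, u in sorted(utils.items(), key=lambda kv: -kv[1]):
        core = min(range(n_cores), key=lambda c: loads[c])
        loads[core] += u
        assignment[task] = core
    return assignment
```

For example, a distribution {2: 0.5, 5: 0.5} with deadline 4 becomes {2: 0.5, 4: 0.5}, giving a mean utilization of 0.3 for a period of 10; such per-task utilizations are then fed to `wfd_partition`.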

