Model Oriented Scheduling Algorithm for The Hardware-In-The-Loop Simulation
This paper presents an approach to designing software for the simulation of dynamical systems. An algorithm is proposed to obtain a schedule for calculating each phase variable of a stiff system of differential equations. The problem is classified as fixed-priority pre-emptive scheduling of periodic tasks. The Branch-and-Bound algorithm is modified to minimize the defined utilization function and to optimize the scheduling process for a numerical solver. A program implementing the experimental schedule is evaluated on a job-shop problem, demonstrating the effectiveness of the proposed algorithm.
💡 Research Summary
The paper addresses the challenging problem of real‑time simulation of stiff ordinary differential equation (ODE) systems within a Hardware‑In‑The‑Loop (HIL) environment. Such simulations must handle phase variables that evolve at widely differing rates, requiring high‑frequency updates for fast dynamics and lower‑frequency updates for slower processes, all while meeting strict timing constraints imposed by the Nyquist‑Shannon sampling theorem and the need for deterministic behavior.
The authors model each phase‑variable computation as a periodic real‑time task characterized by a worst‑case execution time (C_i) and a deadline (D_i). The set of tasks is treated as a fixed‑priority pre‑emptive scheduling problem, where the overall CPU utilization U = Σ(C_i/D_i) must be kept below a feasible bound. Traditional scheduling policies—Static Cyclic Scheduling (SCS), Earliest Deadline First (EDF), Rate‑Monotonic Scheduling (RMS), and Deadline‑Monotonic Scheduling (DMS)—are reviewed. While SCS offers determinism, it lacks flexibility; EDF provides dynamic priority assignment but can suffer from overhead and unpredictability; RMS and DMS are simple but assume independent tasks with fixed periods. None of these directly satisfy the HIL requirement of minimal context‑switch overhead combined with strict deadline adherence for tasks with highly disparate periods.
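The utilization test above can be sketched in a few lines. The task set below is hypothetical (not from the paper); the sketch computes U = Σ(C_i/D_i) and compares it against the classic Liu–Layland sufficient bound for Rate-Monotonic Scheduling, one of the policies the paper reviews.

```python
# Illustrative sketch of the schedulability check described above.
# Task parameters (C_i, D_i) are hypothetical, chosen for demonstration.

def utilization(tasks):
    """Total CPU utilization U = sum of C_i / D_i over all tasks."""
    return sum(c / d for c, d in tasks)

def rm_bound(n):
    """Liu-Layland sufficient utilization bound for RMS with n tasks."""
    return n * (2 ** (1 / n) - 1)

# (worst-case execution time, deadline) pairs -- hypothetical values
tasks = [(1, 4), (2, 10), (3, 20)]
u = utilization(tasks)
print(f"U = {u:.3f}, RM bound for n={len(tasks)}: {rm_bound(len(tasks)):.3f}")
# U = 0.600 is below the RM bound of ~0.780, so this set is schedulable
# under RMS; EDF would only require U <= 1.
```

Note that the bound is sufficient but not necessary: a task set exceeding it may still be schedulable, which is one motivation for exact, offline analysis such as the paper's Branch-and-Bound search.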
To bridge this gap, the paper proposes a model‑oriented scheduling algorithm that blends EDF principles with a block‑splitting technique. High‑frequency tasks are kept as single atomic blocks, whereas low‑frequency tasks are divided into multiple sub‑blocks that can be interleaved across several real‑time cycles (RT cycles). Each block starts and ends with a context‑switch interrupt, ensuring that no low‑frequency block can overrun the deadline of a higher‑frequency block. This approach effectively transforms the original set of tasks into a set of “windows” that can be statically arranged within a single RT cycle.
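The block-splitting idea can be illustrated with a minimal sketch. Assuming (hypothetically) an RT cycle in which the high-frequency task leaves a fixed amount of slack, a low-frequency task's execution time is carved into sub-blocks that each fit within that slack, so no sub-block can overrun a high-frequency deadline:

```python
# Hedged sketch of the block-splitting technique: a low-frequency task is
# divided into sub-blocks that each fit into the slack remaining in one
# RT cycle. All numbers here are illustrative, not taken from the paper.

def split_into_blocks(exec_time, slack_per_cycle):
    """Split a task of length exec_time into per-cycle sub-blocks,
    each no longer than slack_per_cycle."""
    blocks = []
    remaining = exec_time
    while remaining > 0:
        chunk = min(remaining, slack_per_cycle)
        blocks.append(chunk)
        remaining -= chunk
    return blocks

# Example: an RT cycle of 10 time units in which the high-frequency task
# uses 6, leaving slack 4. A 9-unit slow task is spread across 3 cycles.
print(split_into_blocks(9, 4))  # -> [4, 4, 1]
```

In the paper's scheme each such sub-block is additionally delimited by context-switch interrupts, whose overhead is folded into the block's execution time.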
The core of the optimization is a modified Branch‑and‑Bound (B&B) algorithm. The algorithm searches the space of possible block allocations to minimize the utilization function while guaranteeing that all deadlines are met. Key enhancements include:
- Early pruning when the partial utilization exceeds the best known solution.
- Immediate feasibility checks for deadline violations at each node.
- Dynamic adjustment of block sizes for tasks whose execution time exceeds a single cycle, allowing them to be split into smaller pieces.
- Incorporation of pre‑computed context‑switch overhead into each block’s execution time, ensuring realistic timing estimates.
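The pruning and feasibility ideas above can be sketched as follows. This is not the paper's exact algorithm: it assigns hypothetical blocks to RT cycles so as to minimize the peak per-cycle load (a stand-in for the paper's utilization function), pruning a branch as soon as its partial cost can no longer beat the best known solution, and rejecting any assignment that would violate the cycle capacity (the deadline for work placed in that cycle).

```python
# Hedged sketch of a Branch-and-Bound search with early pruning and
# per-node feasibility checks, in the spirit of the enhancements listed
# above. Block sizes, cycle count, and capacity are illustrative.

def branch_and_bound(blocks, n_cycles, capacity):
    """Assign each block to one RT cycle, minimizing the peak cycle load."""
    best = {"cost": float("inf"), "assignment": None}

    def recurse(i, loads, assignment):
        # Early pruning: the partial cost already matches or exceeds
        # the best complete solution found so far.
        if max(loads) >= best["cost"]:
            return
        if i == len(blocks):  # all blocks placed: record new best
            best["cost"] = max(loads)
            best["assignment"] = assignment[:]
            return
        for c in range(n_cycles):
            # Immediate feasibility check: the block must fit within
            # the cycle's remaining capacity (its deadline constraint).
            if loads[c] + blocks[i] <= capacity:
                loads[c] += blocks[i]
                assignment.append(c)
                recurse(i + 1, loads, assignment)
                assignment.pop()
                loads[c] -= blocks[i]

    recurse(0, [0] * n_cycles, [])
    return best["cost"], best["assignment"]

# Four blocks totalling 12 units spread over 2 cycles of capacity 8:
# the optimum balances them as {4, 2} and {3, 3}, peak load 6.
cost, assign = branch_and_bound([4, 3, 3, 2], n_cycles=2, capacity=8)
print(cost, assign)
```

Pre-computed context-switch overhead, as described in the last bullet, would simply be added to each entry of `blocks` before the search begins.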
The authors implement the algorithm in C++ and evaluate it on a classic job‑shop scheduling benchmark, where each “job” corresponds to a thread computing a subset of ODE phase variables. Compared with baseline EDF and RMS schedulers, the proposed method achieves:
- A reduction of average schedule length by more than 15 %.
- A decrease in the number of context switches by over 20 %.
- Consistent CPU utilization below 0.85, keeping the system safely within real‑time bounds.
- Preservation of all task deadlines, even when the ratio between the fastest and slowest periods spans several orders of magnitude.
The experimental results demonstrate that the algorithm not only improves computational efficiency but also maintains the deterministic timing required for HIL simulations. The authors argue that the static nature of the generated schedule (computed offline) eliminates runtime scheduling overhead, making the approach suitable for safety‑critical applications such as aerospace control, nuclear power plant monitoring, and defense systems.
In conclusion, the paper contributes a practical framework for HIL simulation software: a model‑driven representation of stiff ODE tasks, a block‑based EDF scheduling scheme, and a tailored Branch‑and‑Bound optimizer. The work successfully balances three competing objectives—accuracy, real‑time performance, and resource utilization. Future research directions include extending the method to multi‑core processors, handling dynamic task arrivals, and applying the technique to non‑linear or adaptive models where execution times may vary at runtime.