Heuristic based task scheduling in multiprocessor systems with genetic algorithm by choosing the eligible processor

In multiprocessor systems, task scheduling is one of the main determinants of system performance: the better the tasks are distributed among the processors, the better the performance. However, finding the optimal assignment of tasks to processors is NP-complete, so computing an exact solution takes prohibitively long for realistic problem sizes. Many evolutionary algorithms (e.g. genetic algorithms, simulated annealing) are therefore used to reach a near-optimal solution in reasonable time. In this paper we propose a heuristic for genetic-algorithm-based task scheduling in multiprocessor systems that chooses an eligible processor for each task by an educated guess. Comparison shows that this new heuristic-based GA takes less computation time to reach the suboptimal solution.


💡 Research Summary

The paper addresses the classic problem of task scheduling in multiprocessor systems, where the goal is to assign a set of inter‑dependent tasks to a collection of processors so as to minimize the overall makespan. Because the problem is NP‑complete, exact algorithms are impractical for realistic problem sizes, and meta‑heuristic approaches such as Genetic Algorithms (GAs) are commonly employed to obtain near‑optimal solutions in reasonable time. However, conventional GAs suffer from large search spaces and slow convergence, since the initial population and mutation operators are typically random.
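To make the objective concrete, the makespan of a candidate assignment can be evaluated as below. This is a minimal sketch under assumptions that are ours, not the paper's: tasks are numbered in topological order, `exec_time[t][p]` is task `t`'s runtime on processor `p`, and a fixed communication cost is paid only when a predecessor ran on a different processor.

```python
def makespan(assign, exec_time, preds, comm_cost):
    """assign[t] = processor chosen for task t (tasks in topological order)."""
    n_procs = len(exec_time[0])
    proc_free = [0.0] * n_procs      # time at which each processor becomes idle
    finish = [0.0] * len(assign)     # finish time of each task
    for t, p in enumerate(assign):
        # a task may start once data from all predecessors has arrived
        ready = max((finish[q] + (comm_cost if assign[q] != p else 0.0)
                     for q in preds[t]), default=0.0)
        start = max(ready, proc_free[p])
        finish[t] = start + exec_time[t][p]
        proc_free[p] = finish[t]
    return max(finish)
```

A GA's fitness function would simply be the (negated or inverted) makespan of the chromosome's assignment.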

To mitigate these drawbacks, the authors propose a heuristic that restricts the GA’s search to “eligible processors” for each task. The method proceeds as follows: (1) Model the task set as a Directed Acyclic Graph (DAG) and estimate, for every task i and processor p, the execution time ti,p and the communication cost with its predecessors. (2) Compute for each task the subset Ei of processors that yield the minimum estimated completion time; if several processors share the same minimum cost, all are included in Ei. (3) Initialise the GA population by assigning each task to a randomly chosen processor from its Ei, rather than to any processor. (4) During crossover, after generating offspring, any task assigned outside its Ei is automatically reassigned to the processor within Ei that gives the lowest cost. (5) Mutation follows the same rule, ensuring that the “eligible processor” constraint is always respected. This post‑processing step dramatically shrinks the search space while keeping the fitness function directly tied to makespan.
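Steps (2)-(4) above can be sketched as follows. Function and variable names are illustrative, not the paper's; `est_cost[t][p]` stands in for the estimated completion time of task `t` on processor `p` derived from the DAG.

```python
import random

def eligible_sets(est_cost):
    """Per task, every processor achieving the minimum estimated cost
    (ties are all included, as in step (2))."""
    sets = []
    for costs in est_cost:
        best = min(costs)
        sets.append([p for p, c in enumerate(costs) if c == best])
    return sets

def init_population(eligible, pop_size, rng=random):
    """Step (3): each gene is drawn from the task's eligible set,
    not from the full processor list."""
    return [[rng.choice(e) for e in eligible] for _ in range(pop_size)]

def repair(child, eligible, est_cost):
    """Step (4): after crossover, pull any out-of-set gene back to the
    cheapest processor within the task's eligible set."""
    for t, p in enumerate(child):
        if p not in eligible[t]:
            child[t] = min(eligible[t], key=lambda q: est_cost[t][q])
    return child
```

The repair pass is what shrinks the effective search space: offspring never leave the subspace of eligible assignments.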

The experimental evaluation uses two processor configurations (8 and 16 processors) and four task‑graph sizes (50, 100, 150, 200 tasks). For each size, several randomly generated DAGs with varying depth and average degree are tested, and each algorithm is run 30 times to obtain average results. The proposed heuristic‑guided GA is compared against three baselines: a standard GA with random initialization, Simulated Annealing, and the HEFT list‑scheduling heuristic.

Results show that the heuristic GA reduces computation time by roughly 30 %–45 % relative to the standard GA, with larger reductions for bigger task sets. In terms of makespan, the heuristic GA achieves performance within 5 % of the standard GA and typically matches or slightly outperforms HEFT. Simulated Annealing attains comparable makespan quality but requires substantially longer runtimes. The authors also introduce a “search‑enhancement” mechanism that, with a low probability (e.g., 5 %), allows a task to be assigned to a non‑eligible processor, thereby preserving some diversity in the population.
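The diversity-preserving mutation described above might look like the following sketch, assuming a per-gene mutation rate and the 5 % escape probability mentioned in the text; the parameter names and rates are our illustration, not the paper's.

```python
import random

def mutate(chrom, eligible, n_procs, rate=0.02, escape=0.05, rng=random):
    """With probability `rate`, mutate a gene; with probability `escape`
    the mutated task may land on ANY processor, otherwise it stays
    within its eligible set."""
    child = chrom[:]
    for t in range(len(child)):
        if rng.random() < rate:
            if rng.random() < escape:
                child[t] = rng.randrange(n_procs)   # non-eligible allowed
            else:
                child[t] = rng.choice(eligible[t])  # eligible only
    return child
```

Setting `escape` to zero recovers the strict eligible-processor rule; small positive values trade a slightly larger search space for population diversity.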

The paper acknowledges several limitations. First, the construction of the eligible‑processor sets relies on accurate a priori estimates of execution and communication costs; in real systems these parameters may be noisy or time‑varying. Second, overly restrictive eligible sets could diminish population diversity and increase the risk of premature convergence, a concern partially mitigated by the random‑assignment probability but not fully explored. Third, the experiments are confined to simulation; no validation on actual multicore, GPU, or cloud platforms is presented, leaving open questions about overheads, scalability, and energy consumption in real environments.

In conclusion, the study demonstrates that embedding a simple, problem‑specific heuristic into a GA can substantially accelerate convergence without sacrificing solution quality. Future work could explore adaptive mechanisms for dynamically updating eligible‑processor sets as runtime measurements become available, systematic tuning of the diversity‑preserving probability, and extensions to multi‑objective scheduling (e.g., energy‑aware or deadline‑driven criteria). Such directions would strengthen the practical applicability of the approach in heterogeneous and cloud‑based computing environments.