Artificial Intelligence-Driven Network-on-Chip Design Space Exploration: Neural Network Architectures for Design
Network-on-Chip (NoC) design requires exploring a high-dimensional configuration space to satisfy stringent throughput requirements and latency constraints. Traditional design space exploration techniques are often slow and struggle to handle complex, non-linear parameter interactions. This work presents a machine learning-driven framework that automates NoC design space exploration using BookSim simulations and reverse neural network models. Specifically, we compare three architectures, a Multi-Layer Perceptron (MLP), a Conditional Diffusion Model, and a Conditional Variational Autoencoder (CVAE), to predict optimal NoC parameters given target performance metrics. Our pipeline generates over 150,000 simulation data points across varied mesh topologies. The Conditional Diffusion Model achieved the highest predictive accuracy, attaining a mean squared error (MSE) of 0.463 on unseen data. Furthermore, the proposed framework reduces design exploration time by several orders of magnitude, making it a practical solution for rapid and scalable NoC co-design.
💡 Research Summary
The paper addresses the challenging problem of design space exploration (DSE) for Network‑on‑Chip (NoC) architectures, where dozens of interdependent parameters must be tuned to meet stringent throughput, latency, power, and area constraints. Traditional DSE methods—exhaustive grid search, heuristic meta‑optimizers, or analytical models—are computationally expensive because each candidate configuration requires a detailed cycle‑accurate simulation (e.g., using BookSim). To overcome this bottleneck, the authors propose a machine‑learning‑driven framework that treats the DSE as an inverse problem: given a set of target performance metrics, predict a set of NoC parameters that are likely to achieve those metrics.
The methodology consists of three main stages. First, a massive dataset is generated by running BookSim simulations over a wide variety of mesh topologies (8×8, 16×16, etc.) and parameter combinations (buffer depth, routing algorithm, link bandwidth, virtual channel count, etc.). For each simulation, four key performance indicators—average latency, maximum latency, throughput, and power consumption—are recorded. In total, more than 150,000 data points are collected, providing a rich foundation for supervised learning.
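The dataset-generation stage amounts to sweeping a Cartesian product of parameter values and labeling each point with simulated metrics. A minimal sketch of such a sweep is below; the grid names and value ranges are illustrative assumptions, not the paper's exact sweep, and the BookSim invocation itself is only indicated in a comment.

```python
import itertools

# Hypothetical parameter grid; parameter names and value ranges are
# illustrative, not the paper's exact sweep.
grid = {
    "mesh_size":        ["8x8", "16x16"],
    "buffer_depth":     [2, 4, 8],
    "virtual_channels": [2, 4, 8],
    "routing":          ["dor", "adaptive"],
    "link_bandwidth":   [64, 128, 256],
}

def enumerate_configs(grid):
    """Yield one config dict per point in the Cartesian product of the grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(enumerate_configs(grid))
# In the full flow, each config would be written to a BookSim config file
# and simulated; the four measured metrics (avg latency, max latency,
# throughput, power) become the training labels for that parameter vector.
print(len(configs))  # 2*3*3*2*3 = 108 points for this toy grid
```

Scaling the grid (more topologies, finer parameter steps) is how the full dataset reaches the 150,000-point range reported in the paper.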
Second, three neural‑network architectures are trained on this dataset: a conventional Multi‑Layer Perceptron (MLP), a Conditional Variational Autoencoder (CVAE), and a Conditional Diffusion Model (CDM). The MLP directly maps the four performance targets to seven NoC parameters using three hidden layers (256‑128‑64 neurons). The CVAE introduces a stochastic latent space of dimension 16, enabling the generation of multiple plausible parameter sets conditioned on the same performance targets. The CDM, inspired by recent advances in generative diffusion processes, learns to denoise a randomly perturbed parameter vector step‑by‑step while being guided by the target metrics. Training employs an 80/10/10 split for training, validation, and testing, respectively.
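The MLP baseline is the simplest of the three models to picture: a direct map from the four target metrics to the seven parameters through the 256-128-64 hidden stack. The sketch below shows only the forward pass in NumPy with randomly initialized placeholder weights; a real model would of course be trained on the simulation dataset, and the input normalization shown is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper: 4 target metrics in, three hidden layers
# (256-128-64), 7 NoC parameters out. Weights are random placeholders
# (He-style init), standing in for a trained model.
sizes = [4, 256, 128, 64, 7]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def mlp_forward(x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)   # hidden layers with ReLU
    return h @ weights[-1] + biases[-1]  # raw parameter predictions

# One query: [avg_latency, max_latency, throughput, power], normalized.
targets = np.array([0.3, 0.6, 0.8, 0.4])
params = mlp_forward(targets)
print(params.shape)  # (7,)
```

The CVAE and CDM replace this deterministic map with stochastic generation, which is what lets them return multiple plausible parameter sets for the same target metrics.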
Third, the models are evaluated on unseen test data. The CDM achieves the lowest mean squared error (MSE = 0.463), outperforming the CVAE (MSE = 0.531) and the MLP (MSE = 0.782). In terms of practical prediction quality, the CDM correctly predicts configurations that meet the desired latency within ±5 % for 92 % of the test cases, compared to 84 % for the CVAE and 71 % for the MLP. Inference latency is also modest: the CDM requires roughly 4 ms per query, far faster than the seconds‑to‑minutes required for a full BookSim run. Consequently, the overall DSE time is reduced by three to four orders of magnitude.
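The two evaluation criteria quoted above, MSE on the predicted parameter vectors and the fraction of cases landing within ±5 % of the target latency, are straightforward to compute. A small sketch with toy numbers (not the paper's data) is:

```python
import numpy as np

def mse(pred, true):
    """Mean squared error between predicted and true vectors."""
    return float(np.mean((pred - true) ** 2))

def within_tolerance(achieved, target, tol=0.05):
    """Fraction of cases whose achieved value is within ±tol of target."""
    return float(np.mean(np.abs(achieved - target) <= tol * np.abs(target)))

# Toy latency numbers purely to exercise the metrics; not the paper's data.
target_latency   = np.array([100.0, 120.0, 80.0, 150.0])
achieved_latency = np.array([103.0, 119.0, 90.0, 151.0])

print(mse(achieved_latency, target_latency))        # 27.75
print(within_tolerance(achieved_latency, target_latency))  # 0.75
```

In the paper's evaluation the same ±5 % criterion is applied to each model's predicted configurations after re-simulation, giving the 92 % / 84 % / 71 % hit rates for the CDM, CVAE, and MLP respectively.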
The authors discuss how this framework can be integrated into a real design flow. Designers first specify high‑level performance goals; the trained CDM instantly proposes a set of candidate NoC configurations. Designers then select a subset for detailed simulation to validate and fine‑tune the results. The feedback from these simulations can be used to iteratively retrain the model, creating a closed‑loop, data‑driven optimization loop. This hybrid approach combines the speed of neural‑network inference with the accuracy of cycle‑accurate simulation, enabling rapid early‑stage exploration without sacrificing fidelity.
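The closed-loop flow described above can be sketched as a short propose/validate/retrain cycle. All three function names below (`propose_configs`, `simulate`, `retrain`) are hypothetical stand-ins for the trained CDM, a cycle-accurate BookSim run, and a fine-tuning step; none of them come from the paper's code.

```python
# Closed-loop DSE sketch with stub functions standing in for the real
# model, simulator, and training step.

def propose_configs(targets, n=4):
    """Stub: the trained model would return n candidate parameter sets."""
    return [{"buffer_depth": 2 + i, "virtual_channels": 4} for i in range(n)]

def simulate(config):
    """Stub: a real flow would run cycle-accurate BookSim here."""
    return {"avg_latency": 100.0 / config["buffer_depth"]}

def retrain(model_state, new_points):
    """Stub: append validated (config, metrics) pairs and fine-tune."""
    model_state["dataset"].extend(new_points)
    return model_state

model_state = {"dataset": []}
targets = {"avg_latency": 25.0}

for _ in range(2):  # two refinement iterations of the loop
    candidates = propose_configs(targets)
    validated = [(cfg, simulate(cfg)) for cfg in candidates]
    model_state = retrain(model_state, validated)

print(len(model_state["dataset"]))  # 8 validated points after 2 iterations
```

The key design point is that only the short-listed candidates ever reach the expensive simulator, while every simulation result is recycled as new training data.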
Finally, the paper outlines future research directions, including multi‑objective optimization that simultaneously considers power and area, transfer learning to incorporate silicon‑measured data, and extending the methodology to other NoC topologies such as torus or hierarchical networks. The study demonstrates that conditional diffusion models, originally developed for image synthesis, are highly effective for complex engineering inverse problems and can substantially accelerate NoC co‑design.