Graph Neural Network-Based Predictor for Optimal Quantum Hardware Selection
The growing variety of quantum hardware technologies, each with distinct characteristics such as connectivity and native gate sets, creates challenges when selecting the best platform for executing a specific quantum circuit. This selection process usually involves a brute-force approach: compiling the circuit on various devices and evaluating performance based on factors such as circuit depth and gate fidelity. However, this method is computationally expensive and does not scale well as the number of available quantum processors increases. In this work, we propose a Graph Neural Network (GNN)-based predictor that automates hardware selection by analyzing the Directed Acyclic Graph (DAG) representation of a quantum circuit. Our study evaluates 498 quantum circuits (up to 27 qubits) from the MQT Bench dataset, compiled using Qiskit on four devices: three superconducting quantum processors (IBM-Kyiv, IBM-Brisbane, IBM-Sherbrooke) and one trapped-ion processor (IONQ-Forte). Performance is estimated using a metric that integrates circuit depth and gate fidelity, resulting in a dataset where 93 circuits are optimally compiled on the trapped-ion device, while the remaining circuits prefer superconducting platforms. By exploiting graph-based machine learning, our approach avoids manual feature extraction and instead embeds the circuit directly as a graph, significantly accelerating the optimal-target decision while preserving the full structural information. Experimental results show 94.4% accuracy and an 85.5% F1 score for the minority class, effectively predicting the best compilation target. The developed code is publicly available on GitHub (https://github.com/antotu/GNN-Model-Quantum-Predictor).
💡 Research Summary
The paper addresses the growing challenge of selecting the most suitable quantum processor for a given quantum circuit as the quantum hardware ecosystem diversifies. Traditional practice involves compiling the same circuit on multiple devices and comparing performance metrics such as circuit depth and gate fidelity—a brute‑force approach that quickly becomes computationally prohibitive as the number of available processors rises.
To overcome this bottleneck, the authors propose a machine‑learning solution based on Graph Neural Networks (GNNs). The key insight is to treat each quantum circuit as a Directed Acyclic Graph (DAG), where nodes represent quantum operations (gates) and edges encode the temporal ordering of those operations. Every node is described by a 66‑dimensional binary feature vector: a 36‑bit one‑hot encoding of the gate type, 27 bits indicating the target qubit (supporting circuits up to 27 qubits), and 3 bits for any rotation angle parameters. By feeding the raw DAG directly into a GNN, the method avoids any manual feature engineering and preserves the full structural information of the circuit.
The dataset consists of 498 circuits drawn from the MQT Bench benchmark suite, compiled with Qiskit for four real devices: three superconducting IBM processors (Kyiv, Brisbane, Sherbrooke) and one trapped‑ion processor (IONQ‑Forte). For each compilation the authors extract the circuit depth (D) and the fidelity of each gate (F_i), and define a cost function combining these two quantities; the device yielding the lowest cost is taken as the optimal compilation target for that circuit.
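To give a feel for how the GNN consumes the DAG, the sketch below implements one mean-aggregation message-passing step over the feature vectors, where each node blends its own features with the average of its predecessors'. The actual layer type, depth, and readout of the authors' model are not detailed here, so this plain-Python layer is an assumption used purely for illustration.

```python
# One mean-aggregation message-passing step over a DAG.
# `feats` is a list of per-node feature vectors; `edges` are (src, dst)
# pairs following the gates' temporal order. The 0.5/0.5 self/neighbor
# blend is a hypothetical choice, not the paper's architecture.
def message_passing_step(feats, edges):
    """Update each node by averaging its predecessors' features into its own."""
    n, d = len(feats), len(feats[0])
    out = [row[:] for row in feats]          # nodes with no predecessors keep self-features
    preds = {i: [] for i in range(n)}
    for src, dst in edges:
        preds[dst].append(src)
    for i in range(n):
        if preds[i]:
            for j in range(d):
                msg = sum(feats[p][j] for p in preds[i]) / len(preds[i])
                out[i][j] = 0.5 * feats[i][j] + 0.5 * msg
    return out

# Toy DAG with two nodes (0 -> 1) and 2-dimensional features
updated = message_passing_step([[1.0, 0.0], [0.0, 1.0]], [(0, 1)])
```

Stacking several such steps lets information about earlier gates propagate along the DAG before a final readout classifies the circuit's best target device.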