Cyber Risk Scoring with QUBO: A Quantum and Hybrid Benchmark Study
Assessing cyber risk in complex IT infrastructures poses significant challenges due to the dynamic, interconnected nature of digital systems. Traditional methods often fall short, relying on static, largely qualitative models that do not scale with system complexity and fail to capture systemic interdependencies. In this work, we introduce a novel quantitative approach to cyber risk assessment based on Quadratic Unconstrained Binary Optimization (QUBO), a formulation compatible with both classical computing and quantum annealing. We demonstrate the capabilities of our approach on a realistic 255-node layered infrastructure, showing how risk spreads in non-trivial patterns that are difficult to identify through visual inspection alone. To assess scalability, we further conduct extensive experiments on networks of up to 1,000 nodes, comparing classical, quantum, and hybrid classical-quantum workflows. Our results reveal that although quantum annealing produces solutions comparable to classical heuristics, its potential advantages are significantly hindered by the embedding overhead required to map the densely connected cyber-risk QUBO onto the limited connectivity of current quantum hardware. By contrast, hybrid quantum-classical solvers avoid this bottleneck and therefore emerge as a promising option, combining competitive scaling with an improved ability to explore the solution space and identify more stable risk configurations. Overall, this work delivers two main advances. First, we present a rigorous, tunable, and generalizable mathematical model for cyber risk that can be adapted to diverse infrastructures and domains through flexible parameterization. Second, we provide the first comparative study of classical, quantum, and hybrid approaches to cyber risk scoring at scale, highlighting the emerging potential of hybrid quantum-classical methods for large-scale infrastructures.
💡 Research Summary
The paper tackles the pressing problem of quantifying cyber risk in large, interconnected IT infrastructures by formulating the assessment as a Quadratic Unconstrained Binary Optimization (QUBO) problem. Traditional risk‑scoring methods—often qualitative checklists or static quantitative scores—fail to capture the systemic interdependencies that drive cascading failures. By encoding each asset’s initial risk score, patch‑status flag, and internet‑exposure flag into binary variables, the authors construct a Hamiltonian consisting of five weighted components: (1) a term that keeps final risk scores close to their initial values, (2) a penalty that reflects the danger of highly connected nodes, (3) a neighbor‑averaging term that models local propagation, (4) a penalty for connections involving unpatched or internet‑exposed assets, and (5) a strong discouragement of risk reduction for critical assets (initial score ≥ 7). The overall objective H = ∑ λ_k H_k is minimized, yielding a systemic risk configuration that emerges from the network topology and asset‑level attributes.
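The five-term structure can be sketched numerically. The toy below evaluates H = ∑ λ_k H_k for a candidate risk vector on a four-node graph; the specific penalty forms, sign conventions, and λ values are illustrative assumptions, not the paper's exact formulation (which additionally binarizes the risk scores to obtain a true QUBO):

```python
import numpy as np

# Toy 4-node graph: adjacency, initial risks, patch/exposure flags (all assumed).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]])
r0        = np.array([2.0, 3.0, 8.0, 1.0])  # initial risk scores (one seed = 8)
unpatched = np.array([0, 1, 0, 0])          # 1 = missing patches
exposed   = np.array([0, 0, 1, 0])          # 1 = internet-facing
lam = dict(fidelity=1.0, degree=0.2, neigh=0.5, flags=0.3, critical=5.0)

def energy(r):
    """Evaluate the five weighted components H = sum_k lambda_k * H_k."""
    deg = adj.sum(axis=1)
    h1 = np.sum((r - r0) ** 2)                   # (1) stay near initial scores
    h2 = -np.sum(deg * r)                        # (2) hubs should carry high risk
    neigh_avg = adj @ r / np.maximum(deg, 1)
    h3 = np.sum((r - neigh_avg) ** 2)            # (3) local neighbor averaging
    risky = unpatched | exposed
    h4 = -np.sum(adj * np.outer(risky, r))       # (4) links to flagged assets raise risk
    h5 = np.sum(np.where(r0 >= 7,                # (5) keep critical assets (r0 >= 7) high
                         np.maximum(r0 - r, 0) ** 2, 0.0))
    return (lam['fidelity'] * h1 + lam['degree'] * h2 + lam['neigh'] * h3
            + lam['flags'] * h4 + lam['critical'] * h5)

print(energy(r0))
```

Minimizing this energy over candidate vectors trades off fidelity to the initial assessment against the propagation and penalty terms, which is what lets systemic risk emerge from topology rather than from per-asset scores alone.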
To validate the model, the authors first build a synthetic yet realistic 255‑node layered network comprising workstations, network devices, servers, databases, and interleaved security layers (firewalls, IDS/IPS). Nodes receive random initial risk scores (1–4) and a single high‑risk seed (score = 8) is introduced to study propagation. The QUBO matrix generated from this topology is dense, reflecting the many pairwise interactions.
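A comparable test topology can be generated in a few lines. The layer names, counts (summing to 255), and one-edge-per-node wiring below are assumptions standing in for the paper's layered design:

```python
import random

random.seed(7)

# Asset layers interleaved with security layers; counts sum to 255 (assumed split).
layers = [("workstations", 120), ("firewalls", 10), ("network", 60),
          ("ids_ips", 10), ("servers", 40), ("databases", 15)]

node_layer, edges, offset, prev = {}, [], 0, []
for name, count in layers:
    current = list(range(offset, offset + count))
    for n in current:
        node_layer[n] = name
        if prev:                         # wire each node to one node in the layer above
            edges.append((n, random.choice(prev)))
    prev, offset = current, offset + count

# Random initial risk scores in 1-4, plus a single high-risk seed (score 8).
risk = {n: random.randint(1, 4) for n in node_layer}
seed = random.choice(list(node_layer))
risk[seed] = 8
```

From such a topology, every pairwise interaction term in the Hamiltonian contributes an off-diagonal QUBO entry, which is why the resulting matrix is dense.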
Three solution strategies are compared:
- Classical meta‑heuristic – Tabu Search, which uses a memory of recent solutions to escape local minima.
- Quantum annealing – D‑Wave 2000Q (Chimera) and newer Pegasus‑based quantum processors. Because the QUBO is densely connected, a minor‑embedding step is required to map logical variables onto the physical qubit graph.
- Hybrid quantum‑classical solver – D‑Wave’s Hybrid Solver, which performs embedding and initial search classically while delegating the most promising sub‑problems to the quantum annealer.
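The classical baseline can be illustrated with a minimal single-flip Tabu search over x^T Q x. This is a simplified sketch, not D-Wave's production TabuSampler (which uses a more elaborate multistart strategy), and the toy QUBO is random:

```python
import numpy as np

rng = np.random.default_rng(0)

def tabu_search(Q, n_iter=200, tenure=5):
    """Minimal single-flip Tabu search minimizing x^T Q x over x in {0,1}^n."""
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    best_x, best_e = x.copy(), x @ Q @ x
    tabu = {}  # variable index -> iteration until which flipping it is forbidden
    for it in range(n_iter):
        # Evaluate all single-bit flips; skip tabu moves unless they beat the
        # best energy found so far (aspiration criterion).
        cand = []
        for i in range(n):
            y = x.copy()
            y[i] ^= 1
            e = y @ Q @ y
            if tabu.get(i, -1) < it or e < best_e:
                cand.append((e, i, y))
        if not cand:
            continue
        e, i, y = min(cand, key=lambda t: t[0])
        x = y
        tabu[i] = it + tenure            # recently flipped variables stay tabu
        if e < best_e:
            best_x, best_e = y.copy(), e
    return best_x, best_e

# Toy dense symmetric QUBO.
Q = rng.normal(size=(12, 12))
Q = (Q + Q.T) / 2
x_best, e_best = tabu_search(Q)
```

The quantum and hybrid routes replace this loop with a sampler call; the key difference is that the pure-quantum path must first minor-embed the dense Q onto the hardware graph, while the hybrid solver decomposes the problem classically first.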
Experiments on the 255‑node case show that all three approaches achieve comparable energy values, but quantum annealing incurs a substantial overhead due to embedding, increasing total runtime by roughly 30 % relative to Tabu Search. The hybrid solver matches the classical runtime while delivering a modest (≈ 3 %) improvement in solution quality.
Scalability is examined on networks up to 1 000 nodes. Classical Tabu Search experiences a steep rise in runtime, making it impractical for very large instances. Pure quantum annealing frequently fails to embed the problem at this scale, leading to errors or excessive preprocessing time. In contrast, the hybrid solver successfully handles the 1 000‑node QUBO, achieving about a 5 % lower energy than the classical baseline and exhibiting near‑linear scaling in execution time. This demonstrates that the hybrid workflow effectively bypasses the embedding bottleneck that limits pure quantum approaches.
An additional “recursive QUBO minimization” experiment is introduced to probe solution stability. By repeatedly feeding the output of a solver back as the new initial condition, the authors observe that stable configurations reproduce the same risk profile across iterations, whereas unstable runs show increasing risk scores, indicating convergence to shallow local minima. This diagnostic offers practical insight into the reliability of the obtained risk assessments.
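The stability diagnostic amounts to a fixed-point check. In the sketch below, `toy_solve` is a purely illustrative stand-in for a QUBO solver call; the loop structure is what matters:

```python
def recursive_minimize(solve, r_init, max_rounds=10):
    """Feed each solution back as the next initial condition.

    A stable assessment reproduces itself (a fixed point); an unstable one
    keeps drifting, signalling convergence to a shallow local minimum.
    """
    r, history = r_init, [r_init]
    for _ in range(max_rounds):
        r_next = solve(r)
        history.append(r_next)
        if r_next == r:          # fixed point reached: profile is stable
            return history, True
        r = r_next
    return history, False        # still drifting after max_rounds

# Toy stand-in solver: pull every score up toward the network maximum minus one.
def toy_solve(r):
    return [min(10, max(v, max(r) - 1)) for v in r]

hist, stable = recursive_minimize(toy_solve, [2, 3, 8, 1])
```

Here the profile settles after one round, so the run would be flagged as stable; a solver stuck in a shallow minimum would instead produce a growing sequence of risk scores.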
Key insights from the study include:
- Embedding overhead is the dominant limitation for current quantum annealers when tackling dense, large‑scale QUBO formulations typical of cyber‑risk problems.
- Hybrid quantum‑classical methods provide the best trade‑off between solution quality, runtime, and scalability, making them the most promising avenue for near‑term deployment.
- The QUBO framework is highly modular; by adjusting the λ‑weights or adding new Hamiltonian terms, practitioners can tailor the model to different industries, incorporate additional risk factors (e.g., supply‑chain dependencies), or reflect evolving threat landscapes.
- Recursive minimization serves as a useful stability metric, helping organizations decide whether a given risk assessment is robust enough for operational decision‑making.
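The modularity point can be made concrete on a two-asset toy: re-weighting the propagation term changes which configuration minimizes the energy. Term forms and λ values here are assumptions chosen for illustration:

```python
import itertools
import numpy as np

# Two connected assets; asset 0 starts at high risk (1), asset 1 at low (0).
r0 = np.array([1, 0])

def H(r, lam_fid, lam_neigh, lam_crit=3.0):
    fidelity = np.sum((r - r0) ** 2)          # H1: stay near initial scores
    neigh    = (r[0] - r[1]) ** 2             # H3: connected assets pull together
    critical = max(r0[0] - r[0], 0) ** 2      # H5: don't lower the critical asset
    return lam_fid * fidelity + lam_neigh * neigh + lam_crit * critical

def argmin(lam_fid, lam_neigh):
    """Brute-force the 2^2 configurations for the given weights."""
    return min(itertools.product([0, 1], repeat=2),
               key=lambda r: H(np.array(r), lam_fid, lam_neigh))

print(argmin(1.0, 0.1))   # fidelity dominates: initial profile (1, 0) is kept
print(argmin(1.0, 5.0))   # propagation dominates: risk spreads to (1, 1)
```

The same mechanism extends to new Hamiltonian terms, e.g. a supply-chain dependency penalty, without changing the solver side at all.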
In conclusion, the paper delivers a rigorous, tunable mathematical model for cyber risk that captures both asset‑level vulnerabilities and network‑level propagation dynamics. It also provides the first systematic benchmark of classical, quantum, and hybrid solvers on realistic IT topologies up to 1 000 nodes. While pure quantum annealing currently suffers from hardware connectivity constraints, hybrid quantum‑classical workflows emerge as a viable path forward, offering improved solution quality and scalability. Future work should focus on next‑generation quantum hardware with richer connectivity, dynamic QUBO updates for real‑time monitoring, and integration with existing security information and event management (SIEM) platforms to translate the theoretical risk scores into actionable defense strategies.