Exact Verification of Graph Neural Networks with Incremental Constraint Solving

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original ArXiv source.

Graph neural networks (GNNs) are increasingly employed in high-stakes applications, such as fraud detection or healthcare, but are susceptible to adversarial attacks. A number of techniques have been proposed to provide adversarial robustness guarantees, but support for commonly used aggregation functions in message-passing GNNs is lacking. In this paper, we develop an exact (sound and complete) verification method for GNNs to compute guarantees against attribute and structural perturbations that involve edge addition or deletion, subject to budget constraints. Our method employs constraint solving with bound tightening, and iteratively solves a sequence of relaxed constraint satisfaction problems while relying on incremental solving capabilities of solvers to improve efficiency. We implement GNNev, a versatile exact verifier for message-passing neural networks, which supports three aggregation functions, sum, max and mean, with the latter two considered here for the first time. Extensive experimental evaluation of GNNev on real-world fraud datasets (Amazon and Yelp) and biochemical datasets (MUTAG and ENZYMES) demonstrates its usability and effectiveness, as well as superior performance for node classification and competitiveness on graph classification compared to existing exact verification tools on sum-aggregated GNNs.


💡 Research Summary

The paper introduces GNNev, an exact (sound and complete) verification framework for message‑passing graph neural networks (GNNs) that supports the three most common aggregation functions—sum, max, and mean—and handles both attribute and structural perturbations under global and local budget constraints. Existing exact verifiers are limited to sum aggregation and, in the case of structural attacks, to edge deletion only. By extending the constraint‑satisfaction problem (CSP) encoding to max and mean, the authors overcome the non‑linearities inherent in these aggregations: mean is linearised with a big‑M formulation, while max is modelled using additional binary selector variables and ordering constraints. Both encodings are paired with specialised bound‑tightening strategies that shrink variable domains before solving, dramatically reducing the search space.
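To make the binary-selector idea for max concrete, the following is a minimal sketch of the feasibility check a MIP solver enforces when modelling y = max(x_1, …, x_n) with big-M constraints and one binary selector per input. This is a generic illustration of the standard encoding, not the paper's exact formulation; the function name, tolerance, and big-M value are assumptions.

```python
def max_encoding_holds(xs, y, sel, big_m=1e6, tol=1e-9):
    """Check the standard big-M constraints for y = max(xs).

    Constraints: sum(sel) == 1;  y >= x_i for all i;
    y <= x_i + M * (1 - sel_i).  Together they pin y to the
    selected x_i, which must therefore be the maximum.
    """
    if abs(sum(sel) - 1) > tol:                 # exactly one selector active
        return False
    for x, s in zip(xs, sel):
        if y < x - tol:                          # y upper-bounds every x_i
            return False
        if y > x + big_m * (1 - s) + tol:        # y pinned to the selected x_i
            return False
    return True

xs = [0.3, 1.7, -0.5]
# selecting the argmax with y = max(xs) satisfies all constraints
assert max_encoding_holds(xs, 1.7, [0, 1, 0])
# any y strictly above the maximum violates the selected upper bound
assert not max_encoding_holds(xs, 2.0, [0, 1, 0])
# selecting a non-maximal element leaves y >= max(xs) infeasible
assert not max_encoding_holds(xs, 0.3, [1, 0, 0])
```

In an actual solver the x_i are themselves variables, so tight bounds on them (from the bound-tightening pass) let M be chosen much smaller, which is exactly why the domain shrinking described above matters.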

Structural perturbations are modelled with a flexible edge set F; any edge in F may be deleted or added, subject to a global budget Δ and per‑node budgets δ_v. Attribute perturbations are expressed as real‑valued variables bounded by per‑feature lower and upper limits ε^l_{v,i}, ε^u_{v,i}. The admissible perturbation space Q(G) thus captures a realistic attacker model that can simultaneously modify node features and the graph topology.
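This attacker model can be sketched as a simple admissibility check: a perturbation is allowed when every flipped edge lies in F, the global budget Δ and per-node budgets δ_v are respected, and each feature stays within its ε-interval. All names and the frozenset edge representation below are illustrative assumptions, not the paper's notation.

```python
def is_admissible(flipped, F, delta, delta_v, x, x0, eps_l, eps_u):
    """flipped: set of frozenset({u, v}) edges added or deleted.
    F: flexible edge set; delta: global budget Δ;
    delta_v: per-node budgets δ_v; x / x0: perturbed / original
    feature dicts; eps_l / eps_u: per-feature bounds.
    """
    if not flipped <= F or len(flipped) > delta:
        return False
    touched = {}
    for e in flipped:                    # count edits incident to each node
        for v in e:
            touched[v] = touched.get(v, 0) + 1
    if any(c > delta_v.get(v, 0) for v, c in touched.items()):
        return False
    # attribute perturbations must stay inside their per-feature interval
    return all(x0[k] - eps_l[k] <= x[k] <= x0[k] + eps_u[k] for k in x0)

F = {frozenset({0, 1}), frozenset({1, 2})}
# one flipped edge within all budgets, feature within its ε-interval
assert is_admissible({frozenset({0, 1})}, F, delta=1, delta_v={0: 1, 1: 1},
                     x={0: 0.4}, x0={0: 0.5}, eps_l={0: 0.2}, eps_u={0: 0.2})
# flipping both edges exceeds the global budget Δ = 1
assert not is_admissible(F, F, delta=1, delta_v={0: 2, 1: 2, 2: 2},
                         x={0: 0.5}, x0={0: 0.5}, eps_l={0: 0.2}, eps_u={0: 0.2})
```

In the verifier these conditions appear as linear constraints over binary edge-flip variables and real attribute variables rather than as a post-hoc check.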

A key contribution is the incremental solving algorithm. Rather than encoding the entire K‑layer GNN at once, the verifier proceeds layer‑by‑layer. After each layer’s constraints are added, bound‑tightening propagates new lower and upper bounds on the hidden representations; these bounds are fed forward to the next layer, allowing the underlying SAT/SMT or MIP solver to reuse previously solved sub‑problems. This incremental approach leverages modern solvers’ ability to add constraints without restarting the search, yielding substantial speed‑ups especially for deeper networks.
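The layer-by-layer bound propagation can be sketched with plain interval arithmetic pushed through one affine + ReLU layer. This is a generic illustration of bound tightening, not GNNev's actual procedure, and all names are assumptions; in the real verifier the tightened bounds would be handed to the incremental solver before the next layer's constraints are added.

```python
def propagate_layer(lo, hi, W, b):
    """Push elementwise bounds [lo, hi] through x -> ReLU(W^T x + b).

    W[i][j] is the weight from input i to output j; a positive weight
    maps lower bounds to lower bounds, a negative weight swaps them.
    """
    new_lo, new_hi = [], []
    for j in range(len(b)):
        l = h = b[j]
        for i, (li, ui) in enumerate(zip(lo, hi)):
            w = W[i][j]
            if w >= 0:
                l += w * li
                h += w * ui
            else:
                l += w * ui
                h += w * li
        new_lo.append(max(0.0, l))       # ReLU clips both bounds at zero
        new_hi.append(max(0.0, h))
    return new_lo, new_hi

# bounds tightened after one layer seed the encoding of the next layer
lo, hi = propagate_layer([0.0, 1.0], [1.0, 2.0],
                         W=[[1.0, -1.0], [2.0, 0.0]], b=[0.0, 1.0])
assert (lo, hi) == ([2.0, 0.0], [5.0, 1.0])
```

Iterating this per layer, and re-asserting only the new constraints via the solver's incremental interface (e.g. push/add in SMT solvers), is what avoids re-solving the full K-layer encoding from scratch.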

The authors evaluate GNNev on real‑world fraud‑detection graphs used for node classification (Amazon, Yelp) and on biochemical graph‑classification datasets (MUTAG, ENZYMES). Compared against SCIP‑MPNN, the only known exact verifier for sum‑aggregated GNNs, GNNev achieves 2–5× faster verification on sum aggregation while also supporting max and mean aggregations, for which no exact tool previously existed. Experiments demonstrate that bound tightening reduces variable domains by 30–45 % in the max/mean cases, and that the incremental strategy cuts total solving time by up to 60 % for five‑layer networks. The framework successfully certifies robustness against combined attribute and structural attacks, providing worst‑case margin guarantees for the target node or graph class.

In summary, the paper fills two major gaps in GNN verification: (1) exact handling of non‑linear aggregations (max, mean) and (2) support for edge addition alongside deletion. By integrating these capabilities with an efficient incremental CSP encoding and aggressive bound propagation, GNNev delivers a practical, scalable tool for certifying the adversarial robustness of GNNs in high‑stakes applications such as financial fraud detection, healthcare, and cybersecurity.

