The Superiority of Graph Neural Networks for Predicting Graph Domination Numbers

Reading time: 5 minutes

📝 Abstract

We investigate machine learning approaches to approximating the \emph{domination number} of graphs, the minimum size of a dominating set. Exact computation of this parameter is NP-hard, restricting classical methods to small instances. We compare two neural paradigms: Convolutional Neural Networks (CNNs), which operate on adjacency matrix representations, and Graph Neural Networks (GNNs), which learn directly from graph structure through message passing. Across 2,000 random graphs with up to 64 vertices, GNNs achieve markedly higher accuracy ($R^2 = 0.987$, MAE $= 0.372$) than CNNs ($R^2 = 0.955$, MAE $= 0.500$). Both models offer substantial speedups over exact solvers, with GNNs delivering more than $200\times$ acceleration while retaining near-perfect fidelity. Our results position GNNs as a practical surrogate for combinatorial graph invariants, with implications for scalable graph optimization and mathematical discovery.


📄 Content

Graphs provide a unifying mathematical framework for modeling complex systems, from communication and transportation networks to molecular structures and biological interactions. Among the many invariants studied in graph theory, the domination number γ(G) plays a central role in problems of resource allocation, coverage, and network security. A dominating set is a subset of vertices D such that every vertex outside D has a neighbor in D; the domination number is the minimum size of such a set. Foundational work in the 1970s established the basis of domination theory Cockayne and Hedetniemi [1977], Allan and Laskar [1978], with comprehensive treatments consolidating its importance in structural graph theory Haynes et al. [1998]. However, computing γ(G) is NP-complete Garey and Johnson [1979], and even approximation guarantees are constrained by hardness-of-approximation results Feige [1998]. Classical heuristics Parekh [1991] and refined exact algorithms Fomin et al. [2009] remain restricted to small instances, underscoring the need for scalable alternatives.
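The definitions above can be made concrete with a small sketch. The brute-force routine below (our illustration, not a method from the paper) checks subsets in increasing size and returns the first size that dominates the graph; its exponential cost is exactly why exact computation is limited to small instances.

```python
from itertools import combinations

def domination_number(n, edges):
    """Exact domination number gamma(G) via brute-force search.

    A set D dominates G if every vertex is in D or adjacent to some
    vertex of D. We try subset sizes k = 1, 2, ... and return the
    first k that admits a dominating set. Exponential time, hence
    only feasible for small graphs.
    """
    # Closed neighborhoods: N[v] = {v} together with v's neighbors
    closed = [{v} for v in range(n)]
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            covered = set()
            for v in D:
                covered |= closed[v]
            if len(covered) == n:
                return k
    return n

# The 4-cycle C4 needs two vertices, e.g. {0, 2}
print(domination_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 2
```

For the 4-cycle, no single vertex's closed neighborhood covers all four vertices, but two opposite vertices do, so γ(C₄) = 2.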

One emerging approach is to replace algorithmic computation with prediction. Rather than solving each instance of domination from scratch, a machine learning model can be trained to map graph structure directly to γ(G). The idea of “learning to optimize” has gained traction across combinatorial domains Khalil et al. [2017], with recent work demonstrating that neural models can approximate graph invariants such as stability numbers Davila [2024]. Domain-specific tools such as GraphCalc Davila [2025] now enable systematic experimentation with these surrogates, opening the door to large-scale studies that would be infeasible with classical solvers alone.
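The surrogate idea can be sketched end to end: label a corpus of small random graphs with an exact solver, extract features, and fit a regressor that maps structure to γ(G). The sketch below is purely illustrative (hand-crafted features and a linear least-squares fit stand in for the learned models studied in the paper; `domination_number` and `features` are our hypothetical helpers).

```python
import numpy as np
from itertools import combinations

def domination_number(A):
    """Brute-force gamma(G) from an adjacency matrix, used only to label data."""
    n = len(A)
    closed = [set(np.flatnonzero(A[v])) | {v} for v in range(n)]
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            if len(set().union(*(closed[v] for v in D))) == n:
                return k

def features(A):
    """Hand-crafted graph features; a stand-in for learned embeddings."""
    deg = A.sum(axis=1)
    return [len(A), A.sum() / 2, deg.mean(), deg.max(), deg.min()]

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(200):                        # tiny Erdos-Renyi dataset
    n = rng.integers(5, 12)
    A = (rng.random((n, n)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T          # symmetric, no self-loops
    X.append(features(A)); y.append(domination_number(A))
X, y = np.array(X), np.array(y, dtype=float)

# Linear least-squares surrogate: predict gamma from graph features
Xb = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
print(round(float(np.abs(pred - y).mean()), 3))  # in-sample MAE
```

Even this crude baseline amortizes the solver's cost across predictions; the paper's point is that GNNs do so with far higher fidelity.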

At the architectural level, the natural candidates are Graph Neural Networks (GNNs). Originating from early formulations of graph-based recurrent networks Scarselli et al. [2009], the field has rapidly advanced through gated updates Li et al. [2016], graph convolutions Kipf and Welling [2017], inductive methods Hamilton et al. [2017], and general message passing Gilmer et al. [2017]. Of particular relevance is the Graph Isomorphism Network (GIN) Xu et al. [2019], which matches the expressive power of the Weisfeiler-Lehman test Morris et al. [2019] and has become a standard choice for learning structural graph properties.
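A single GIN-style update can be written in a few lines. The sketch below (NumPy only, with random illustrative weights, not the paper's trained architecture) applies the update $h_v \leftarrow \mathrm{MLP}\big((1+\epsilon)h_v + \sum_{u \in N(v)} h_u\big)$ from Xu et al. [2019] and sum-pools vertex features into a graph embedding that a final regressor would map to a predicted γ(G).

```python
import numpy as np

def gin_layer(A, H, W1, W2, eps=0.0):
    """One GIN-style update:
        h_v <- MLP((1 + eps) * h_v + sum of neighbor features),
    with a two-layer ReLU MLP. A is the n x n adjacency matrix,
    H the n x d vertex-feature matrix; W1, W2 are weight matrices.
    """
    agg = (1.0 + eps) * H + A @ H           # self term + neighborhood sum
    hidden = np.maximum(agg @ W1, 0.0)      # ReLU
    return np.maximum(hidden @ W2, 0.0)

def readout(H):
    """Sum pooling over vertices yields a graph-level embedding."""
    return H.sum(axis=0)

# Tiny demo on the path P3 with constant initial features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.ones((3, 4))                         # 3 vertices, 4 features each
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 8))
print(readout(gin_layer(A, H, W1, W2)).shape)  # → (8,)
```

Because the update aggregates neighbors by summation, relabeling the vertices permutes the rows of `H` but leaves the pooled embedding unchanged, which is the inductive bias that makes GNNs graph-native.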

As a contrasting paradigm, Convolutional Neural Networks (CNNs) have been adapted to graphs by treating adjacency matrices as images. This approach leverages convolutional filters to capture local connectivity Tixier et al. [2018] and, in certain settings, has even rivaled or outperformed graph-based models Boronina et al. [2023]. Yet CNNs discard the permutation invariance intrinsic to graphs, raising questions about their suitability for learning combinatorial invariants that depend on global graph structure.
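The loss of permutation invariance is easy to demonstrate. The sketch below (our illustration, not the paper's CNN) slides a single convolutional filter over two adjacency matrices of the same path graph under different vertex labelings: the graph is unchanged, but the feature maps differ.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation, standing in for one CNN
    filter applied to an adjacency matrix treated as an image."""
    n, m = img.shape
    k = kernel.shape[0]
    out = np.empty((n - k + 1, m - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)
    return out

# Adjacency matrices of the path P3 under two vertex labelings
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
P = np.eye(3)[[2, 0, 1]]                   # permutation matrix
A2 = P @ A1 @ P.T                          # same graph, relabeled
kernel = np.ones((2, 2))
# Same graph, different feature maps: the CNN view is label-dependent
print(np.allclose(conv2d_valid(A1, kernel), conv2d_valid(A2, kernel)))  # → False
```

A graph invariant such as γ(G) is identical for `A1` and `A2`, so a CNN must learn to ignore labeling effects that a GNN rules out by construction.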

Against this backdrop, we present a comparative study of CNNs and GNNs for predicting the domination number. By situating CNNs as a vision-inspired baseline and GNNs as a graph-native model, we quantify how architectural inductive biases translate into predictive accuracy, runtime efficiency, and robustness across graph sizes. To our knowledge, this is the first systematic study of domination number prediction within this comparative framework, and our findings highlight the promise of GNNs as scalable surrogates for hard graph invariants.

The study of domination in graphs has a long history. Foundational papers established key structural results Cockayne and Hedetniemi [1977], Allan and Laskar [1978], and comprehensive surveys Haynes et al. [1998] underline its centrality in graph theory and applications. From a computational standpoint, exact algorithms remain limited due to NP-completeness Garey and Johnson [1979] and hardness of approximation Feige [1998], though specialized heuristics Parekh [1991] and exponential-time algorithms Fomin et al. [2009] have been developed. These barriers motivate surrogate approaches capable of scaling beyond traditional methods.

Graph Neural Networks have emerged as a powerful paradigm for learning on graph-structured data. Early formulations of graph-based neural computation Scarselli et al. [2009], Li et al. [2016] evolved into modern message-passing architectures Kipf and Welling [2017], Hamilton et al. [2017], Gilmer et al. [2017], with the Graph Isomorphism Network (GIN) providing near-optimal expressive power Xu et al. [2019], Morris et al. [2019]. GNNs have been successfully applied across domains, from chemistry to combinatorial optimization Khalil et al. [2017], and have shown promise in approximating graph invariants Davila [2024]. Tools like GraphCalc Davila [2025] now provide systematic infrastructure for these investigations.

Convolutional Neural Networks, while originally developed for vision, have been adapted to graphs by operating on adjacency matrices. This line of work exploits convolutional locality Tixier et al. [2018] and, in certain settings, has rivaled or outperformed graph-based models Boronina et al. [2023].

This content is AI-processed based on ArXiv data.
