Challenging GNN Performance with Ultra-Fast Vector-Symbolic Graph Learning

Reading time: 6 minutes

📝 Abstract

Graph classification is a fundamental task in domains ranging from molecular property prediction to materials design. While graph neural networks (GNNs) achieve strong performance by learning expressive representations via message passing, they incur high computational costs, limiting their scalability and deployment on resource-constrained devices. Hyperdimensional Computing (HDC), also known as Vector Symbolic Architectures (VSA), offers a lightweight, brain-inspired alternative, yet existing HDC-based graph methods typically struggle to match the predictive performance of GNNs. In this work, we propose VS-Graph, a vector-symbolic graph learning framework that narrows the gap between the efficiency of HDC and the expressive power of message passing. VS-Graph introduces a Spike Diffusion mechanism for topology-driven node identification and an Associative Message Passing scheme for multi-hop neighborhood aggregation entirely within the high-dimensional vector space. Without gradient-based optimization or backpropagation, our method achieves competitive accuracy with modern GNNs, outperforming the prior HDC baseline by 4-5% on standard benchmarks such as MUTAG and DD. It also matches or exceeds the performance of the GNN baselines on several datasets while accelerating training by a factor of up to 450×. Furthermore, VS-Graph maintains high accuracy even with the hypervector dimensionality reduced to D = 128, demonstrating robustness under aggressive dimension compression and paving the way for ultra-efficient execution on edge and neuromorphic hardware.

📄 Content

VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing

Hamed Poursiami∗, Shay Snyder∗, Guojing Cong†, Thomas Potok†, Maryam Parsa∗‡
∗Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
†Oak Ridge National Laboratory, Oak Ridge, TN, USA
‡Corresponding Author
Index Terms—Hyperdimensional Computing, Graph Neural Networks, Vector Symbolic Architectures, Graph Classification

I. INTRODUCTION

Graph-structured data arise in a wide range of scientific and industrial domains, including molecular chemistry [1], social and information networks [2], [3], recommendation systems [4], and materials science [5]. In these settings, entities and their relationships are naturally modeled as nodes and edges in a graph that resides in a non-Euclidean domain, where the notion of locality is determined by connectivity rather than spatial proximity. Graph learning is commonly organized into three canonical task families: node-level tasks, such as semi-supervised (transductive) classification in citation networks, which infer labels for individual nodes given only small labeled subsets [6]; edge-level tasks, including link prediction in recommender systems [7], which estimate the presence or type of relations between pairs of nodes; and graph-level tasks, which map an entire graph (e.g., a molecule or crystalline material) to a label that reflects its functional or physical properties [8]. Across these settings, a central challenge is to develop representations that capture expressive structural information while enabling scalable and computationally efficient learning [9].

Graph Neural Networks (GNNs) have become a widely used framework for such problems [10]. They are typically formulated as message-passing architectures in which nodes iteratively aggregate and transform information from their neighbors through a sequence of layers. Building on the Message Passing Neural Network (MPNN) [11] formulation, subsequent models such as Graph Convolutional Networks (GCN) [12], Graph Attention Networks (GAT) [13], Graph Isomorphism Networks (GIN) [14], and Principal Neighborhood Aggregation (PNA) [15] instantiate this paradigm through different choices of aggregation functions, normalization schemes, and update mechanisms.
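As a minimal illustration of the message-passing paradigm described above, the sketch below runs one round of mean-neighborhood aggregation on a toy 4-node graph. The graph, one-hot features, and random weight matrix are illustrative assumptions, not any specific model from the paper.

```python
import numpy as np

# Toy undirected 4-node graph (hypothetical example, not from the paper)
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

x = np.eye(4)  # one-hot node features

# Aggregate: mean of neighbor features (row-normalized adjacency times X)
deg = adj.sum(axis=1, keepdims=True)
messages = adj @ x / deg

# Update: combine self and aggregated features, apply a learned transform + ReLU
w = np.random.default_rng(0).normal(size=(4, 4))
h = np.maximum(0.0, (x + messages) @ w)

print(h.shape)  # (4, 4): one updated embedding per node
```

Stacking several such aggregate/update rounds is what lets each node's embedding reflect its multi-hop neighborhood, at the cost of the backpropagation and memory overheads the next paragraph discusses.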
While these models achieve strong empirical performance, they face practical challenges related to scalability and efficiency [16]. Training these GNNs requires backpropagation over sparse and irregular graph structures, which leads to high memory consumption and inefficient use of parallel hardware [16], [17]. In addition, deeper architectures are prone to over-smoothing and incur higher computational cost that limits deployment on resource-constrained devices [18], [19].

Hyperdimensional Computing (HDC), also known as Vector Symbolic Architectures (VSA) [20], is a brain-inspired computational framework that encodes information using high-dimensional distributed hypervectors [21]. HDC operates via a set of algebraic operations, including binding, bundling, and permutation, which enable the construction of complex symbolic structures from atomic representations and support robust, noise-tolerant, and highly parallel computation [22]. The efficiency and representational flexibility of HDC have motivated recent efforts to extend it to graph learning. At the node level
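The three VSA operations named above can be sketched in a few lines. This is a generic bipolar-hypervector illustration under common conventions (element-wise multiplication for binding, sign-of-sum majority for bundling, cyclic shift for permutation); it is not the specific encoding used by VS-Graph.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # hypervector dimensionality

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D; random pairs are quasi-orthogonal
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Binding: element-wise product; the result is dissimilar to both inputs
    return a * b

def bundle(*hvs):
    # Bundling: element-wise majority vote; the result stays similar to each input
    return np.sign(np.sum(hvs, axis=0))

def permute(a, shift=1):
    # Permutation: cyclic shift; encodes order or position information
    return np.roll(a, shift)

def sim(a, b):
    # Cosine similarity between hypervectors
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

x, y = random_hv(), random_hv()
print(round(sim(x, y), 3))             # close to 0: quasi-orthogonal
print(round(sim(bundle(x, y), x), 3))  # well above 0 (~0.7): bundle preserves similarity
print(round(sim(bind(x, y), x), 3))    # close to 0: binding destroys similarity
```

These properties (similarity-preserving bundling, similarity-destroying binding) are what let HDC compose symbolic structures while keeping comparison a cheap dot product.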

This content is AI-processed based on ArXiv data.
