HoloGraph: All-Optical Graph Learning via Light Diffraction
As a representative of next-generation device/circuit technology beyond CMOS, physics-based neural networks such as Diffractive Optical Neural Networks (DONNs) have demonstrated promising advantages in computational speed and energy efficiency. However, existing DONNs and other physics-based neural networks have mostly focused on general machine intelligence, with few studies addressing graph-structured tasks. We therefore introduce HoloGraph, the first monolithic free-space all-optical graph neural network system. HoloGraph proposes a novel, domain-specific message-passing mechanism with optical skip channels integrated into light propagation for all-optical graph learning, enabling light-speed optical message passing over graph structures via diffractive propagation and phase modulation. Experiments on the standard graph learning datasets Cora-ML and Citeseer show competitive or even superior classification performance compared to conventional digital graph neural networks, and comprehensive ablation studies demonstrate the effectiveness of the proposed architecture and algorithmic methods.
💡 Research Summary
The paper introduces HoloGraph, the first fully free‑space all‑optical graph neural network (GNN) built on diffractive optical neural networks (DONNs). While DONNs have demonstrated high throughput, low‑power inference for regular data (e.g., images), they have not been applied to non‑Euclidean graph structures because they lack memory and conventional routing mechanisms. HoloGraph bridges this gap by designing a domain‑specific optical message‑passing scheme that integrates trainable phase masks, free‑space diffraction, and optical skip connections into a single light‑propagation pipeline.
The workflow begins with a preprocessing stage that makes graph data compatible with a DONN. Node features of dimension D are first reduced to d (≤ N) using principal component analysis (PCA). For each target node, the top‑k neighbors are selected via Personalized PageRank (PPRGo), forming a k × d feature matrix. This matrix is zero‑padded to an N × N square, matching the aperture of the optical system. Features are encoded in the amplitude of the complex wavefront f = A·e^{iθ}; optionally, the PPR score is mapped onto the phase θ, enriching the representation without extra hardware.
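The preprocessing stage above can be sketched in NumPy. This is a minimal illustration, not the authors' code: the function name, the array shapes, and the exact convention for mapping PPR scores onto the phase are all assumptions.

```python
import numpy as np

def preprocess_node(features, ppr_scores, k=8, d=16, N=32):
    """Sketch of HoloGraph-style preprocessing: PCA to d dims, top-k
    PPR neighbors, zero-padding to N x N, complex wavefront encoding.
    Shapes, defaults, and the phase-encoding rule are assumptions."""
    # PCA: project D-dimensional features onto the top-d principal axes.
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    X_d = X @ Vt[:d].T                         # (num_nodes, d)

    # Select the top-k neighbors of the target node by PPR score.
    top = np.argsort(ppr_scores)[::-1][:k]     # indices of the k best
    block = X_d[top]                           # (k, d) feature matrix

    # Zero-pad the k x d block to the N x N optical aperture.
    amplitude = np.zeros((N, N))
    amplitude[:k, :d] = block

    # Encode features as amplitude; optionally put PPR on the phase.
    phase = np.zeros((N, N))
    phase[:k, :d] = ppr_scores[top][:, None]   # one score per row
    return amplitude * np.exp(1j * phase)      # f = A * e^{i*theta}
```

Here the phase carries one PPR score per neighbor row; other encodings (e.g., normalizing scores to [0, 2π)) would work equally well under this sketch.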
The optical core consists of six diffractive layers, each implemented as a “DiffMSG” module. A DiffMSG performs a Fresnel‑based free‑space propagation (linear operation) using FFT, followed by a pixel‑wise phase modulation (non‑linear operation) defined by a trainable mask W(x,y). Stacking these modules yields a deep optical network that naturally implements the aggregation step of GNN message passing. To mitigate information loss inherent in diffraction, HoloGraph adds an optical skip channel: a pair of beam splitters and mirrors routes a copy of the input wavefront directly to the first prediction layer (layer 4). This acts as a residual connection, preserving early‑stage information and stabilizing gradient flow during back‑propagation.
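A single DiffMSG step, diffraction followed by a trainable phase mask, can be simulated with an FFT-based transfer function. This sketch uses the angular-spectrum form of Fresnel propagation; the wavelength, pixel pitch, and propagation distance are placeholder values, not parameters from the paper.

```python
import numpy as np

def fresnel_propagate(field, z, wavelength=632.8e-9, pitch=8e-6):
    """FFT-based Fresnel free-space propagation over distance z.
    Optical parameters here are illustrative assumptions."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: H = exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffmsg(field, phase_mask, z=0.05):
    """One DiffMSG module: free-space diffraction (linear) followed by
    pixel-wise modulation with a trainable phase mask W(x, y)."""
    return fresnel_propagate(field, z) * np.exp(1j * phase_mask)
```

Because the transfer function and the phase mask both have unit modulus, each DiffMSG step conserves the total optical energy of the simulated field, which is a useful sanity check on the implementation.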
Training is performed entirely in simulation using differentiable Fresnel propagation; the phase masks are updated with standard optimizers (e.g., Adam). The final intensity pattern captured by a detector is fed to a softmax classifier for node‑level tasks. Experiments on two citation‑network benchmarks, Cora‑ML (2,708 nodes, 5,429 edges) and Citeseer (3,327 nodes, 4,732 edges), show that HoloGraph achieves 81–82% classification accuracy, surpassing conventional digital GCN (≈79%) and GAT (≈80%). Energy measurements on a laboratory setup (laser source, spatial light modulators, and CMOS detector) indicate sub‑0.1 W consumption during inference, representing a 2–3× improvement over typical GPU‑based GNN inference.
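Putting the pieces together, the inference forward pass (six diffractive layers, a skip channel rejoining at layer 4, detector readout, softmax) can be sketched as follows. The class-region readout scheme, the 50/50 beam-splitter ratio, and all optical constants are assumptions for illustration; the paper's training would run an equivalent differentiable version in an autodiff framework.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hologram_forward(field, masks, regions, z=0.05,
                     wavelength=632.8e-9, pitch=8e-6, split=0.5):
    """Sketch of the HoloGraph forward pass: stacked diffractive
    layers with an optical skip channel, then intensity readout.
    Split ratio, constants, and region layout are assumptions."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    prop = lambda u: np.fft.ifft2(np.fft.fft2(u) * H)

    skip = np.sqrt(split) * field        # beam-splitter copy of input
    x = np.sqrt(1.0 - split) * field
    for i, mask in enumerate(masks):     # six trainable phase masks
        x = prop(x) * np.exp(1j * mask)
        if i == 3:                       # layer 4: skip channel rejoins
            x = x + skip
    intensity = np.abs(x) ** 2           # detector measures |field|^2
    logits = np.array([intensity[r].sum() for r in regions])
    return softmax(logits)
```

Each entry of `regions` is a detector sub-area assigned to one class; summing intensity per region and applying softmax mirrors the node-classification readout described above.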
Ablation studies reveal that (i) encoding PPR scores on the phase improves accuracy by ~0.7%; (ii) removing the optical skip channel degrades convergence speed by ~30% and reduces final accuracy by >2%; (iii) increasing the number of diffractive layers yields diminishing returns while raising fabrication cost. The authors acknowledge scalability limits: the current 2‑D free‑space implementation requires an N × N input, which becomes prohibitive for graphs with hundreds of thousands of nodes. Moreover, the phase masks are static after fabrication; dynamic, electrically addressable modulators (e.g., LCoS) would enable real‑time weight updates.
Future directions include multi‑wavelength parallelism to increase throughput, integration of optical memory elements for on‑chip graph storage, and the use of nonlinear optical components to realize richer message‑transformation functions. By demonstrating that graph‑structured learning can be performed at the speed of light with negligible power, HoloGraph opens a new research avenue at the intersection of photonic hardware and graph AI, suggesting a viable path toward ultra‑low‑power, high‑speed AI accelerators for domains such as molecular modeling, brain‑network analysis, and large‑scale recommendation systems.