Gate-level boolean evolutionary geometric attention neural networks

Reading time: 5 minutes

📝 Original Info

  • Title: Gate-level boolean evolutionary geometric attention neural networks
  • ArXiv ID: 2511.17550
  • Date: 2025-11-11
  • Authors: Xianshuai Shi, Jianfeng Zhu, Leibo Liu

📝 Abstract

This paper presents a gate-level Boolean evolutionary geometric attention neural network that models images as Boolean fields governed by logic gates. Each pixel is a Boolean variable (0 or 1) embedded on a two-dimensional geometric manifold (for example, a discrete toroidal lattice), which defines adjacency and information propagation among pixels. The network updates image states through a Boolean reaction-diffusion mechanism: pixels receive Boolean diffusion from neighboring pixels (diffusion process) and perform local logic updates via trainable gate-level logic kernels (reaction process), forming a reaction-diffusion logic network. A Boolean self-attention mechanism is introduced, using XNOR-based Boolean Query-Key (Q-K) attention to modulate neighborhood diffusion pathways and realize logic attention. We also propose Boolean Rotary Position Embedding (RoPE), which encodes relative distances by parity-bit flipping to simulate Boolean "phase" offsets. The overall structure resembles a Transformer but operates entirely in the Boolean domain. Trainable parameters include Q-K pattern bits and gate-level kernel configurations. Because outputs are discrete, continuous relaxation methods (such as sigmoid approximation or soft-logic operators) ensure differentiable training. Theoretical analysis shows that the network achieves universal expressivity, interpretability, and hardware efficiency, capable of reproducing convolutional and attention mechanisms. Applications include high-speed image processing, interpretable artificial intelligence, and digital hardware acceleration, offering promising future research directions.

💡 Deep Analysis

Figure 1

📄 Full Content

In recent years, with the growing demand for efficient and interpretable artificial intelligence, research on integrating logical reasoning into neural networks has received widespread attention. Traditional deep neural networks rely on large-scale real-valued matrix computations, which, despite their excellent performance, often have opaque inference processes and high computational costs. To improve efficiency, researchers have explored low-precision and binary neural networks, with the extreme case being Boolean logic networks: constructing neural networks using logic gates from digital circuits (AND, XOR, and so on). Logic gate networks cannot directly use gradient descent optimization due to their discreteness, but recent work has made them differentiable through continuous relaxation, thus enabling trainable deep logic networks. Petersen et al. [1] proposed differentiable logic gate networks that learn the logic gate type distribution at each "neuron" and discretize them into specific logic gates after training, achieving high-speed and interpretable models capable of processing over one million MNIST images per second on CPUs. Such logic networks have clear structures, make it easy to extract human-readable rules, and their discrete implementation makes inference extremely efficient.

However, existing logic gate networks are mostly used for fully connected or tree structures, focusing on representing global classification decisions, and have not fully exploited the local relationships and geometric information in spatially structured data such as images. Meanwhile, the development of graph neural networks has shown that local neighborhood propagation and attention mechanisms on image pixels or other graph-structured data can effectively extract features. For example, Graph Attention Networks [2] allow graph nodes to "attend" to features of neighboring nodes, dynamically adjusting adjacency edge weights through self-attention. Inspired by this, we ask whether logic gate networks can be introduced on image grids while combining attention mechanisms, to achieve models with both logical interpretability and flexible neighborhood modeling capabilities.
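The continuous relaxation that makes logic gate networks trainable can be illustrated with the standard product/probabilistic soft-logic operators. This is a common choice in the differentiable logic gate network literature; the exact parameterization used in the paper may differ:

```python
# Continuous relaxations of logic gates: for inputs in {0, 1} these
# reproduce the exact truth tables, but they remain differentiable for
# inputs in (0, 1), which is what allows gradient-based training.
def soft_and(a, b):
    return a * b

def soft_or(a, b):
    return a + b - a * b

def soft_xor(a, b):
    return a + b - 2 * a * b

def soft_not(a):
    return 1.0 - a

def soft_xnor(a, b):
    return 1.0 - soft_xor(a, b)
```

After training, each relaxed gate is discretized back to a hard logic gate, so inference uses only Boolean operations.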

Furthermore, at a more fundamental computational paradigm level, cellular automata and reaction-diffusion systems demonstrate the possibility of producing complex global behaviors from local rules. For instance, Boolean cellular automata such as Conway’s “Game of Life” iteratively update through neighborhood Boolean rules, achieving complex spatiotemporal patterns and being proven Turing complete. Such systems can be viewed as the spatial evolution of gate-level logic. Some research has even utilized chemical reaction-diffusion media to simulate cellular automata and logic circuits, implementing Boolean computation through diffusion of neighbor states and local chemical reactions [3]. This inspires us that combining diffusion (neighborhood communication) and reaction (logic computation) mechanisms may enable the design of new parallel computing architectures.
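The diffusion-plus-reaction pattern described above can be made concrete with the classic example the paragraph cites. Here is a minimal NumPy sketch of one Game-of-Life step on a toroidal grid (the same wrap-around topology as the toroidal lattice used later in the paper); this illustrates the paradigm only and is not the paper's update rule:

```python
import numpy as np

def life_step(grid):
    """One Game-of-Life update on a toroidal (wrap-around) Boolean grid.

    Diffusion: each cell gathers its 8 neighbors' states via np.roll.
    Reaction: a fixed Boolean rule maps (state, neighbor count) to the
    new state (birth on 3 neighbors, survival on 2 or 3).
    """
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)
```

The proposed network keeps the same two-phase structure but replaces the fixed rule with trainable gate-level logic kernels.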

Based on the above background, this paper proposes geometric attention neural networks based on gate-level Boolean evolution. We treat image pixels as logic units evolving on a two-dimensional discrete manifold (such as a toroidal grid). Our core ideas include: (1) defining reaction-diffusion logic networks on image grids, with local updates determined by trainable gate-level Boolean logic kernels; (2) introducing Boolean self-attention mechanisms, using XNOR matching of Boolean query-key vectors to selectively modulate neighbor information propagation, achieving attention control in the logic domain; (3) employing Boolean rotary position encoding to introduce parity-flipped displacement encoding for different neighbor distances, simulating “phase” effects of relative positions in the Boolean domain; (4) constructing a Transformer-like multi-head attention hierarchical structure, but replacing arithmetic operations with logic gates, ensuring output states are always 0/1 Boolean values, and making the network trainable through continuous approximation.
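Idea (2) can be sketched as follows, assuming each pixel carries a short Boolean query/key bit pattern and that attention opens a diffusion pathway when enough bits agree. The function names and the thresholding rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def xnor_attention_gate(q, k, thresh):
    """Boolean Q-K matching: XNOR(a, b) = 1 iff a == b, so counting
    agreeing bits scores the match. The gate opens (returns 1) only
    when at least `thresh` bits agree. All values are in {0, 1}."""
    matches = int(np.sum(np.asarray(q) == np.asarray(k)))
    return int(matches >= thresh)

def gated_diffusion(neighbor_state, q, k, thresh):
    """A neighbor's Boolean state diffuses to the pixel only if the
    attention gate is open (AND-masking keeps everything Boolean)."""
    return neighbor_state & xnor_attention_gate(q, k, thresh)
```

In effect, each pixel adaptively selects which neighbors may influence its next state, which is the Boolean-domain analogue of attention weights.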

The contributions of this paper are summarized as follows:

• Gate-level Boolean geometric representation: We propose representing images as Boolean variable fields on discrete manifolds, combined with topological adjacency definitions, laying the foundation for applying logic gate networks on spatial structures.

• Reaction-diffusion logic kernels: We design trainable gate-level logic kernels that implement Boolean reaction-diffusion update mechanisms between pixels and neighborhoods, connecting the ideas of cellular automata with trainable models.

• Boolean self-attention mechanism: We invent XNOR similarity-based Boolean Q-K attention modules that regulate neighborhood information transmission strength in the Boolean domain, equivalent to adaptively selecting important neighbors for each pixel.

• Boolean RoPE position encoding: We introduce a Boolean rotary position encoding that represents relative neighbor distances via parity-bit flipping, simulating the "phase" offsets of relative positions in the Boolean domain.
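One simple way parity-bit flipping could encode relative distance is to XOR-flip a bit pattern when the distance is odd and leave it unchanged when even, giving a two-valued "phase". This sketch is an assumption for illustration (the `boolean_rope` name and the odd/even rule are not taken from the paper); its appeal is that, like real-valued RoPE, offsets compose additively:

```python
import numpy as np

def boolean_rope(bits, distance):
    """Hypothetical Boolean RoPE: XOR-flip the whole pattern when the
    relative distance is odd, keep it when even. Flipping by d1 then d2
    equals flipping by d1 + d2, mirroring RoPE's relative-rotation
    property in the Boolean domain."""
    return np.asarray(bits) ^ (distance % 2)
```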


Reference

This content is AI-processed based on open access ArXiv data.
