Differentiable Logic Synthesis: Spectral Coefficient Selection via Sinkhorn-Constrained Composition
Learning precise Boolean logic via gradient descent remains challenging: neural networks typically converge to “fuzzy” approximations that degrade under quantization. We introduce Hierarchical Spectral Composition, a differentiable architecture that selects spectral coefficients from a frozen Boolean Fourier basis and composes them via Sinkhorn-constrained routing with column-sign modulation. Our approach draws on recent insights from Manifold-Constrained Hyper-Connections (mHC), which demonstrated that projecting routing matrices onto the Birkhoff polytope preserves identity mappings and stabilizes large-scale training. We adapt this framework to logic synthesis, adding column-sign modulation to enable Boolean negation – a capability absent in standard doubly stochastic routing. We validate our approach across three phases of increasing complexity: (1) For n=2 (16 Boolean operations over a 4-dim basis), gradient descent achieves 100% accuracy with zero routing drift and zero-loss quantization to ternary masks. (2) For n=3 (10 three-variable operations), gradient descent achieves 76% accuracy, but exhaustive enumeration over 3^8 = 6561 configurations proves that optimal ternary masks exist for all operations (100% accuracy, 39% sparsity). (3) For n=4 (10 four-variable operations over a 16-dim basis), spectral synthesis – combining exact Walsh-Hadamard coefficients, ternary quantization, and MCMC refinement with parallel tempering – achieves 100% accuracy on all operations. This progression establishes (a) that ternary polynomial threshold representations exist for all tested functions, and (b) that finding them requires methods beyond pure gradient descent as dimensionality grows. All operations enable single-cycle combinational logic inference at 10,959 MOps/s on GPU, demonstrating viability for hardware-efficient neuro-symbolic logic synthesis.
💡 Research Summary
The paper introduces a novel differentiable architecture called Hierarchical Spectral Composition (HSC) for exact Boolean logic synthesis. Instead of learning arbitrary neural representations, HSC fixes a Boolean Fourier (Walsh‑Hadamard) basis and learns to select a sparse ternary weight vector w ∈ {−1,0,+1} that exactly reproduces a target Boolean function via sign(wᵀ φ(x)). Each Fourier coefficient f̂(S) has a clear semantic meaning (correlation with the parity χ_S), making the model intrinsically interpretable.
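The ternary-threshold idea above can be made concrete with a toy sketch. Under the standard ±1 encoding, AND has the Fourier expansion −½ + ½x₁ + ½x₂ + ½x₁x₂, so the ternary mask w = (−1, +1, +1, +1) over the basis {1, x₁, x₂, x₁x₂} reproduces AND exactly through sign(wᵀφ(x)). The `walsh_basis` helper and the subset ordering below are our own illustrative choices, not the paper's code:

```python
import itertools
import numpy as np

def walsh_basis(x):
    """phi(x) = [chi_S(x) for all subsets S], with x in {-1,+1}^n."""
    n = len(x)
    feats = []
    for bits in itertools.product([0, 1], repeat=n):
        chi = 1
        for xi, b in zip(x, bits):
            if b:
                chi *= xi  # chi_S(x) = product of x_i for i in S
        feats.append(chi)
    return np.array(feats)

# Ternary mask for AND over the 4-dim basis (ordering: 1, x2, x1, x1*x2).
# AND is symmetric in x1, x2, so this ordering detail does not matter here.
w = np.array([-1, 1, 1, 1])

for x in itertools.product([-1, 1], repeat=2):
    pred = int(np.sign(w @ walsh_basis(np.array(x))))
    target = 1 if (x[0] == 1 and x[1] == 1) else -1
    assert pred == target
print("ternary mask reproduces AND on all 4 inputs")
```

Because the score wᵀφ(x) is never zero here, the sign is well defined on every input, which is exactly the "zero-loss quantization" property the paper reports for n = 2.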
To compose these spectral primitives, the authors adapt the Manifold‑Constrained Hyper‑Connections (mHC) framework. Routing matrices are projected onto the Birkhoff polytope (the set of doubly‑stochastic matrices) using Sinkhorn‑Knopp iterations, guaranteeing norm preservation and stability across deep compositions. Pure doubly‑stochastic routing cannot express Boolean negation, so the authors augment each column with a learned sign s ∈ {−1,+1}. The final routing is R = P·diag(s), where P is doubly‑stochastic and s provides the missing 1‑bit polarity, enabling NAND, NOR, XNOR, etc.
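A minimal sketch of the routing construction described above: Sinkhorn‑Knopp alternately normalizes rows and columns to project a positive matrix toward the Birkhoff polytope, after which a sign vector s is applied per column to form R = P·diag(s). In the paper both the logits and s are learned; here they are fixed illustrative values:

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Project exp(logits) toward the Birkhoff polytope (doubly stochastic
    matrices) by alternating row/column normalization (Sinkhorn-Knopp)."""
    P = np.exp(logits)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
s = np.array([1, -1, 1, -1])  # column signs (learned in the paper; fixed here)
R = P * s                     # equivalent to P @ np.diag(s)

print("rows ~1:", np.allclose(P.sum(axis=1), 1, atol=1e-8))
print("cols ~1:", np.allclose(P.sum(axis=0), 1, atol=1e-8))
```

Each normalization is differentiable, so gradients flow through the projection during training; the sign vector supplies the 1-bit polarity that a purely doubly-stochastic P cannot express.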
The experimental evaluation proceeds in five phases of increasing complexity:
- n = 2 (4‑dim basis) – All 16 possible Boolean functions are learned with zero routing drift and zero‑loss quantization to ternary masks, achieving 100 % accuracy across ten random seeds.
- n = 3 (8‑dim basis) – On ten three‑variable functions (including majority and parity), gradient descent reaches only 76 % accuracy, but exhaustive enumeration of the 3⁸ = 6561 ternary configurations confirms that exact masks exist for every target (100 % accuracy, 39 % average sparsity in non‑zero weights).
- n = 4 (16‑dim basis) – Simple gradient descent fails to find exact solutions; the authors therefore compute exact Walsh‑Hadamard coefficients, quantize them to ternary, and refine the masks using Parallel Tempering MCMC. This pipeline yields 100 % accuracy on ten four‑variable functions with ≈36 % sparsity.
- Scalability – A fast Walsh‑Hadamard transform (FWHT) implementation processes up to n = 28 (268 M coefficients) at 1.64 B coeffs / s. Hierarchical composition is demonstrated on a 64‑bit adder, showing that the method can build larger combinational circuits.
- Oracle learning (n = 16) – Five coefficient‑estimation methods (Monte Carlo, Goldreich‑Levin, spectral filtering, etc.) are compared on parity, majority, and comparator families. Incorporating known symmetries (degree bounds, invariances) boosts majority‑function accuracy from 72 % to 86 % (p < 0.001). Birkhoff projection improves routing stability but does not denoise coefficient estimates.
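The n = 3 exhaustive check is small enough to sketch directly: enumerate all 3⁸ = 6561 ternary masks over the 8‑dim Walsh basis and test which ones realize a target exactly, here 3‑input majority. The basis construction and variable ordering are our own illustrative choices:

```python
import itertools
import numpy as np

# 8-dim Walsh basis over all subsets of {x1, x2, x3}
inputs = list(itertools.product([-1, 1], repeat=3))      # all 8 inputs
subsets = list(itertools.product([0, 1], repeat=3))      # all 8 subsets S
Phi = np.array([[np.prod([x[i] for i in range(3) if S[i]]) for S in subsets]
                for x in inputs])                        # shape (8, 8)

maj = np.array([1 if sum(x) > 0 else -1 for x in inputs])  # 3-input majority

exact = []
for w in itertools.product([-1, 0, 1], repeat=8):        # 3^8 = 6561 masks
    scores = Phi @ np.array(w)
    # exact realization: sign never ambiguous, matches target everywhere
    if np.all(scores != 0) and np.all(np.sign(scores) == maj):
        exact.append(w)

print(f"{len(exact)} of 6561 ternary masks realize MAJ3 exactly")
```

One such mask is +1 on the singletons {x₁}, {x₂}, {x₃} and −1 on {x₁x₂x₃} (giving sign(x₁+x₂+x₃ − x₁x₂x₃)), which matches the paper's point that exact ternary representations exist even where gradient descent fails to find them.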
Hardware efficiency is a central claim. After training, the ternary masks and column‑sign routing compile to pure combinational logic with no floating‑point arithmetic, no multipliers, and minimal memory. On an NVIDIA GPU the authors achieve 10,959 MOps / s for single‑cycle inference, approaching hand‑coded RTL performance.
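To see why no multipliers are needed: with weights in {−1,0,+1} and features in {−1,+1}, wᵀφ(x) reduces to additions and subtractions, and each parity feature χ_S(x) is a popcount of a bitwise AND. The bit encoding (a set bit means xᵢ = −1) and the helper names below are our own conventions, not the paper's:

```python
def parity_feature(x_bits, mask):
    """chi_S(x) = (-1)^popcount(x_bits & mask), where a set bit in x_bits
    encodes x_i = -1 and mask selects the subset S."""
    return -1 if bin(x_bits & mask).count("1") % 2 else 1

def ternary_infer(x_bits, plus_masks, minus_masks):
    """sign(w^T phi(x)) with w in {-1,0,+1}: XOR/popcount and adds only."""
    acc = sum(parity_feature(x_bits, m) for m in plus_masks)
    acc -= sum(parity_feature(x_bits, m) for m in minus_masks)
    return 1 if acc > 0 else -1

# Ternary mask for 2-input AND: +1 on {x1}, {x2}, {x1,x2}; -1 on the constant.
plus_masks, minus_masks = [0b01, 0b10, 0b11], [0b00]

for x_bits in range(4):                     # bit i set <=> x_{i+1} = -1
    both_true = (x_bits == 0b00)            # AND is true only when no bit set
    assert ternary_infer(x_bits, plus_masks, minus_masks) == (1 if both_true else -1)
print("multiplier-free ternary inference matches AND on all inputs")
```

Every operation here maps directly to gates (AND, XOR-reduce, small adders, a comparator), which is why the trained masks compile to single-cycle combinational logic.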
Key contributions are:
- Adapting Birkhoff‑polytope projection to Boolean logic synthesis, ensuring stable deep composition.
- Introducing column‑sign modulation to overcome the expressivity gap of doubly‑stochastic routing.
- Demonstrating that many Boolean functions admit exact ternary polynomial‑threshold representations, discoverable via spectral selection rather than gate enumeration.
- Showing that pure gradient descent scales only to n = 2; higher dimensions require spectral synthesis plus MCMC refinement.
- Providing empirical evidence that domain knowledge (symmetry constraints) yields higher accuracy than generic black‑box learning.
- Delivering a hardware‑friendly pipeline that quantizes to ternary masks without any loss of logical correctness.
Overall, the work bridges differentiable learning and exact digital logic design, offering a mathematically grounded, interpretable, and hardware‑ready approach to neuro‑symbolic synthesis. It opens avenues for automated large‑scale circuit generation, ASIC/FPGA deployment, and explainable AI systems that require provably correct Boolean reasoning.