MemoryFormer: Minimize Transformer Computation by Removing Fully-Connected Layers

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv paper.

In order to reduce the computational complexity of large language models, great efforts have been made to improve the efficiency of transformer models, such as linear attention and FlashAttention. However, model size and the corresponding computational complexity are constantly scaled up in pursuit of higher performance. In this work, we present MemoryFormer, a novel transformer architecture which significantly reduces the computational complexity (FLOPs) from a new perspective. We eliminate nearly all the computations of the transformer model except for the computation required by the multi-head attention operation. This is made possible by utilizing an alternative method for feature transformation to replace the linear projection of fully-connected layers. Specifically, we first construct a group of in-memory lookup tables that store a large number of discrete vectors to replace the weight matrix used in linear projection. We then use a hash algorithm to retrieve a correlated subset of vectors dynamically based on the input embedding. The retrieved vectors are combined to form the output embedding, which provides an estimation of the result of the matrix multiplication in a fully-connected layer. Compared to matrix multiplication, retrieving data blocks from memory is a much cheaper operation which requires little computation. We train MemoryFormer from scratch and conduct extensive experiments on various benchmarks to demonstrate the effectiveness of the proposed model.


💡 Research Summary

MemoryFormer introduces a fundamentally new way to cut the computational cost of transformer models by replacing the fully‑connected (FC) layers with a learnable memory‑lookup mechanism. The authors observe that, for most practical sequence lengths, the FC layers in the feed‑forward network dominate the FLOPs budget, while the multi‑head attention (MHA) accounts for a relatively small portion unless the sequence is extremely long (s > 6·d). To address this imbalance, MemoryFormer discards all FC layers and substitutes them with “Memory Layers” that perform a hashing‑and‑retrieval operation instead of matrix multiplication.
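The s > 6·d crossover can be checked with a back-of-the-envelope FLOP count. This is a sketch under common assumptions (FFN expansion factor 4, counting 2·m·n·k FLOPs per matrix multiply, ignoring normalization and softmax): per layer, the FC layers cost about 12·s·d² FLOPs (4·s·d² for the Q/K/V/output projections plus 8·s·d² for the FFN), while the attention core (QKᵀ and the weighted sum) costs about 2·s²·d, so attention dominates only once 2·s²·d > 12·s·d², i.e. s > 6·d.

```python
def fc_flops(s: int, d: int) -> int:
    """FLOPs of the fully-connected layers per transformer layer.

    Q/K/V + output projections: 4 * (2*s*d*d) / 2 counted as 4*s*d^2 pairs,
    here we count multiply-adds once: 4*s*d^2 + FFN (d -> 4d -> d) = 8*s*d^2.
    """
    return 12 * s * d * d

def attn_flops(s: int, d: int) -> int:
    """FLOPs of the attention core: scores Q @ K^T plus weighted sum A @ V."""
    return 2 * s * s * d

d = 64
crossover = 6 * d  # sequence length where attention starts to dominate
```

At s = 6·d the two terms are exactly equal under this counting, which is where the summary's threshold comes from.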

Each input token embedding x ∈ ℝ^d is split into K non‑overlapping sub‑vectors z_k of size τ = d/K. For each sub‑vector a simple sign‑based locality‑sensitive hash (LSH) is computed: the sign of each dimension yields a binary code, which is interpreted as an integer index h(z_k). A dedicated hash table T_k with 2^τ rows of learnable output vectors is maintained, and the row indexed by h(z_k) is retrieved. During training, a weight p(z_k) – derived from a softmax over scaled similarities between z_k and the possible τ‑bit codes – makes the retrieval differentiable. The final output of the Memory Layer is the sum over k of p(z_k)·T_k[h(z_k)].
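The inference-time retrieval path described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, the random tables, and the hard (non-differentiable) lookup are all assumptions for clarity; training would additionally apply the softmax weights p(z_k).

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 4            # embedding size and number of sub-vectors (illustrative)
tau = d // K           # bits per sub-vector, here 2
d_out = 8              # output dimension (assumed equal to d here)

# K hash tables, each with 2^tau rows of learnable output vectors
tables = [rng.standard_normal((2 ** tau, d_out)) for _ in range(K)]

def sign_hash(z: np.ndarray) -> int:
    """Sign-based LSH: the sign bits of z, read as an integer index h(z)."""
    bits = (z > 0).astype(int)
    return int(bits @ (2 ** np.arange(len(bits))))

def memory_layer(x: np.ndarray) -> np.ndarray:
    """Replace y = x @ W with K table lookups summed together (inference path)."""
    y = np.zeros(d_out)
    for k in range(K):
        z = x[k * tau:(k + 1) * tau]   # k-th sub-vector of the input embedding
        y += tables[k][sign_hash(z)]   # retrieve the indexed row and accumulate
    return y

y = memory_layer(rng.standard_normal(d))
```

No matrix multiplication with a d×d weight appears anywhere: the cost per sub-vector is one τ-bit hash and one memory read, which is the source of the FLOPs savings.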

