Reservoir Computing inspired Matrix Multiplication-free Language Model


📝 Original Info

  • Title: Reservoir Computing inspired Matrix Multiplication-free Language Model
  • ArXiv ID: 2512.23145
  • Date: 2025-12-29
  • Authors: Takumi Shiratsuchi, Yuichiro Tanaka, Hakaru Tamukoh

📝 Abstract

Large language models (LLMs) have achieved state-of-the-art performance in natural language processing; however, their high computational cost remains a major bottleneck. In this study, we target computational efficiency by focusing on a matrix-multiplication-free language model (MatMul-free LM) and further reducing the training cost through an architecture inspired by reservoir computing. Specifically, we partially fix and share the weights of selected layers in the MatMul-free LM and insert reservoir layers to obtain rich dynamic representations without additional training overhead. Additionally, several operations are combined to reduce memory accesses. Experimental results show that the proposed architecture reduces the number of parameters by up to 19%, training time by 9.9%, and inference time by 8.0%, while maintaining comparable performance to the baseline model.
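The key idea borrowed from reservoir computing is that a layer with fixed, randomly initialized weights can still produce rich dynamic representations, so inserting such a layer adds capacity without adding trainable parameters. The sketch below illustrates only this general reservoir principle in plain NumPy; it is not the paper's architecture, and unlike the MatMul-free LM it still uses ordinary matrix products. All shapes and scaling constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_res = 8, 32  # assumed toy dimensions, not from the paper

# Reservoir weights: sampled once and never updated during training.
W_in = rng.normal(scale=0.5, size=(d_res, d_in))
W_res = rng.normal(size=(d_res, d_res))
# Rescale the recurrent matrix so its spectral radius is below 1,
# the usual echo-state condition for stable reservoir dynamics.
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def reservoir_step(h, x):
    """One fixed recurrent update; only downstream readout layers
    would be trained in a reservoir-style model."""
    return np.tanh(W_in @ x + W_res @ h)

# Drive the reservoir with a short random input sequence.
h = np.zeros(d_res)
for _ in range(5):
    h = reservoir_step(h, rng.normal(size=d_in))
```

Because `W_in` and `W_res` are frozen, none of these weights contribute to gradient computation or parameter count, which is the source of the training-cost savings the abstract describes.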

📄 Full Content

...(Full text omitted here for length; see the original site for the complete article.)
