Custom Algorithm-based Fault Tolerance for Attention Layers in Transformers
Reading time: 1 minute
📝 Original Info
- Title: Custom Algorithm-based Fault Tolerance for Attention Layers in Transformers
- ArXiv ID: 2507.16676
- Date: 2025-07-22
- Authors:
📝 Abstract
Transformers and large language models (LLMs), powered by the attention mechanism, have transformed numerous AI applications, driving the need for specialized hardware accelerators. A major challenge in these accelerators is efficiently detecting errors caused by random hardware faults. Traditional algorithm-based fault tolerance (ABFT) techniques verify individual matrix multiplications but fall short of covering the full attention mechanism, particularly the intermediate softmax normalization. This work proposes Flash-ABFT, a novel method that computes an online checksum across the entire three-matrix product of an attention layer's query, key, and value matrices, including the softmax operation, with a single check. This approach significantly reduces overhead by eliminating redundant checks while maintaining high fault-detection accuracy. Experimental results demonstrate that Flash-ABFT incurs only 5.3% hardware area overhead and less than 1.9% energy overhead, making it a cost-effective and robust solution for error detection in attention accelerators.
💡 Deep Analysis
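To make the checksum idea concrete, below is a minimal NumPy sketch of classical ABFT column-checksum encoding applied to the attention product O = softmax(QKᵀ)V. This is an illustration of the underlying invariant, not the authors' hardware scheme; the function name, shapes, and tolerance are invented for the example.

```python
import numpy as np

def attention_with_checksum(Q, K, V, tol=1e-6):
    """Sketch: ABFT-style check fused into attention O = softmax(Q K^T) V."""
    # Encode a checksum column: each entry is the sum of that row of V,
    # so the extended product carries its own row-sum prediction.
    V_ext = np.concatenate([V, V.sum(axis=1, keepdims=True)], axis=1)

    S = Q @ K.T
    P = np.exp(S - S.max(axis=1, keepdims=True))   # numerically stable softmax
    P /= P.sum(axis=1, keepdims=True)

    O_ext = P @ V_ext                  # only one extra column of work
    O, predicted = O_ext[:, :-1], O_ext[:, -1]

    # By linearity, each row-sum of O must equal the predicted checksum;
    # a mismatch flags a fault in the P @ V computation.
    fault = not np.allclose(O.sum(axis=1), predicted, atol=tol)
    return O, fault

# Fault-free run: no error is flagged.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
O, fault = attention_with_checksum(Q, K, V)
print("fault detected:", fault)
```

Note that this sketch only covers the P·V stage; per the abstract, Flash-ABFT's single online check spans the full QKᵀ, softmax, and value-product chain (the softmax stage can be checked, for instance, using the fact that its rows sum to one).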
Reference
- arXiv: https://arxiv.org/abs/2507.16676
This content is AI-processed based on open-access arXiv data.