A Quantitative Evaluation of Approximate Softmax Functions for Deep Neural Networks
📝 Original Info
- Title: A Quantitative Evaluation of Approximate Softmax Functions for Deep Neural Networks
- ArXiv ID: 2501.13379
- Date: 2025-01-23
- Authors: Not available (the provided source does not include author information)
📝 Abstract
The softmax function is a widely used activation function in the output layers of neural networks, responsible for converting raw scores into class probabilities while introducing essential non-linearity. Implementing softmax efficiently poses challenges on low-end FPGAs due to limited hardware resources and the computational complexity of exponential and division operations. This work evaluates approximate computing techniques for softmax acceleration based on Taylor-series expansions and Look-Up-Table (LUT) interpolation. These approximations aim to reduce execution time and resource consumption while maintaining acceptable levels of numerical precision. Our findings show that quadratic interpolation with LUTs yields the lowest numerical error, whereas Taylor-based approximations offer significantly better execution time and resource efficiency due to their computational simplicity. When applied to real-world deep learning models such as LeNet-5 and MobileNet v2, the first- and second-order Taylor approximations offered favorable trade-offs between accuracy and resource savings, with at most 0.2% accuracy degradation and up to 14% resource reduction compared to exact implementations. These results highlight the effectiveness of approximate softmax designs on resource-constrained FPGAs and lay the groundwork for their integration into larger models, including large language models (LLMs).
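To make the two approximation families in the abstract concrete, the sketch below models them in NumPy as floating-point references: a softmax whose exponential is replaced by a truncated Taylor polynomial, and one whose exponential is read from a LUT with quadratic (3-point Lagrange) interpolation. This is only an illustrative sketch, not the authors' FPGA design; the function names, grid size, clamping range, and polynomial order are assumptions chosen for clarity.

```python
import numpy as np


def softmax_exact(x):
    """Reference softmax with max-subtraction for numerical stability."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / np.sum(e)


def exp_taylor(z, order=2):
    """Truncated Taylor expansion of exp(z) about 0: 1 + z + z^2/2 + ...
    Only accurate for small |z|, so inputs are range-reduced first."""
    result = np.ones_like(z)
    term = np.ones_like(z)
    for k in range(1, order + 1):
        term = term * z / k
        result = result + term
    return result


def softmax_taylor(x, order=2):
    """Softmax with exp replaced by a low-order Taylor polynomial (assumed variant)."""
    z = x - np.max(x)                              # z <= 0 keeps the polynomial argument bounded
    e = np.maximum(exp_taylor(z, order), 0.0)      # first-order polynomial can go negative; clamp it
    return e / np.sum(e)


def build_exp_lut(z_min=-8.0, n=64):
    """Precompute exp() samples on a uniform grid over [z_min, 0] (sizes are assumptions)."""
    grid = np.linspace(z_min, 0.0, n)
    return grid, np.exp(grid)


def exp_lut_quadratic(z, grid, table):
    """Approximate exp(z) by quadratic (3-point Lagrange) interpolation on the LUT."""
    z = np.clip(z, grid[0], grid[-1])
    step = grid[1] - grid[0]
    i = np.clip(((z - grid[0]) / step).astype(int), 1, len(grid) - 2)
    x0, x1, x2 = grid[i - 1], grid[i], grid[i + 1]
    y0, y1, y2 = table[i - 1], table[i], table[i + 1]
    l0 = (z - x1) * (z - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (z - x0) * (z - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (z - x0) * (z - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2


def softmax_lut(x, grid, table):
    """Softmax with exp taken from the interpolated LUT (assumed variant)."""
    z = x - np.max(x)
    e = np.maximum(exp_lut_quadratic(z, grid, table), 0.0)
    return e / np.sum(e)
```

Comparing either variant against `softmax_exact` on random logits (e.g. `np.abs(softmax_taylor(x) - softmax_exact(x)).max()`) gives a quick feel for the accuracy/complexity trade-off the paper quantifies in hardware: the quadratic LUT tracks the exact values more closely, while the Taylor polynomial needs only a few multiplications and additions.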