LogicSparse: Enabling Engine-Free Unstructured Sparsity for Quantised Deep-learning Accelerators

Reading time: 1 minute

📝 Original Info

  • Title: LogicSparse: Enabling Engine-Free Unstructured Sparsity for Quantised Deep-learning Accelerators
  • ArXiv ID: 2511.03079
  • Date: 2025-11-05
  • Authors: Author information was not provided in the source data.

📝 Abstract

FPGAs have been shown to be a promising platform for deploying Quantised Neural Networks (QNNs) with high-speed, low-latency, and energy-efficient inference. However, the complexity of modern deep-learning models limits their performance on resource-constrained edge devices. While quantisation and pruning alleviate these challenges, unstructured sparsity remains underexploited due to its irregular memory access patterns. This work introduces a framework that embeds unstructured sparsity into dataflow accelerators, eliminating the need for dedicated sparse engines while preserving parallelism. A hardware-aware pruning strategy is also introduced to further improve efficiency and streamline the design flow. On LeNet-5, the framework attains 51.6× compression and a 1.23× throughput improvement using only 5.12% of LUTs, effectively exploiting unstructured sparsity for QNN acceleration.
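
The paper's hardware-aware pruning criterion is not detailed in this summary. As a point of reference, the sketch below illustrates plain magnitude-based unstructured pruning, a common baseline for producing the kind of irregular sparsity the framework exploits; the function name `magnitude_prune` and the 90% sparsity target are illustrative assumptions, not the paper's API or settings.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero. This is
    unstructured pruning: zeros may land anywhere, with no
    block or channel pattern imposed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: prune a small float weight matrix to ~90% sparsity.
# In a QNN flow, the surviving weights would then be quantised.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 16)).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"achieved sparsity: {np.mean(w_sparse == 0):.2%}")
```

Because the zeros fall in no regular pattern, a dense accelerator gains nothing from them; the paper's contribution is embedding this irregularity directly into the dataflow hardware so no separate sparse engine is needed.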

Reference

This content was generated by AI from open-access ArXiv data.
