Exponential discretization of weights of neural network connections in pre-trained neural networks
To reduce random access memory (RAM) requirements and to increase the speed of recognition algorithms, we consider the problem of weight discretization for trained neural networks. We show that exponential discretization is preferable to linear discretization, since it achieves the same accuracy with 1 to 2 fewer bits. The quality of the VGG-16 network is already satisfactory (top5 accuracy 69%) with 3-bit exponential discretization. The ResNet50 network shows top5 accuracy of 84% at 4 bits. Other networks perform fairly well at 5 bits (top5 accuracies of Xception, Inception-v3, and MobileNet-v2 were 87%, 90%, and 77%, respectively). With fewer bits, accuracy decreases rapidly.
💡 Research Summary
The paper addresses the problem of reducing the memory footprint and increasing the inference speed of pre-trained deep neural networks by quantizing their weights without any additional retraining. The authors compare two partitioning schemes for the weight interval, linear and exponential, and find that the exponential scheme reaches the same accuracy with 1 to 2 fewer bits.
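As a rough illustration of the comparison, the sketch below quantizes a weight array under both schemes. This is a minimal sketch, not the paper's exact procedure: it assumes linear levels spaced uniformly over [min(w), max(w)] and exponential levels placed at signed powers of two scaled by the largest weight magnitude; the function names and the power-of-two spacing are illustrative choices.

```python
import numpy as np

def linear_quantize(w, bits):
    """Round each weight to the nearest of 2**bits levels
    spaced uniformly over [min(w), max(w)]."""
    levels = 2 ** bits
    w_min, w_max = float(w.min()), float(w.max())
    step = (w_max - w_min) / (levels - 1)
    return w_min + np.round((w - w_min) / step) * step

def exponential_quantize(w, bits):
    """Round each weight to a signed power-of-two level: one bit
    stores the sign, the remaining bits store an exponent k, and
    the reconstructed level is sign * w_max * 2**(-k).
    (Illustrative level placement, not the paper's exact grid.)"""
    n_exp = 2 ** (bits - 1)          # exponent codes available per sign
    sign = np.sign(w)
    mag = np.abs(w)
    w_max = float(mag.max())
    # nearest exponent in log2 space, clipped to the available codes
    exps = np.round(np.log2(np.maximum(mag, 1e-12) / w_max))
    exps = np.clip(exps, -(n_exp - 1), 0)
    return sign * w_max * 2.0 ** exps

# Compare mean reconstruction error on a bell-shaped weight sample.
w = np.random.randn(100_000).astype(np.float32)
for bits in (3, 4, 5):
    err_lin = np.mean(np.abs(w - linear_quantize(w, bits)))
    err_exp = np.mean(np.abs(w - exponential_quantize(w, bits)))
    print(f"{bits} bits: linear {err_lin:.4f}, exponential {err_exp:.4f}")
```

For bell-shaped weight distributions, the exponential grid concentrates levels near zero, where most weights lie, which is consistent with the paper's observation that it tolerates 1 to 2 fewer bits than a uniform grid.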