Performance Guaranteed Network Acceleration via High-Order Residual Quantization
Researchers from Shanghai Jiaotong University have proposed a new method for compressing neural networks. Their method can reduce a network to roughly 1/30 of its original size while maintaining accuracy.
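To illustrate the idea named in the title, here is a minimal sketch of high-order residual quantization: a real-valued tensor is approximated by a sum of scaled binary tensors, where each term binarizes the residual left by the previous one. This is an illustrative reconstruction under my own assumptions, not the authors' code; the function names `binary_approx` and `horq` are hypothetical.

```python
# Sketch of high-order residual quantization (order-2 case), assuming the
# standard binary approximation x ~= beta * sign(x) with beta = mean(|x|).
import numpy as np

def binary_approx(x):
    """First-order binary approximation: x ~= beta * sign(x)."""
    beta = np.mean(np.abs(x))  # scalar scale for the sign basis
    b = np.sign(x)
    return beta, b

def horq(x, order=2):
    """Approximate x as sum_i beta_i * B_i by recursively quantizing residuals."""
    terms = []
    residual = x
    for _ in range(order):
        beta, b = binary_approx(residual)
        terms.append((beta, b))
        residual = residual - beta * b  # quantize what the previous terms missed
    return terms

if __name__ == "__main__":
    x = np.random.randn(4, 4).astype(np.float32)
    terms = horq(x, order=2)
    approx = sum(beta * b for beta, b in terms)
    print("relative reconstruction error:", np.linalg.norm(x - approx) / np.linalg.norm(x))
```

Storing only the binary tensors plus a few scalars per term is what allows the large reduction in model size, since each weight needs one bit per term instead of a full-precision float.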