Research on Model Compression Based on Convolutional Neural Network
Deep learning methods have achieved remarkable success across a wide variety of applications. Perhaps the most popular instance is the convolutional neural network (CNN), which stacks numerous convolutional layers to process image inputs and produce the desired output. A typical CNN contains an enormous number of parameters and requires a huge number of floating-point operations for inference, so filtering out redundant parameters becomes increasingly necessary. In this paper, we study how to compress CNN architectures through sparsity-inducing regularized optimization. We validate the method on the benchmark VGG16 architecture and the MNIST dataset.
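The core idea of sparsity-inducing regularization can be sketched with the proximal (soft-thresholding) operator of the L1 penalty, which drives small weights exactly to zero so they can be pruned. This is a minimal NumPy illustration, not the paper's implementation; the weight vector and threshold are hypothetical stand-ins for a convolutional layer's parameters.

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of the L1 penalty lam * ||w||_1:
    # shrinks every weight toward zero and sets weights with
    # magnitude below lam exactly to zero (inducing sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

rng = np.random.default_rng(0)
# Hypothetical stand-in for one convolutional layer's flattened weights.
w = rng.normal(scale=0.05, size=1000)
w_sparse = soft_threshold(w, lam=0.05)

# Fraction of weights pruned to exactly zero by the operator.
sparsity = float(np.mean(w_sparse == 0.0))
print(f"fraction of weights pruned to zero: {sparsity:.2f}")
```

In a compression pipeline, a step like this would typically alternate with gradient updates on the training loss; the surviving nonzero weights define the pruned architecture.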
Keywords: convolutional neural network, deep learning, model compression, VGG16, MNIST