Abstract
Convolutional neural networks (CNNs) have become the method of choice for many computer vision applications, including image classification and action recognition. However, they are computationally and memory intensive, and are therefore challenging to deploy on systems with limited resources, with the exception of a few recent networks designed specifically for mobile and embedded vision applications, such as MobileNet and NASNet-Mobile. In this paper, we present a novel, efficient algorithm for compressing CNN models to reduce their computational cost and run-time memory footprint. We propose a strategy that measures the redundancy of parameters based on their relationships, using covariance and correlation criteria, and then prunes the less important ones. Our method applies directly to CNNs, to both convolutional and fully connected layers, and requires no specialized software or hardware accelerators. It significantly reduces model size (by up to 70%), and thus computing cost, without performance loss on several CNN models (AlexNet, ResNet, and LeNet) for image classification on different datasets (MNIST, CIFAR10, and ImageNet), as well as for human action recognition (on the UCF101 dataset).
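To make the correlation-based redundancy idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): filters are flattened, pairwise Pearson correlations are computed, and a filter is marked redundant when it is strongly correlated with one already kept. The function name, greedy selection order, and threshold value are illustrative assumptions.

```python
import numpy as np

def prune_redundant_filters(weights, threshold=0.9):
    """Greedily select filters to keep, dropping those highly
    correlated with an already-kept filter.

    weights: array of shape (num_filters, ...); each filter is flattened.
    Returns the list of indices of filters to keep.

    NOTE: illustrative sketch only; the paper's method may differ.
    """
    flat = weights.reshape(weights.shape[0], -1)
    corr = np.corrcoef(flat)        # pairwise correlation matrix
    np.fill_diagonal(corr, 0.0)     # ignore self-correlation
    keep, pruned = [], set()
    for i in range(flat.shape[0]):
        if i in pruned:
            continue
        keep.append(i)
        # Mark later filters that are near-duplicates of filter i.
        for j in range(i + 1, flat.shape[0]):
            if abs(corr[i, j]) > threshold:
                pruned.add(j)
    return keep

# Toy example: 4 filters where two are (scaled or negated) copies.
rng = np.random.default_rng(0)
f = rng.standard_normal((2, 3, 3))
filters = np.stack([f[0], f[1], f[0] * 1.01, -f[1]])
print(prune_redundant_filters(filters))  # → [0, 1]
```

The absolute value of the correlation is used so that negated copies of a filter (which carry essentially the same information up to sign) are also treated as redundant.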