14.5.8.6.7 Neural Net Compression

Chapter Contents
CNN. Compression. Efficient Implementation.
See also Neural Net Pruning.
See also Knowledge Distillation.
See also Neural Net Quantization.

Wang, W.[Wei], Zhu, L.Q.[Li-Qiang],
Structured feature sparsity training for convolutional neural network compression,
JVCIR(71), 2020, pp. 102867.
Elsevier DOI 2009
Convolutional neural network, CNN compression, Structured sparsity, Pruning criterion BibRef

Kaplan, C.[Cagri], Bulbul, A.[Abdullah],
Goal driven network pruning for object recognition,
PR(110), 2021, pp. 107468.
Elsevier DOI 2011
Deep learning, Computer vision, Network pruning, Network compressing, Top-down attention, Perceptual visioning BibRef

Yao, K.X.[Kai-Xuan], Cao, F.L.[Fei-Long], Leung, Y.[Yee], Liang, J.Y.[Ji-Ye],
Deep neural network compression through interpretability-based filter pruning,
PR(119), 2021, pp. 108056.
Elsevier DOI 2106
Deep neural network (DNN), Convolutional neural network (CNN), Visualization, Compression BibRef

Gowdra, N.[Nidhi], Sinha, R.[Roopak], MacDonell, S.[Stephen], Yan, W.Q.[Wei Qi],
Mitigating severe over-parameterization in deep convolutional neural networks through forced feature abstraction and compression with an entropy-based heuristic,
PR(119), 2021, pp. 108057.
Elsevier DOI 2106
Convolutional neural networks (CNNs), Depth redundancy, Entropy, Feature compression, EBCLE BibRef

Zhang, H.[Huijie], An, L.[Li], Chu, V.W.[Vena W.], Stow, D.A.[Douglas A.], Liu, X.B.[Xiao-Bai], Ding, Q.H.[Qing-Hua],
Learning Adjustable Reduced Downsampling Network for Small Object Detection in Urban Environments,
RS(13), No. 18, 2021, pp. xx-yy.
DOI Link 2109
BibRef

Aghli, N.[Nima], Ribeiro, E.[Eraldo],
Combining Weight Pruning and Knowledge Distillation for CNN Compression,
EVW21(3185-3192)
IEEE DOI 2109
Image coding, Neurons, Estimation, Graphics processing units, Computer architecture, Real-time systems, Convolutional neural networks BibRef

Ran, J.[Jie], Lin, R.[Rui], So, H.K.H.[Hayden K.H.], Chesi, G.[Graziano], Wong, N.[Ngai],
Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks,
ICPR21(9866-9873)
IEEE DOI 2105
Training, Tensors, Neural networks, Redundancy, Games, Elasticity, Minimization BibRef

Shah, M.A.[Muhammad A.], Olivier, R.[Raphael], Raj, B.[Bhiksha],
Exploiting Non-Linear Redundancy for Neural Model Compression,
ICPR21(9928-9935)
IEEE DOI 2105
Training, Image coding, Computational modeling, Neurons, Transfer learning, Redundancy, Nonlinear filters BibRef

Bui, K.[Kevin], Park, F.[Fredrick], Zhang, S.[Shuai], Qi, Y.[Yingyong], Xin, J.[Jack],
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More,
ISVC20(I:39-53).
Springer DOI 2103
BibRef

Wang, H.T.[Hao-Tao], Gui, S.P.[Shu-Peng], Yang, H.C.[Hai-Chuan], Liu, J.[Ji], Wang, Z.Y.[Zhang-Yang],
GAN Slimming: All-in-One GAN Compression by a Unified Optimization Framework,
ECCV20(IV:54-73).
Springer DOI 2011
BibRef

Guo, J.Y.[Jin-Yang], Ouyang, W.L.[Wan-Li], Xu, D.[Dong],
Multi-Dimensional Pruning: A Unified Framework for Model Compression,
CVPR20(1505-1514)
IEEE DOI 2008
Tensile stress, Redundancy, Logic gates, Convolution, Solid modeling BibRef

Heo, B.[Byeongho], Kim, J.[Jeesoo], Yun, S.[Sangdoo], Park, H.[Hyojin], Kwak, N.[Nojun], Choi, J.Y.[Jin Young],
A Comprehensive Overhaul of Feature Distillation,
ICCV19(1921-1930)
IEEE DOI 2004
feature extraction, image classification, image segmentation, object detection, distillation loss, Artificial intelligence BibRef

Yu, J.H.[Jia-Hui], Huang, T.S.[Thomas S.],
Universally Slimmable Networks and Improved Training Techniques,
ICCV19(1803-1811)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image classification, image resolution, learning (artificial intelligence), mobile computing, Testing BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Quantization.


Last update: Oct 16, 2021 at 11:54:21