14.5.8.6.7 Neural Net Compression

Chapter Contents
CNN. Compression. Efficient Implementation.
See also Neural Net Pruning.

Tung, F.[Frederick], Mori, G.[Greg],
Deep Neural Network Compression by In-Parallel Pruning-Quantization,
PAMI(42), No. 3, March 2020, pp. 568-579.
IEEE DOI 2002
BibRef
Earlier:
CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization,
CVPR18(7873-7882)
IEEE DOI 1812
Quantization (signal), Image coding, Neural networks, Visualization, Training, Convolution, Network architecture, Bayesian optimization, Task analysis, Optimization BibRef

Wang, W.[Wei], Zhu, L.Q.[Li-Qiang],
Structured feature sparsity training for convolutional neural network compression,
JVCIR(71), 2020, pp. 102867.
Elsevier DOI 2009
Convolutional neural network, CNN compression, Structured sparsity, Pruning criterion BibRef

Kaplan, C.[Cagri], Bulbul, A.[Abdullah],
Goal driven network pruning for object recognition,
PR(110), 2021, pp. 107468.
Elsevier DOI 2011
Deep learning, Computer vision, Network pruning, Network compressing, Top-down attention, Perceptual visioning BibRef


Bui, K.[Kevin], Park, F.[Fredrick], Zhang, S.[Shuai], Qi, Y.[Yingyong], Xin, J.[Jack],
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More,
ISVC20(I:39-53).
Springer DOI 2103
BibRef

de Vieilleville, F., Lagrange, A., Ruiloba, R., May, S.,
Towards Distillation of Deep Neural Networks for Satellite On-board Image Segmentation,
ISPRS20(B2:1553-1559).
DOI Link 2012
BibRef

Wang, X.B.[Xiao-Bo], Fu, T.Y.[Tian-Yu], Liao, S.C.[Sheng-Cai], Wang, S.[Shuo], Lei, Z.[Zhen], Mei, T.[Tao],
Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition,
ECCV20(XXIV:325-342).
Springer DOI 2012
BibRef

Dbouk, H.[Hassan], Sanghvi, H.[Hetul], Mehendale, M.[Mahesh], Shanbhag, N.[Naresh],
DBQ: A Differentiable Branch Quantizer for Lightweight Deep Neural Networks,
ECCV20(XXVII:90-106).
Springer DOI 2011
BibRef

Guan, Y.S.[Yu-Shuo], Zhao, P.Y.[Peng-Yu], Wang, B.X.[Bing-Xuan], Zhang, Y.X.[Yuan-Xing], Yao, C.[Cong], Bian, K.G.[Kai-Gui], Tang, J.[Jian],
Differentiable Feature Aggregation Search for Knowledge Distillation,
ECCV20(XVII:469-484).
Springer DOI 2011
BibRef

do Nascimento, M.G.[Marcelo Gennari], Costain, T.W.[Theo W.], Prisacariu, V.A.[Victor Adrian],
Finding Non-uniform Quantization Schemes Using Multi-task Gaussian Processes,
ECCV20(XVII:383-398).
Springer DOI 2011
BibRef

Seddik, M.E.A., Essafi, H., Benzine, A., Tamaazousti, M.,
Lightweight Neural Networks From PCA & LDA Based Distilled Dense Neural Networks,
ICIP20(3060-3064)
IEEE DOI 2011
Neural networks, Principal component analysis, Computational modeling, Training, Machine learning, Lightweight Networks BibRef

Suzuki, S., Takagi, M., Takeda, S., Tanida, R., Kimata, H.,
Deep Feature Compression With Spatio-Temporal Arranging for Collaborative Intelligence,
ICIP20(3099-3103)
IEEE DOI 2011
Image coding, Correlation, Cloud computing, Quantization (signal), Image edge detection, Collaborative intelligence, spatio-temporal arranging BibRef

Neumann, D., Sattler, F., Kirchhoffer, H., Wiedemann, S., Müller, K., Schwarz, H., Wiegand, T., Marpe, D., Samek, W.,
DeepCABAC: Plug & Play Compression of Neural Network Weights and Weight Updates,
ICIP20(21-25)
IEEE DOI 2011
Artificial neural networks, Quantization (signal), Image coding, Training, Servers, Compression algorithms, Neural Networks, Distributed Training BibRef

Haase, P., Schwarz, H., Kirchhoffer, H., Wiedemann, S., Marinc, T., Marban, A., Müller, K., Samek, W., Marpe, D., Wiegand, T.,
Dependent Scalar Quantization For Neural Network Compression,
ICIP20(36-40)
IEEE DOI 2011
Quantization (signal), Indexes, Neural networks, Context modeling, Entropy coding, Image reconstruction, neural network compression BibRef

Wang, H.T.[Hao-Tao], Gui, S.P.[Shu-Peng], Yang, H.C.[Hai-Chuan], Liu, J.[Ji], Wang, Z.Y.[Zhang-Yang],
GAN Slimming: All-in-one GAN Compression by a Unified Optimization Framework,
ECCV20(IV:54-73).
Springer DOI 2011
BibRef

Kwon, S.J., Lee, D., Kim, B., Kapoor, P., Park, B., Wei, G.,
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization,
CVPR20(1906-1915)
IEEE DOI 2008
Sparse matrices, Decoding, Quantization (signal), Viterbi algorithm, Bandwidth, Encryption BibRef

Guo, J., Ouyang, W., Xu, D.,
Multi-Dimensional Pruning: A Unified Framework for Model Compression,
CVPR20(1505-1514)
IEEE DOI 2008
Tensile stress, Redundancy, Logic gates, Convolution, Solid modeling BibRef

Gong, R., Liu, X., Jiang, S., Li, T., Hu, P., Lin, J., Yu, F., Yan, J.,
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks,
ICCV19(4851-4860)
IEEE DOI 2004
backpropagation, convolutional neural nets, data compression, image coding, learning (artificial intelligence) BibRef

Heo, B.[Byeongho], Kim, J.[Jeesoo], Yun, S.[Sangdoo], Park, H.[Hyojin], Kwak, N.[Nojun], Choi, J.Y.[Jin Young],
A Comprehensive Overhaul of Feature Distillation,
ICCV19(1921-1930)
IEEE DOI 2004
feature extraction, image classification, image segmentation, object detection, distillation loss, Artificial intelligence BibRef

Mullapudi, R.T., Chen, S., Zhang, K., Ramanan, D., Fatahalian, K.,
Online Model Distillation for Efficient Video Inference,
ICCV19(3572-3581)
IEEE DOI 2004
convolutional neural nets, image segmentation, inference mechanisms, learning (artificial intelligence), Cameras BibRef

Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., Ma, K.,
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation,
ICCV19(3712-3721)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), knowledge distillation, student neural networks, Computational modeling BibRef

Cho, J.H., Hariharan, B.,
On the Efficacy of Knowledge Distillation,
ICCV19(4793-4801)
IEEE DOI 2004
learning (artificial intelligence), neural nets, Probability distribution, teacher architectures, knowledge distillation performance. BibRef

Peng, B., Jin, X., Li, D., Zhou, S., Wu, Y., Liu, J., Zhang, Z., Liu, Y.,
Correlation Congruence for Knowledge Distillation,
ICCV19(5006-5015)
IEEE DOI 2004
correlation methods, face recognition, image classification, learning (artificial intelligence), instance-level information, Knowledge transfer BibRef

Yu, J., Huang, T.,
Universally Slimmable Networks and Improved Training Techniques,
ICCV19(1803-1811)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image classification, image resolution, learning (artificial intelligence), mobile computing, Testing BibRef

Tung, F.[Fred], Mori, G.[Greg],
Similarity-Preserving Knowledge Distillation,
ICCV19(1365-1374)
IEEE DOI 2004
learning (artificial intelligence), neural nets, semantic networks, Task analysis BibRef

Zhang, M.Y.[Man-Yuan], Song, G.L.[Guang-Lu], Zhou, H.[Hang], Liu, Y.[Yu],
Discriminability Distillation in Group Representation Learning,
ECCV20(X:1-19).
Springer DOI 2011
BibRef

Jin, X.[Xiao], Peng, B.Y.[Bao-Yun], Wu, Y.C.[Yi-Chao], Liu, Y.[Yu], Liu, J.H.[Jia-Heng], Liang, D.[Ding], Yan, J.J.[Jun-Jie], Hu, X.L.[Xiao-Lin],
Knowledge Distillation via Route Constrained Optimization,
ICCV19(1345-1354)
IEEE DOI 2004
face recognition, image classification, learning (artificial intelligence), neural nets, optimisation, Neural networks BibRef

Choukroun, Y., Kravchik, E., Yang, F., Kisilev, P.,
Low-bit Quantization of Neural Networks for Efficient Inference,
CEFRL19(3009-3018)
IEEE DOI 2004
inference mechanisms, learning (artificial intelligence), mean square error methods, neural nets, quantisation (signal), MMSE BibRef

Hu, Y., Li, J., Long, X., Hu, S., Zhu, J., Wang, X., Gu, Q.,
Cluster Regularized Quantization for Deep Networks Compression,
ICIP19(914-918)
IEEE DOI 1910
deep neural networks, object classification, model compression, quantization BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Forgetting, Explanation, Interpretation, Understanding of Convolutional Neural Networks.


Last update: Mar 3, 2021 at 15:01:44