14.5.8.6.5 Neural Net Pruning

Chapter Contents
CNN. Pruning. Efficient Implementation.

Chen, S.[Shi], Zhao, Q.[Qi],
Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations,
PAMI(41), No. 12, December 2019, pp. 3048-3056.
IEEE DOI 1911
Computational modeling, Computational efficiency, Feature extraction, Task analysis, Convolutional neural networks BibRef

Singh, P.[Pravendra], Kadi, V.S.R.[Vinay Sameer Raja], Namboodiri, V.P.[Vinay P.],
FALF ConvNets: Fatuous auxiliary loss based filter-pruning for efficient deep CNNs,
IVC(93), 2020, pp. 103857.
Elsevier DOI 2001
Filter pruning, Model compression, Convolutional neural network, Image recognition, Deep learning BibRef

Singh, P.[Pravendra], Kadi, V.S.R.[Vinay Sameer Raja], Verma, N., Namboodiri, V.P.[Vinay P.],
Stability Based Filter Pruning for Accelerating Deep CNNs,
WACV19(1166-1174)
IEEE DOI 1904
computer networks, graphics processing units, learning (artificial intelligence), neural nets, Libraries BibRef

Mittal, D.[Deepak], Bhardwaj, S.[Shweta], Khapra, M.M.[Mitesh M.], Ravindran, B.[Balaraman],
Studying the plasticity in deep convolutional neural networks using random pruning,
MVA(30), No. 2, March 2019, pp. 203-216.
Springer DOI 1904
BibRef
Earlier:
Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks,
WACV18(848-857)
IEEE DOI 1806
image classification, learning (artificial intelligence), neural nets, object detection, RCNN model, class specific pruning, Tuning BibRef

Bhardwaj, S.[Shweta], Srinivasan, M.[Mukundhan], Khapra, M.M.[Mitesh M.],
Efficient Video Classification Using Fewer Frames,
CVPR19(354-363).
IEEE DOI 2002
BibRef

Yang, W.Z.[Wen-Zhu], Jin, L.L.[Li-Lei], Wang, S.[Sile], Cui, Z.C.[Zhen-Chao], Chen, X.Y.[Xiang-Yang], Chen, L.P.[Li-Ping],
Thinning of convolutional neural network with mixed pruning,
IET-IPR(13), No. 5, 18 April 2019, pp. 779-784.
DOI Link 1904
BibRef

Luo, J.H.[Jian-Hao], Zhang, H.[Hao], Zhou, H.Y.[Hong-Yu], Xie, C.W.[Chen-Wei], Wu, J.X.[Jian-Xin], Lin, W.Y.[Wei-Yao],
ThiNet: Pruning CNN Filters for a Thinner Net,
PAMI(41), No. 10, October 2019, pp. 2525-2538.
IEEE DOI 1909
Convolution, Computational modeling, Task analysis, Acceleration, Training, Neural networks, Image coding, model compression BibRef

Tung, F.[Frederick], Mori, G.[Greg],
Deep Neural Network Compression by In-Parallel Pruning-Quantization,
PAMI(42), No. 3, March 2020, pp. 568-579.
IEEE DOI 2002
BibRef
Earlier:
CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization,
CVPR18(7873-7882)
IEEE DOI 1812
Quantization (signal), Image coding, Neural networks, Visualization, Training, Convolution, Network architecture, Bayesian optimization, Task analysis, Optimization BibRef

Ide, H.[Hidenori], Kobayashi, T.[Takumi], Watanabe, K.[Kenji], Kurita, T.[Takio],
Robust pruning for efficient CNNs,
PRL(135), 2020, pp. 90-98.
Elsevier DOI 2006
CNN, Pruning, Empirical classification loss, Taylor expansion BibRef

Kang, H.,
Accelerator-Aware Pruning for Convolutional Neural Networks,
CirSysVideo(30), No. 7, July 2020, pp. 2093-2103.
IEEE DOI 2007
Accelerator architectures, Field programmable gate arrays, Convolutional codes, Acceleration, Convolutional neural networks, neural network accelerator BibRef

Tsai, C.Y.[Chun-Ya], Gao, D.Q.[De-Qin], Ruan, S.J.[Shanq-Jang],
An effective hybrid pruning architecture of dynamic convolution for surveillance videos,
JVCIR(70), 2020, pp. 102798.
Elsevier DOI 2007
Optimize CNN, Dynamic convolution, Pruning, Smart surveillance application BibRef

Wang, Z., Hong, W., Tan, Y., Yuan, J.,
Pruning 3D Filters For Accelerating 3D ConvNets,
MultMed(22), No. 8, August 2020, pp. 2126-2137.
IEEE DOI 2007
Acceleration, Feature extraction, Task analysis, Maximum Abs. of Filters (MAF) BibRef

Luo, J.H.[Jian-Hao], Wu, J.X.[Jian-Xin],
AutoPruner: An end-to-end trainable filter pruning method for efficient deep model inference,
PR(107), 2020, pp. 107461.
Elsevier DOI 2008
Neural network pruning, Model compression, CNN acceleration BibRef

Wang, W.[Wei], Zhu, L.Q.[Li-Qiang],
Structured feature sparsity training for convolutional neural network compression,
JVCIR(71), 2020, pp. 102867.
Elsevier DOI 2009
Convolutional neural network, CNN compression, Structured sparsity, Pruning criterion BibRef


Wang, Y.[Ying], Lu, Y.D.[Ya-Dong], Blankevoort, T.[Tijmen],
Differentiable Joint Pruning and Quantization for Hardware Efficiency,
ECCV20(XXIX: 259-277).
Springer DOI 2010
BibRef

Cai, Y.H.[Yao-Hui], Yao, Z.W.[Zhe-Wei], Dong, Z.[Zhen], Gholami, A.[Amir], Mahoney, M.W.[Michael W.], Keutzer, K.[Kurt],
ZeroQ: A Novel Zero Shot Quantization Framework,
CVPR20(13166-13175)
IEEE DOI 2008
Quantization (signal), Training, Computational modeling, Sensitivity, Artificial neural networks, Task analysis, Training data BibRef

Qu, Z., Zhou, Z., Cheng, Y., Thiele, L.,
Adaptive Loss-Aware Quantization for Multi-Bit Networks,
CVPR20(7985-7994)
IEEE DOI 2008
Quantization (signal), Optimization, Neural networks, Adaptive systems, Microprocessors, Training, Tensile stress BibRef

Jin, Q., Yang, L., Liao, Z.,
AdaBits: Neural Network Quantization With Adaptive Bit-Widths,
CVPR20(2143-2153)
IEEE DOI 2008
Adaptation models, Quantization (signal), Training, Neural networks, Biological system modeling, Adaptive systems BibRef

Zhu, F.[Feng], Gong, R.H.[Rui-Hao], Yu, F.W.[Feng-Wei], Liu, X.L.[Xiang-Long], Wang, Y.F.[Yan-Fei], Li, Z.L.[Zhe-Long], Yang, X.Q.[Xiu-Qi], Yan, J.J.[Jun-Jie],
Towards Unified INT8 Training for Convolutional Neural Network,
CVPR20(1966-1976)
IEEE DOI 2008
Training, Quantization (signal), Convergence, Acceleration, Computer crashes, Optimization, Task analysis BibRef

Zhuang, B., Liu, L., Tan, M., Shen, C., Reid, I.D.,
Training Quantized Neural Networks With a Full-Precision Auxiliary Module,
CVPR20(1485-1494)
IEEE DOI 2008
Training, Quantization (signal), Object detection, Detectors, Computational modeling, Task analysis, Neural networks BibRef

Yu, H., Wen, T., Cheng, G., Sun, J., Han, Q., Shi, J.,
Low-bit Quantization Needs Good Distribution,
EDLCV20(2909-2918)
IEEE DOI 2008
Quantization (signal), Training, Task analysis, Pipelines, Adaptation models, Computational modeling, Neural networks BibRef

Bhalgat, Y., Lee, J., Nagel, M., Blankevoort, T., Kwak, N.,
LSQ+: Improving low-bit quantization through learnable offsets and better initialization,
EDLCV20(2978-2985)
IEEE DOI 2008
Quantization (signal), Training, Clamps, Neural networks, Artificial intelligence, Computer architecture, Minimization BibRef

Pouransari, H., Tu, Z., Tuzel, O.,
Least squares binary quantization of neural networks,
EDLCV20(2986-2996)
IEEE DOI 2008
Quantization (signal), Computational modeling, Optimization, Tensile stress, Neural networks, Computational efficiency, Approximation algorithms BibRef

Gope, D., Beu, J., Thakker, U., Mattina, M.,
Ternary MobileNets via Per-Layer Hybrid Filter Banks,
EDLCV20(3036-3046)
IEEE DOI 2008
Convolution, Quantization (signal), Computer architecture, Neural networks, Throughput, Hardware, Computational modeling BibRef

Choi, Y., Choi, J., El-Khamy, M., Lee, J.,
Data-Free Network Quantization With Adversarial Knowledge Distillation,
EDLCV20(3047-3057)
IEEE DOI 2008
Generators, Quantization (signal), Training, Computational modeling, Data models, Machine learning, Data privacy BibRef

Li, Y., Gu, S., Mayer, C., Van Gool, L.J., Timofte, R.,
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression,
CVPR20(8015-8024)
IEEE DOI 2008
Matrix decomposition, Convolution, Tensile stress, Fasteners, Matrix converters, Neural networks BibRef

Wang, T., Wang, K., Cai, H., Lin, J., Liu, Z., Wang, H., Lin, Y., Han, S.,
APQ: Joint Search for Network Architecture, Pruning and Quantization Policy,
CVPR20(2075-2084)
IEEE DOI 2008
Quantization (signal), Optimization, Training, Hardware, Pipelines, Biological system modeling, Computer architecture BibRef

Kwon, S.J., Lee, D., Kim, B., Kapoor, P., Park, B., Wei, G.,
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization,
CVPR20(1906-1915)
IEEE DOI 2008
Sparse matrices, Decoding, Quantization (signal), Viterbi algorithm, Bandwidth, Encryption BibRef

Guo, S., Wang, Y., Li, Q., Yan, J.,
DMCP: Differentiable Markov Channel Pruning for Neural Networks,
CVPR20(1536-1544)
IEEE DOI 2008
Markov processes, Computer architecture, Training, Task analysis, Mathematical model, Learning (artificial intelligence), Optimization BibRef

Lin, M., Ji, R., Wang, Y., Zhang, Y., Zhang, B., Tian, Y., Shao, L.,
HRank: Filter Pruning Using High-Rank Feature Map,
CVPR20(1526-1535)
IEEE DOI 2008
Acceleration, Training, Hardware, Adaptive systems, Optimization, Adaptation models, Neural networks BibRef

Guo, J., Ouyang, W., Xu, D.,
Multi-Dimensional Pruning: A Unified Framework for Model Compression,
CVPR20(1505-1514)
IEEE DOI 2008
Tensile stress, Redundancy, Logic gates, Convolution, Solid modeling BibRef

Luo, J., Wu, J.,
Neural Network Pruning With Residual-Connections and Limited-Data,
CVPR20(1455-1464)
IEEE DOI 2008
Training, Computational modeling, Neural networks, Data models, Image coding, Computer vision, Acceleration BibRef

Wu, Y., Liu, C., Chen, B., Chien, S.,
Constraint-Aware Importance Estimation for Global Filter Pruning under Multiple Resource Constraints,
EDLCV20(2935-2943)
IEEE DOI 2008
Estimation, Computational modeling, Training, Optimization, Performance evaluation, Taylor series, Computer vision BibRef

Gain, A.[Alex], Kaushik, P.[Prakhar], Siegelmann, H.[Hava],
Adaptive Neural Connections for Sparsity Learning,
WACV20(3177-3182)
IEEE DOI 2006
Training, Neurons, Bayes methods, Biological neural networks, Computer architecture, Kernel, Computer science BibRef

Ramakrishnan, R.K., Sari, E., Nia, V.P.,
Differentiable Mask for Pruning Convolutional and Recurrent Networks,
CRV20(222-229)
IEEE DOI 2006
BibRef

Blakeney, C., Yan, Y., Zong, Z.,
Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity,
WACV20(903-911)
IEEE DOI 2006
Biological neural networks, Neurons, Correlation, Computational modeling, Training, Tools BibRef

Verma, V.K., Singh, P., Namboodiri, V.P., Rai, P.,
A 'Network Pruning Network' Approach to Deep Model Compression,
WACV20(2998-3007)
IEEE DOI 2006
Computational modeling, Task analysis, Adaptation models, Cost function, Computer architecture, Computer science, Iterative methods BibRef

Xiong, Y., Mehta, R., Singh, V.,
Resource Constrained Neural Network Architecture Search: Will a Submodularity Assumption Help?,
ICCV19(1901-1910)
IEEE DOI 2004
learning (artificial intelligence), neural nets, optimisation, neural network architecture search, empirical feedback, Heuristic algorithms BibRef

Ajanthan, T., Dokania, P., Hartley, R., Torr, P.,
Proximal Mean-Field for Neural Network Quantization,
ICCV19(4870-4879)
IEEE DOI 2004
computational complexity, gradient methods, neural nets, optimisation, stochastic processes, proximal mean-field, Labeling BibRef

Gong, R., Liu, X., Jiang, S., Li, T., Hu, P., Lin, J., Yu, F., Yan, J.,
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks,
ICCV19(4851-4860)
IEEE DOI 2004
backpropagation, convolutional neural nets, data compression, image coding, learning (artificial intelligence) BibRef

Heo, B.[Byeongho], Kim, J.[Jeesoo], Yun, S.[Sangdoo], Park, H.[Hyojin], Kwak, N.[Nojun], Choi, J.Y.[Jin Young],
A Comprehensive Overhaul of Feature Distillation,
ICCV19(1921-1930)
IEEE DOI 2004
feature extraction, image classification, image segmentation, object detection, distillation loss, Artificial intelligence BibRef

Mullapudi, R.T., Chen, S., Zhang, K., Ramanan, D., Fatahalian, K.,
Online Model Distillation for Efficient Video Inference,
ICCV19(3572-3581)
IEEE DOI 2004
convolutional neural nets, image segmentation, inference mechanisms, learning (artificial intelligence), Cameras BibRef

Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., Ma, K.,
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation,
ICCV19(3712-3721)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), knowledge distillation, student neural networks, Computational modeling BibRef

Cho, J.H., Hariharan, B.,
On the Efficacy of Knowledge Distillation,
ICCV19(4793-4801)
IEEE DOI 2004
learning (artificial intelligence), neural nets, Probability distribution, teacher architectures, knowledge distillation performance BibRef

Peng, B., Jin, X., Li, D., Zhou, S., Wu, Y., Liu, J., Zhang, Z., Liu, Y.,
Correlation Congruence for Knowledge Distillation,
ICCV19(5006-5015)
IEEE DOI 2004
correlation methods, face recognition, image classification, learning (artificial intelligence), instance-level information, Knowledge transfer BibRef

Yu, J., Huang, T.,
Universally Slimmable Networks and Improved Training Techniques,
ICCV19(1803-1811)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image classification, image resolution, learning (artificial intelligence), mobile computing, Testing BibRef

Tung, F.[Fred], Mori, G.[Greg],
Similarity-Preserving Knowledge Distillation,
ICCV19(1365-1374)
IEEE DOI 2004
learning (artificial intelligence), neural nets, semantic networks, Task analysis BibRef

Jin, X.[Xiao], Peng, B.Y.[Bao-Yun], Wu, Y.C.[Yi-Chao], Liu, Y.[Yu], Liu, J.H.[Jia-Heng], Liang, D.[Ding], Yan, J.J.[Jun-Jie], Hu, X.L.[Xiao-Lin],
Knowledge Distillation via Route Constrained Optimization,
ICCV19(1345-1354)
IEEE DOI 2004
face recognition, image classification, learning (artificial intelligence), neural nets, optimisation, Neural networks BibRef

Choukroun, Y., Kravchik, E., Yang, F., Kisilev, P.,
Low-bit Quantization of Neural Networks for Efficient Inference,
CEFRL19(3009-3018)
IEEE DOI 2004
inference mechanisms, learning (artificial intelligence), mean square error methods, neural nets, quantisation (signal), MMSE BibRef

Zhao, R., Luk, W.,
Efficient Structured Pruning and Architecture Searching for Group Convolution,
NeurArch19(1961-1970)
IEEE DOI 2004
convolutional neural nets, group theory, network theory (graphs), neural net architecture, search problems, network pruning, efficient inference BibRef

Gao, S., Liu, X., Chien, L., Zhang, W., Alvarez, J.M.,
VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks,
CEFRL19(2980-2988)
IEEE DOI 2004
image filtering, neural nets, statistical analysis, CIFAR10, first-order statistics, second-order statistics, residual networks BibRef

Liu, Z., Mu, H., Zhang, X., Guo, Z., Yang, X., Cheng, K., Sun, J.,
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning,
ICCV19(3295-3304)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. learning (artificial intelligence), neural nets, sampling methods, stochastic processes, pruned networks, Task analysis BibRef

Zhou, Y., Zhang, Y., Wang, Y., Tian, Q.,
Accelerate CNN via Recursive Bayesian Pruning,
ICCV19(3305-3314)
IEEE DOI 2004
approximation theory, Bayes methods, computational complexity, computer vision, convolutional neural nets, Markov processes, Computational modeling BibRef

Molchanov, P.[Pavlo], Mallya, A.[Arun], Tyree, S.[Stephen], Frosio, I.[Iuri], Kautz, J.[Jan],
Importance Estimation for Neural Network Pruning,
CVPR19(11256-11264).
IEEE DOI 2002
BibRef

Webster, R.[Ryan], Rabin, J.[Julien], Simon, L.[Loic], Jurie, F.[Frederic],
Detecting Overfitting of Deep Generative Networks via Latent Recovery,
CVPR19(11265-11274).
IEEE DOI 2002
BibRef

Li, X.[Xin], Zhou, Y.M.[Yi-Ming], Pan, Z.[Zheng], Feng, J.[Jiashi],
Partial Order Pruning: For Best Speed/Accuracy Trade-Off in Neural Architecture Search,
CVPR19(9137-9145).
IEEE DOI 2002
BibRef

Lemaire, C.[Carl], Achkar, A.[Andrew], Jodoin, P.M.[Pierre-Marc],
Structured Pruning of Neural Networks With Budget-Aware Regularization,
CVPR19(9100-9108).
IEEE DOI 2002
BibRef

Ding, X.[Xiaohan], Ding, G.[Guiguang], Guo, Y.[Yuchen], Han, J.G.[Jun-Gong],
Centripetal SGD for Pruning Very Deep Convolutional Networks With Complicated Structure,
CVPR19(4938-4948).
IEEE DOI 2002
BibRef

He, Y.[Yang], Ding, Y.H.[Yu-Hang], Liu, P.[Ping], Zhu, L.C.[Lin-Chao], Zhang, H.W.[Han-Wang], Yang, Y.[Yi],
Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration,
CVPR20(2006-2015)
IEEE DOI 2008
Acceleration, Feature extraction, Training, Computer vision, Convolutional neural networks, Benchmark testing, Computer architecture BibRef

He, Y.[Yang], Liu, P.[Ping], Wang, Z.W.[Zi-Wei], Hu, Z.L.[Zhi-Lan], Yang, Y.[Yi],
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration,
CVPR19(4335-4344).
IEEE DOI 2002
BibRef

Zhao, C.L.[Cheng-Long], Ni, B.B.[Bing-Bing], Zhang, J.[Jian], Zhao, Q.[Qiwei], Zhang, W.J.[Wen-Jun], Tian, Q.[Qi],
Variational Convolutional Neural Network Pruning,
CVPR19(2775-2784).
IEEE DOI 2002
BibRef

Lin, S.H.[Shao-Hui], Ji, R.R.[Rong-Rong], Yan, C.Q.[Chen-Qian], Zhang, B.C.[Bao-Chang], Cao, L.J.[Liu-Juan], Ye, Q.X.[Qi-Xiang], Huang, F.Y.[Fei-Yue], Doermann, D.[David],
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning,
CVPR19(2785-2794).
IEEE DOI 2002
BibRef

Mummadi, C.K.[Chaithanya Kumar], Genewein, T.[Tim], Zhang, D.[Dan], Brox, T.[Thomas], Fischer, V.[Volker],
Group Pruning Using a Bounded-Lp Norm for Group Gating and Regularization,
GCPR19(139-155).
Springer DOI 1911
BibRef

Wang, W.T.[Wei-Ting], Li, H.L.[Han-Lin], Lin, W.S.[Wei-Shiang], Chiang, C.M.[Cheng-Ming], Tsai, Y.M.[Yi-Min],
Architecture-Aware Network Pruning for Vision Quality Applications,
ICIP19(2701-2705)
IEEE DOI 1910
Pruning, Vision Quality, Network Architecture BibRef

Zhang, Y.X.[Yu-Xin], Wang, H.A.[Hu-An], Luo, Y.[Yang], Yu, L.[Lu], Hu, H.J.[Hao-Ji], Shan, H.G.[Hang-Guan], Quek, T.Q.S.[Tony Q. S.],
Three-Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,
ICIP19(4270-4274)
IEEE DOI 1910
3D CNN, video analysis, model compression, structured pruning, regularization BibRef

Hu, Y., Li, J., Long, X., Hu, S., Zhu, J., Wang, X., Gu, Q.,
Cluster Regularized Quantization for Deep Networks Compression,
ICIP19(914-918)
IEEE DOI 1910
deep neural networks, object classification, model compression, quantization BibRef

Hu, Y., Sun, S., Li, J., Zhu, J., Wang, X., Gu, Q.,
Multi-Loss-Aware Channel Pruning of Deep Networks,
ICIP19(889-893)
IEEE DOI 1910
deep neural networks, object classification, model compression, channel pruning BibRef

Manessi, F., Rozza, A., Bianco, S., Napoletano, P., Schettini, R.,
Automated Pruning for Deep Neural Network Compression,
ICPR18(657-664)
IEEE DOI 1812
Training, Neural networks, Quantization (signal), Task analysis, Feature extraction, Pipelines, Image coding BibRef

Yu, R., Li, A., Chen, C., Lai, J., Morariu, V.I., Han, X., Gao, M., Lin, C., Davis, L.S.,
NISP: Pruning Networks Using Neuron Importance Score Propagation,
CVPR18(9194-9203)
IEEE DOI 1812
Neurons, Redundancy, Optimization, Acceleration, Biological neural networks, Task analysis, Feature extraction BibRef

Zhang, T.[Tianyun], Ye, S.[Shaokai], Zhang, K.Q.[Kai-Qi], Tang, J.[Jian], Wen, W.[Wujie], Fardad, M.[Makan], Wang, Y.Z.[Yan-Zhi],
A Systematic DNN Weight Pruning Framework Using Alternating Direction Method of Multipliers,
ECCV18(VIII: 191-207).
Springer DOI 1810
BibRef

Huang, Q., Zhou, K., You, S., Neumann, U.,
Learning to Prune Filters in Convolutional Neural Networks,
WACV18(709-718)
IEEE DOI 1806
computer vision, image segmentation, learning (artificial intelligence), neural nets, CNN filters, Training BibRef

Carreira-Perpinan, M.A., Idelbayev, Y.,
'Learning-Compression' Algorithms for Neural Net Pruning,
CVPR18(8532-8541)
IEEE DOI 1812
Neural networks, Optimization, Training, Neurons, Performance evaluation, Mobile handsets, Quantization (signal) BibRef

Zhou, Z., Zhou, W., Li, H., Hong, R.,
Online Filter Clustering and Pruning for Efficient Convnets,
ICIP18(11-15)
IEEE DOI 1809
Training, Acceleration, Neural networks, Convolution, Tensile stress, Force, Clustering algorithms, Deep neural networks, similar filter, cluster loss BibRef

Zhu, L.G.[Li-Geng], Deng, R.Z.[Rui-Zhi], Maire, M.[Michael], Deng, Z.W.[Zhi-Wei], Mori, G.[Greg], Tan, P.[Ping],
Sparsely Aggregated Convolutional Networks,
ECCV18(XII: 192-208).
Springer DOI 1810
BibRef

Wang, Z., Zhu, C., Xia, Z., Guo, Q., Liu, Y.,
Towards thinner convolutional neural networks through gradually global pruning,
ICIP17(3939-3943)
IEEE DOI 1803
Computational modeling, Machine learning, Measurement, Neurons, Redundancy, Tensile stress, Training, Artificial neural networks, Deep learning BibRef

Luo, J.H., Wu, J., Lin, W.,
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression,
ICCV17(5068-5076)
IEEE DOI 1802
data compression, image coding, image filtering, inference mechanisms, neural nets, optimisation, Training BibRef

Rueda, F.M.[Fernando Moya], Grzeszick, R.[Rene], Fink, G.A.[Gernot A.],
Neuron Pruning for Compressing Deep Networks Using Maxout Architectures,
GCPR17(177-188).
Springer DOI 1711
BibRef

Yang, T.J.[Tien-Ju], Chen, Y.H.[Yu-Hsin], Sze, V.[Vivienne],
Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning,
CVPR17(6071-6079)
IEEE DOI 1711
Computational modeling, Energy consumption, Estimation, Hardware, Measurement, Memory management, Smart phones BibRef

Guo, J.[Jia], Potkonjak, M.[Miodrag],
Pruning ConvNets Online for Efficient Specialist Models,
ECVW17(430-437)
IEEE DOI 1709
Biological neural networks, Computational modeling, Computer vision, Convolution, Memory management, Sensitivity analysis BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Forgetting, Explanation, Interpretation, Understanding of Convolutional Neural Networks.


Last update: Oct 19, 2020 at 15:02:28