Chen, S.[Shi],
Zhao, Q.[Qi],
Shallowing Deep Networks: Layer-Wise Pruning Based on Feature
Representations,
PAMI(41), No. 12, December 2019, pp. 3048-3056.
IEEE DOI
1911
Computational modeling, Computational efficiency,
Feature extraction, Task analysis, Convolutional neural networks
BibRef
Singh, P.[Pravendra],
Kadi, V.S.R.[Vinay Sameer Raja],
Namboodiri, V.P.[Vinay P.],
FALF ConvNets: Fatuous auxiliary loss based filter-pruning for
efficient deep CNNs,
IVC(93), 2020, pp. 103857.
Elsevier DOI
2001
Filter pruning, Model compression,
Convolutional neural network, Image recognition, Deep learning
BibRef
Singh, P.[Pravendra],
Kadi, V.S.R.[Vinay Sameer Raja],
Verma, N.,
Namboodiri, V.P.[Vinay P.],
Stability Based Filter Pruning for Accelerating Deep CNNs,
WACV19(1166-1174)
IEEE DOI
1904
computer networks, graphics processing units,
learning (artificial intelligence), neural nets,
Libraries
BibRef
Mittal, D.[Deepak],
Bhardwaj, S.[Shweta],
Khapra, M.M.[Mitesh M.],
Ravindran, B.[Balaraman],
Studying the plasticity in deep convolutional neural networks using
random pruning,
MVA(30), No. 2, March 2019, pp. 203-216.
Springer DOI
1904
BibRef
Earlier:
Recovering from Random Pruning: On the Plasticity of Deep
Convolutional Neural Networks,
WACV18(848-857)
IEEE DOI
1806
image classification, learning (artificial intelligence),
neural nets, object detection, RCNN model, class specific pruning,
Tuning
BibRef
Bhardwaj, S.[Shweta],
Srinivasan, M.[Mukundhan],
Khapra, M.M.[Mitesh M.],
Efficient Video Classification Using Fewer Frames,
CVPR19(354-363).
IEEE DOI
2002
BibRef
Yang, W.Z.[Wen-Zhu],
Jin, L.L.[Li-Lei],
Wang, S.[Sile],
Cui, Z.C.[Zhen-Chao],
Chen, X.Y.[Xiang-Yang],
Chen, L.P.[Li-Ping],
Thinning of convolutional neural network with mixed pruning,
IET-IPR(13), No. 5, 18 April 2019, pp. 779-784.
DOI Link
1904
BibRef
Luo, J.H.[Jian-Hao],
Zhang, H.[Hao],
Zhou, H.Y.[Hong-Yu],
Xie, C.W.[Chen-Wei],
Wu, J.X.[Jian-Xin],
Lin, W.Y.[Wei-Yao],
ThiNet: Pruning CNN Filters for a Thinner Net,
PAMI(41), No. 10, October 2019, pp. 2525-2538.
IEEE DOI
1909
Convolution, Computational modeling, Task analysis, Acceleration,
Training, Neural networks, Image coding,
model compression
BibRef
Ide, H.[Hidenori],
Kobayashi, T.[Takumi],
Watanabe, K.[Kenji],
Kurita, T.[Takio],
Robust pruning for efficient CNNs,
PRL(135), 2020, pp. 90-98.
Elsevier DOI
2006
CNN, Pruning, Empirical classification loss, Taylor expansion
BibRef
Kang, H.,
Accelerator-Aware Pruning for Convolutional Neural Networks,
CirSysVideo(30), No. 7, July 2020, pp. 2093-2103.
IEEE DOI
2007
Accelerator architectures, Field programmable gate arrays,
Convolutional codes, Acceleration, Convolutional neural networks,
neural network accelerator
BibRef
Tsai, C.Y.[Chun-Ya],
Gao, D.Q.[De-Qin],
Ruan, S.J.[Shanq-Jang],
An effective hybrid pruning architecture of dynamic convolution for
surveillance videos,
JVCIR(70), 2020, pp. 102798.
Elsevier DOI
2007
Optimize CNN, Dynamic convolution, Pruning, Smart surveillance application
BibRef
Wang, Z.,
Hong, W.,
Tan, Y.,
Yuan, J.,
Pruning 3D Filters For Accelerating 3D ConvNets,
MultMed(22), No. 8, August 2020, pp. 2126-2137.
IEEE DOI
2007
Acceleration, Feature extraction, Task analysis,
Maximum Abs. of Filters (MAF)
BibRef
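The "Maximum Abs. of Filters (MAF)" keyword in the entry above names a magnitude-style saliency for 3D filters. As a hedged illustration only (a generic max-abs ranking sketch, not the authors' implementation; `maf_scores` and `prune_mask` are hypothetical names), such a criterion might be:

```python
def maf_scores(filters):
    """Score each filter by the maximum absolute value of its weights
    (a "max-abs of filters" style criterion).

    filters: list of flattened per-filter weight lists.
    """
    return [max(abs(w) for w in f) for f in filters]


def prune_mask(scores, keep_ratio=0.5):
    """Boolean keep-mask retaining the top keep_ratio fraction of filters.

    Assumes mostly distinct scores; ties at the threshold are all kept.
    """
    k = max(1, int(len(scores) * keep_ratio))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [s >= threshold for s in scores]
```

Filters whose mask entry is False would be removed, then the thinner network fine-tuned, following the usual prune-then-retrain loop.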
Luo, J.H.[Jian-Hao],
Wu, J.X.[Jian-Xin],
AutoPruner: An end-to-end trainable filter pruning method for
efficient deep model inference,
PR(107), 2020, pp. 107461.
Elsevier DOI
2008
Neural network pruning, Model compression, CNN acceleration
BibRef
Ding, G.,
Zhang, S.,
Jia, Z.,
Zhong, J.,
Han, J.,
Where to Prune: Using LSTM to Guide Data-Dependent Soft Pruning,
IP(30), 2021, pp. 293-304.
IEEE DOI
2012
Computational modeling, Computer architecture,
Reinforcement learning, Image coding, Training, Convolution, Tensors,
image classification
BibRef
Chu, T.[Tianshu],
Luo, Q.[Qin],
Yang, J.[Jie],
Huang, X.L.[Xiao-Lin],
Mixed-precision quantized neural networks with progressively
decreasing bitwidth,
PR(111), 2021, pp. 107647.
Elsevier DOI
2012
Model compression, Quantized neural networks, Mixed-precision
BibRef
Tian, Q.[Qing],
Arbel, T.[Tal],
Clark, J.J.[James J.],
Task dependent deep LDA pruning of neural networks,
CVIU(203), 2021, pp. 103154.
Elsevier DOI
2101
Deep neural networks pruning,
Deep linear discriminant analysis, Deep feature learning
BibRef
Ye, X.C.[Xu-Cheng],
Dai, P.C.[Peng-Cheng],
Luo, J.Y.[Jun-Yu],
Guo, X.[Xin],
Qi, Y.J.[Ying-Jie],
Yang, J.L.[Jian-Lei],
Chen, Y.R.[Yi-Ran],
Accelerating CNN Training by Pruning Activation Gradients,
ECCV20(XXV:322-338).
Springer DOI
2011
BibRef
Wang, Y.K.[Yi-Kai],
Sun, F.C.[Fu-Chun],
Li, D.[Duo],
Yao, A.B.[An-Bang],
Resolution Switchable Networks for Runtime Efficient Image Recognition,
ECCV20(XV:533-549).
Springer DOI
2011
Code, Network Pruning.
WWW Link. A single network that can vary image resolution, and hence computation time, at runtime.
BibRef
Jung, J.,
Kim, J.,
Kim, Y.,
Kim, C.,
Reinforcement Learning-Based Layer-Wise Quantization For Lightweight
Deep Neural Networks,
ICIP20(3070-3074)
IEEE DOI
2011
Quantization (signal), Neural networks,
Learning (artificial intelligence), Computational modeling, Embedded system
BibRef
Lee, M.K.,
Lee, S.,
Lee, S.H.,
Song, B.C.,
Channel Pruning Via Gradient Of Mutual Information For Light-Weight
Convolutional Neural Networks,
ICIP20(1751-1755)
IEEE DOI
2011
Mutual information, Probability distribution, Random variables,
Convolutional neural networks, Linear programming, Uncertainty
BibRef
Geng, X.,
Lin, J.,
Li, S.,
Cascaded Mixed-Precision Networks,
ICIP20(241-245)
IEEE DOI
2011
Neural networks, Quantization (signal), Training,
Network architecture, Optimization, Image coding, Schedules,
Pruning
BibRef
Meyer, M.,
Wiesner, J.,
Rohlfing, C.,
Optimized Convolutional Neural Networks for Video Intra Prediction,
ICIP20(3334-3338)
IEEE DOI
2011
Training, Computer architecture, Complexity theory, Encoding,
Convolutional codes, Convolution, Kernel, video coding,
pruning
BibRef
Fang, J.[Jun],
Shafiee, A.[Ali],
Abdel-Aziz, H.[Hamzah],
Thorsley, D.[David],
Georgiadis, G.[Georgios],
Hassoun, J.H.[Joseph H.],
Post-training Piecewise Linear Quantization for Deep Neural Networks,
ECCV20(II:69-86).
Springer DOI
2011
BibRef
Mousa-Pasandi, M.,
Hajabdollahi, M.,
Karimi, N.,
Samavi, S.,
Shirani, S.,
Convolutional Neural Network Pruning Using Filter Attenuation,
ICIP20(2905-2909)
IEEE DOI
2011
Attenuation, Filtering algorithms, Mathematical model,
Computational modeling, Training, Convolutional neural networks,
filter attenuation
BibRef
Elkerdawy, S.,
Elhoushi, M.,
Singh, A.,
Zhang, H.,
Ray, N.,
One-Shot Layer-Wise Accuracy Approximation For Layer Pruning,
ICIP20(2940-2944)
IEEE DOI
2011
Computational modeling, Hardware, Training, Training data,
Graphics processing units, Shape, Sensitivity analysis,
inference speed up
BibRef
Liu, B.L.[Ben-Lin],
Rao, Y.M.[Yong-Ming],
Lu, J.W.[Ji-Wen],
Zhou, J.[Jie],
Hsieh, C.J.[Cho-Jui],
Metadistiller:
Network Self-boosting via Meta-learned Top-down Distillation,
ECCV20(XIV:694-709).
Springer DOI
2011
BibRef
Tian, H.D.[Hong-Duan],
Liu, B.[Bo],
Yuan, X.T.[Xiao-Tong],
Liu, Q.S.[Qing-Shan],
Meta-learning with Network Pruning,
ECCV20(XIX:675-700).
Springer DOI
2011
BibRef
Xie, Z.[Zheng],
Wen, Z.Q.[Zhi-Quan],
Liu, J.[Jing],
Liu, Z.Q.[Zhi-Qiang],
Wu, X.X.[Xi-Xian],
Tan, M.K.[Ming-Kui],
Deep Transferring Quantization,
ECCV20(VIII:625-642).
Springer DOI
2011
BibRef
Li, Y.[Yawei],
Gu, S.H.[Shu-Hang],
Zhang, K.[Kai],
Van Gool, L.J.[Luc J.],
Timofte, R.[Radu],
DHP: Differentiable Meta Pruning via Hypernetworks,
ECCV20(VIII:608-624).
Springer DOI
2011
BibRef
Messikommer, N.[Nico],
Gehrig, D.[Daniel],
Loquercio, A.[Antonio],
Scaramuzza, D.[Davide],
Event-based Asynchronous Sparse Convolutional Networks,
ECCV20(VIII:415-431).
Springer DOI
2011
BibRef
Kim, B.[Byungjoo],
Chudomelka, B.[Bryce],
Park, J.[Jinyoung],
Kang, J.[Jaewoo],
Hong, Y.J.[Young-Joon],
Kim, H.W.J.[Hyun-Woo J.],
Robust Neural Networks Inspired by Strong Stability Preserving
Runge-Kutta Methods,
ECCV20(IX:416-432).
Springer DOI
2011
BibRef
Li, B.L.[Bai-Lin],
Wu, B.[Bowen],
Su, J.[Jiang],
Wang, G.R.[Guang-Run],
Eagleeye: Fast Sub-net Evaluation for Efficient Neural Network Pruning,
ECCV20(II:639-654).
Springer DOI
2011
BibRef
Wang, Y.[Ying],
Lu, Y.D.[Ya-Dong],
Blankevoort, T.[Tijmen],
Differentiable Joint Pruning and Quantization for Hardware Efficiency,
ECCV20(XXIX:259-277).
Springer DOI
2010
BibRef
Cai, Y.H.[Yao-Hui],
Yao, Z.W.[Zhe-Wei],
Dong, Z.[Zhen],
Gholami, A.[Amir],
Mahoney, M.W.[Michael W.],
Keutzer, K.[Kurt],
ZeroQ: A Novel Zero Shot Quantization Framework,
CVPR20(13166-13175)
IEEE DOI
2008
Quantization (signal), Training, Computational modeling,
Sensitivity, Artificial neural networks, Task analysis, Training data
BibRef
Qu, Z.,
Zhou, Z.,
Cheng, Y.,
Thiele, L.,
Adaptive Loss-Aware Quantization for Multi-Bit Networks,
CVPR20(7985-7994)
IEEE DOI
2008
Quantization (signal), Optimization, Neural networks,
Adaptive systems, Microprocessors, Training, Tensile stress
BibRef
Jin, Q.,
Yang, L.,
Liao, Z.,
AdaBits: Neural Network Quantization With Adaptive Bit-Widths,
CVPR20(2143-2153)
IEEE DOI
2008
Adaptation models, Quantization (signal), Training,
Neural networks, Biological system modeling,
Adaptive systems
BibRef
Zhu, F.[Feng],
Gong, R.H.[Rui-Hao],
Yu, F.W.[Feng-Wei],
Liu, X.L.[Xiang-Long],
Wang, Y.F.[Yan-Fei],
Li, Z.L.[Zhe-Long],
Yang, X.Q.[Xiu-Qi],
Yan, J.J.[Jun-Jie],
Towards Unified INT8 Training for Convolutional Neural Network,
CVPR20(1966-1976)
IEEE DOI
2008
Training, Quantization (signal), Convergence, Acceleration,
Computer crashes, Optimization, Task analysis
BibRef
Zhuang, B.,
Liu, L.,
Tan, M.,
Shen, C.,
Reid, I.D.,
Training Quantized Neural Networks With a Full-Precision Auxiliary
Module,
CVPR20(1485-1494)
IEEE DOI
2008
Training, Quantization (signal), Object detection, Detectors,
Computational modeling, Task analysis, Neural networks
BibRef
Yu, H.,
Wen, T.,
Cheng, G.,
Sun, J.,
Han, Q.,
Shi, J.,
Low-bit Quantization Needs Good Distribution,
EDLCV20(2909-2918)
IEEE DOI
2008
Quantization (signal), Training, Task analysis, Pipelines,
Adaptation models, Computational modeling, Neural networks
BibRef
Bhalgat, Y.,
Lee, J.,
Nagel, M.,
Blankevoort, T.,
Kwak, N.,
LSQ+: Improving low-bit quantization through learnable offsets and
better initialization,
EDLCV20(2978-2985)
IEEE DOI
2008
Quantization (signal), Training, Clamps, Neural networks,
Artificial intelligence, Computer architecture, Minimization
BibRef
Pouransari, H.,
Tu, Z.,
Tuzel, O.,
Least squares binary quantization of neural networks,
EDLCV20(2986-2996)
IEEE DOI
2008
Quantization (signal), Computational modeling, Optimization,
Tensile stress, Neural networks, Computational efficiency,
Approximation algorithms
BibRef
Gope, D.,
Beu, J.,
Thakker, U.,
Mattina, M.,
Ternary MobileNets via Per-Layer Hybrid Filter Banks,
EDLCV20(3036-3046)
IEEE DOI
2008
Convolution, Quantization (signal), Computer architecture,
Neural networks, Throughput, Hardware, Computational modeling
BibRef
Choi, Y.,
Choi, J.,
El-Khamy, M.,
Lee, J.,
Data-Free Network Quantization With Adversarial Knowledge
Distillation,
EDLCV20(3047-3057)
IEEE DOI
2008
Generators, Quantization (signal), Training,
Computational modeling, Data models, Machine learning, Data privacy
BibRef
Li, Y.,
Gu, S.,
Mayer, C.,
Van Gool, L.J.,
Timofte, R.,
Group Sparsity: The Hinge Between Filter Pruning and Decomposition
for Network Compression,
CVPR20(8015-8024)
IEEE DOI
2008
Matrix decomposition, Convolution, Tensile stress, Fasteners,
Matrix converters, Neural networks
BibRef
Wang, T.,
Wang, K.,
Cai, H.,
Lin, J.,
Liu, Z.,
Wang, H.,
Lin, Y.,
Han, S.,
APQ: Joint Search for Network Architecture, Pruning and Quantization
Policy,
CVPR20(2075-2084)
IEEE DOI
2008
Quantization (signal), Optimization, Training, Hardware, Pipelines,
Biological system modeling, Computer architecture
BibRef
Guo, S.,
Wang, Y.,
Li, Q.,
Yan, J.,
DMCP: Differentiable Markov Channel Pruning for Neural Networks,
CVPR20(1536-1544)
IEEE DOI
2008
Markov processes, Computer architecture, Training, Task analysis,
Mathematical model, Learning (artificial intelligence), Optimization
BibRef
Lin, M.,
Ji, R.,
Wang, Y.,
Zhang, Y.,
Zhang, B.,
Tian, Y.,
Shao, L.,
HRank: Filter Pruning Using High-Rank Feature Map,
CVPR20(1526-1535)
IEEE DOI
2008
Acceleration, Training, Hardware, Adaptive systems, Optimization,
Adaptation models, Neural networks
BibRef
Luo, J.,
Wu, J.,
Neural Network Pruning With Residual-Connections and Limited-Data,
CVPR20(1455-1464)
IEEE DOI
2008
Training, Computational modeling, Neural networks, Data models,
Image coding, Computer vision, Acceleration
BibRef
Wu, Y.,
Liu, C.,
Chen, B.,
Chien, S.,
Constraint-Aware Importance Estimation for Global Filter Pruning
under Multiple Resource Constraints,
EDLCV20(2935-2943)
IEEE DOI
2008
Estimation, Computational modeling, Training, Optimization,
Performance evaluation, Taylor series, Computer vision
BibRef
Gain, A.[Alex],
Kaushik, P.[Prakhar],
Siegelmann, H.[Hava],
Adaptive Neural Connections for Sparsity Learning,
WACV20(3177-3182)
IEEE DOI
2006
Training, Neurons, Bayes methods, Biological neural networks,
Computer architecture, Kernel, Computer science
BibRef
Ramakrishnan, R.K.,
Sari, E.,
Nia, V.P.,
Differentiable Mask for Pruning Convolutional and Recurrent Networks,
CRV20(222-229)
IEEE DOI
2006
BibRef
Blakeney, C.,
Yan, Y.,
Zong, Z.,
Is Pruning Compression?: Investigating Pruning Via Network Layer
Similarity,
WACV20(903-911)
IEEE DOI
2006
Biological neural networks, Neurons, Correlation,
Computational modeling, Training, Tools
BibRef
Verma, V.K.,
Singh, P.,
Namboodiri, V.P.,
Rai, P.,
A 'Network Pruning Network' Approach to Deep Model Compression,
WACV20(2998-3007)
IEEE DOI
2006
Computational modeling, Task analysis, Adaptation models,
Cost function, Computer architecture, Computer science, Iterative methods
BibRef
Ajanthan, T.,
Dokania, P.,
Hartley, R.,
Torr, P.H.S.,
Proximal Mean-Field for Neural Network Quantization,
ICCV19(4870-4879)
IEEE DOI
2004
computational complexity, gradient methods, neural nets,
optimisation, stochastic processes, proximal mean-field, Labeling
BibRef
Gao, S.,
Liu, X.,
Chien, L.,
Zhang, W.,
Alvarez, J.M.,
VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep
Residual Networks,
CEFRL19(2980-2988)
IEEE DOI
2004
image filtering, neural nets, statistical analysis, CIFAR10,
first-order statistics, second-order statistics,
residual networks
BibRef
Liu, Z.,
Mu, H.,
Zhang, X.,
Guo, Z.,
Yang, X.,
Cheng, K.,
Sun, J.,
MetaPruning: Meta Learning for Automatic Neural Network Channel
Pruning,
ICCV19(3295-3304)
IEEE DOI
2004
Code, Neural Networks.
WWW Link. learning (artificial intelligence), neural nets,
sampling methods, stochastic processes, pruned networks,
Task analysis
BibRef
Zhou, Y.,
Zhang, Y.,
Wang, Y.,
Tian, Q.,
Accelerate CNN via Recursive Bayesian Pruning,
ICCV19(3305-3314)
IEEE DOI
2004
approximation theory, Bayes methods, computational complexity,
computer vision, convolutional neural nets, Markov processes,
Computational modeling
BibRef
Molchanov, P.[Pavlo],
Mallya, A.[Arun],
Tyree, S.[Stephen],
Frosio, I.[Iuri],
Kautz, J.[Jan],
Importance Estimation for Neural Network Pruning,
CVPR19(11256-11264).
IEEE DOI
2002
BibRef
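Importance estimation of this kind is often built on a first-order Taylor expansion of the loss (see also the Taylor-expansion keyword in the Ide et al. entry above). As a hedged, generic sketch of that idea (a first-order criterion per filter, not the authors' exact formulation; `taylor_importance` is a hypothetical name):

```python
def taylor_importance(weights, grads):
    """First-order Taylor importance per filter: |sum_i w_i * g_i|,
    i.e. the magnitude of the loss change predicted by zeroing the filter.

    weights, grads: parallel lists of flattened per-filter values.
    """
    return [abs(sum(w * g for w, g in zip(fw, fg)))
            for fw, fg in zip(weights, grads)]
```

Filters with the smallest predicted loss change are the first candidates for removal.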
Webster, R.[Ryan],
Rabin, J.[Julien],
Simon, L.[Loic],
Jurie, F.[Frederic],
Detecting Overfitting of Deep Generative Networks via Latent Recovery,
CVPR19(11265-11274).
IEEE DOI
2002
BibRef
Lemaire, C.[Carl],
Achkar, A.[Andrew],
Jodoin, P.M.[Pierre-Marc],
Structured Pruning of Neural Networks With Budget-Aware Regularization,
CVPR19(9100-9108).
IEEE DOI
2002
BibRef
Ding, X.H.[Xiao-Han],
Ding, G.G.[Gui-Guang],
Guo, Y.C.[Yu-Chen],
Han, J.G.[Jun-Gong],
Centripetal SGD for Pruning Very Deep Convolutional Networks With
Complicated Structure,
CVPR19(4938-4948).
IEEE DOI
2002
BibRef
He, Y.[Yang],
Ding, Y.H.[Yu-Hang],
Liu, P.[Ping],
Zhu, L.C.[Lin-Chao],
Zhang, H.W.[Han-Wang],
Yang, Y.[Yi],
Learning Filter Pruning Criteria for Deep Convolutional Neural
Networks Acceleration,
CVPR20(2006-2015)
IEEE DOI
2008
Acceleration, Feature extraction, Training, Computer vision,
Convolutional neural networks, Benchmark testing, Computer architecture
BibRef
He, Y.[Yang],
Liu, P.[Ping],
Wang, Z.W.[Zi-Wei],
Hu, Z.L.[Zhi-Lan],
Yang, Y.[Yi],
Filter Pruning via Geometric Median for Deep Convolutional Neural
Networks Acceleration,
CVPR19(4335-4344).
IEEE DOI
2002
BibRef
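The geometric-median criterion in the entry above prunes filters that are most replaceable by the others. A common approximation ranks each filter by its total distance to the rest of the set; the sketch below is a hedged illustration of that approximation, not the authors' code (`gm_scores` is a hypothetical name):

```python
import math


def gm_scores(filters):
    """Total Euclidean distance from each filter to all others.

    Filters with the smallest total distance lie nearest the set's
    geometric median and are treated as the most redundant.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [sum(dist(f, g) for g in filters) for f in filters]
```

For example, a filter that is the average of two others scores lowest and would be pruned first.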
Zhao, C.L.[Cheng-Long],
Ni, B.B.[Bing-Bing],
Zhang, J.[Jian],
Zhao, Q.[Qiwei],
Zhang, W.J.[Wen-Jun],
Tian, Q.[Qi],
Variational Convolutional Neural Network Pruning,
CVPR19(2775-2784).
IEEE DOI
2002
BibRef
Lin, S.H.[Shao-Hui],
Ji, R.R.[Rong-Rong],
Yan, C.Q.[Chen-Qian],
Zhang, B.C.[Bao-Chang],
Cao, L.J.[Liu-Juan],
Ye, Q.X.[Qi-Xiang],
Huang, F.Y.[Fei-Yue],
Doermann, D.[David],
Towards Optimal Structured CNN Pruning via Generative Adversarial
Learning,
CVPR19(2785-2794).
IEEE DOI
2002
BibRef
Mummadi, C.K.[Chaithanya Kumar],
Genewein, T.[Tim],
Zhang, D.[Dan],
Brox, T.[Thomas],
Fischer, V.[Volker],
Group Pruning Using a Bounded-Lp Norm for Group Gating and
Regularization,
GCPR19(139-155).
Springer DOI
1911
BibRef
Wang, W.T.[Wei-Ting],
Li, H.L.[Han-Lin],
Lin, W.S.[Wei-Shiang],
Chiang, C.M.[Cheng-Ming],
Tsai, Y.M.[Yi-Min],
Architecture-Aware Network Pruning for Vision Quality Applications,
ICIP19(2701-2705)
IEEE DOI
1910
Pruning, Vision Quality, Network Architecture
BibRef
Zhang, Y.X.[Yu-Xin],
Wang, H.A.[Hu-An],
Luo, Y.[Yang],
Yu, L.[Lu],
Hu, H.J.[Hao-Ji],
Shan, H.G.[Hang-Guan],
Quek, T.Q.S.[Tony Q. S.],
Three-Dimensional Convolutional Neural Network Pruning with
Regularization-Based Method,
ICIP19(4270-4274)
IEEE DOI
1910
3D CNN, video analysis, model compression, structured pruning, regularization
BibRef
Hu, Y.,
Sun, S.,
Li, J.,
Zhu, J.,
Wang, X.,
Gu, Q.,
Multi-Loss-Aware Channel Pruning of Deep Networks,
ICIP19(889-893)
IEEE DOI
1910
deep neural networks, object classification, model compression, channel pruning
BibRef
Manessi, F.,
Rozza, A.,
Bianco, S.,
Napoletano, P.,
Schettini, R.,
Automated Pruning for Deep Neural Network Compression,
ICPR18(657-664)
IEEE DOI
1812
Training, Neural networks, Quantization (signal), Task analysis,
Feature extraction, Pipelines, Image coding
BibRef
Yu, R.,
Li, A.,
Chen, C.,
Lai, J.,
Morariu, V.I.,
Han, X.,
Gao, M.,
Lin, C.,
Davis, L.S.,
NISP: Pruning Networks Using Neuron Importance Score Propagation,
CVPR18(9194-9203)
IEEE DOI
1812
Neurons, Redundancy, Optimization, Acceleration,
Biological neural networks, Task analysis, Feature extraction
BibRef
Zhang, T.[Tianyun],
Ye, S.[Shaokai],
Zhang, K.Q.[Kai-Qi],
Tang, J.[Jian],
Wen, W.[Wujie],
Fardad, M.[Makan],
Wang, Y.Z.[Yan-Zhi],
A Systematic DNN Weight Pruning Framework Using Alternating Direction
Method of Multipliers,
ECCV18(VIII:191-207).
Springer DOI
1810
BibRef
Huang, Q.,
Zhou, K.,
You, S.,
Neumann, U.,
Learning to Prune Filters in Convolutional Neural Networks,
WACV18(709-718)
IEEE DOI
1806
computer vision, image segmentation,
learning (artificial intelligence), neural nets, CNN filters,
Training
BibRef
Carreira-Perpinan, M.A.,
Idelbayev, Y.,
'Learning-Compression' Algorithms for Neural Net Pruning,
CVPR18(8532-8541)
IEEE DOI
1812
Neural networks, Optimization, Training, Neurons,
Performance evaluation, Mobile handsets, Quantization (signal)
BibRef
Zhou, Z.,
Zhou, W.,
Li, H.,
Hong, R.,
Online Filter Clustering and Pruning for Efficient Convnets,
ICIP18(11-15)
IEEE DOI
1809
Training, Acceleration, Neural networks, Convolution, Tensile stress,
Force, Clustering algorithms, Deep neural networks, similar filter,
cluster loss
BibRef
Zhu, L.G.[Li-Geng],
Deng, R.Z.[Rui-Zhi],
Maire, M.[Michael],
Deng, Z.W.[Zhi-Wei],
Mori, G.[Greg],
Tan, P.[Ping],
Sparsely Aggregated Convolutional Networks,
ECCV18(XII:192-208).
Springer DOI
1810
BibRef
Wang, Z.,
Zhu, C.,
Xia, Z.,
Guo, Q.,
Liu, Y.,
Towards thinner convolutional neural networks through gradually
global pruning,
ICIP17(3939-3943)
IEEE DOI
1803
Computational modeling, Machine learning, Measurement, Neurons,
Redundancy, Tensile stress, Training, Artificial neural networks,
Deep learning
BibRef
Luo, J.H.,
Wu, J.,
Lin, W.,
ThiNet:
A Filter Level Pruning Method for Deep Neural Network Compression,
ICCV17(5068-5076)
IEEE DOI
1802
data compression, image coding, image filtering,
inference mechanisms, neural nets, optimisation,
Training
BibRef
Rueda, F.M.[Fernando Moya],
Grzeszick, R.[Rene],
Fink, G.A.[Gernot A.],
Neuron Pruning for Compressing Deep Networks Using Maxout Architectures,
GCPR17(177-188).
Springer DOI
1711
BibRef
Yang, T.J.[Tien-Ju],
Chen, Y.H.[Yu-Hsin],
Sze, V.[Vivienne],
Designing Energy-Efficient Convolutional Neural Networks Using
Energy-Aware Pruning,
CVPR17(6071-6079)
IEEE DOI
1711
Computational modeling, Energy consumption, Estimation, Hardware,
Measurement, Memory management, Smart, phones
BibRef
Guo, J.[Jia],
Potkonjak, M.[Miodrag],
Pruning ConvNets Online for Efficient Specialist Models,
ECVW17(430-437)
IEEE DOI
1709
Biological neural networks, Computational modeling,
Computer vision, Convolution, Memory management, Sensitivity, analysis
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Compression.