Cao, Y.Q.[Yong-Qiang],
Chen, Y.[Yang],
Khosla, D.[Deepak],
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object
Recognition,
IJCV(113), No. 1, May 2015, pp. 54-66.
Springer DOI
1506
BibRef
Zhang, X.Y.[Xiang-Yu],
Zou, J.H.[Jian-Hua],
He, K.M.[Kai-Ming],
Sun, J.[Jian],
Accelerating Very Deep Convolutional Networks for Classification and
Detection,
PAMI(38), No. 10, October 2016, pp. 1943-1955.
IEEE DOI
1609
Acceleration
BibRef
He, Y.,
Zhang, X.Y.[Xiang-Yu],
Sun, J.[Jian],
Channel Pruning for Accelerating Very Deep Neural Networks,
ICCV17(1398-1406)
IEEE DOI
1802
iterative methods, learning (artificial intelligence),
least squares approximations, neural nets, regression analysis,
Training
BibRef
Zhang, X.Y.[Xiang-Yu],
Zou, J.H.[Jian-Hua],
Ming, X.[Xiang],
He, K.M.[Kai-Ming],
Sun, J.[Jian],
Efficient and accurate approximations of nonlinear convolutional
networks,
CVPR15(1984-1992)
IEEE DOI
1510
BibRef
He, K.M.[Kai-Ming],
Zhang, X.Y.[Xiang-Yu],
Ren, S.Q.[Shao-Qing],
Sun, J.[Jian],
Delving Deep into Rectifiers:
Surpassing Human-Level Performance on ImageNet Classification,
ICCV15(1026-1034)
IEEE DOI
1602
Adaptation models
BibRef
Sze, V.,
Chen, Y.H.,
Yang, T.J.,
Emer, J.S.,
Efficient Processing of Deep Neural Networks: A Tutorial and Survey,
PIEEE(105), No. 12, December 2017, pp. 2295-2329.
IEEE DOI
1712
Survey, Deep Neural Networks. Artificial intelligence, Benchmark testing,
Biological neural networks, Computer architecture,
spatial architectures
BibRef
Cavigelli, L.,
Benini, L.,
Origami: A 803-GOp/s/W Convolutional Network Accelerator,
CirSysVideo(27), No. 11, November 2017, pp. 2461-2475.
IEEE DOI
1712
Computer architecture, Computer vision, Feature extraction,
Machine learning, Mobile communication, Neural networks,
very large scale integration
BibRef
Ghesu, F.C.[Florin C.],
Krubasik, E.,
Georgescu, B.,
Singh, V.,
Zheng, Y.,
Hornegger, J.[Joachim],
Comaniciu, D.,
Marginal Space Deep Learning: Efficient Architecture for Volumetric
Image Parsing,
MedImg(35), No. 5, May 2016, pp. 1217-1228.
IEEE DOI
1605
Context
BibRef
Revathi, A.R.,
Kumar, D.[Dhananjay],
An efficient system for anomaly detection using deep learning
classifier,
SIViP(11), No. 2, February 2017, pp. 291-299.
WWW Link.
1702
BibRef
Sun, B.,
Feng, H.,
Efficient Compressed Sensing for Wireless Neural Recording:
A Deep Learning Approach,
SPLetters(24), No. 6, June 2017, pp. 863-867.
IEEE DOI
1705
Compressed sensing, Cost function, Dictionaries, Sensors, Training,
Wireless communication, Wireless sensor networks,
Compressed sensing (CS), deep neural network, wireless neural recording
BibRef
Xu, T.B.[Ting-Bing],
Yang, P.[Peipei],
Zhang, X.Y.[Xu-Yao],
Liu, C.L.[Cheng-Lin],
LightweightNet: Toward fast and lightweight convolutional neural
networks via architecture distillation,
PR(88), 2019, pp. 272-284.
Elsevier DOI
1901
Deep network acceleration and compression,
Architecture distillation, Lightweight network
BibRef
Feng, J.[Jie],
Wang, L.[Lin],
Yu, H.[Haipeng],
Jiao, L.C.[Li-Cheng],
Zhang, X.R.[Xiang-Rong],
Divide-and-Conquer Dual-Architecture Convolutional Neural Network for
Classification of Hyperspectral Images,
RS(11), No. 5, 2019, pp. xx-yy.
DOI Link
1903
BibRef
Kim, D.H.[Dae Ha],
Lee, M.K.[Min Kyu],
Lee, S.H.[Seung Hyun],
Song, B.C.[Byung Cheol],
Macro unit-based convolutional neural network for very light-weight
deep learning,
IVC(87), 2019, pp. 68-75.
Elsevier DOI
1906
BibRef
Earlier: A1, A3, A4, Only:
MUNet: Macro Unit-Based Convolutional Neural Network for Mobile
Devices,
EfficientDeep18(1749-17498)
IEEE DOI
1812
Deep neural networks, Light-weight deep learning, Macro-unit.
Convolution, Computational complexity,
Mobile handsets, Neural networks, Performance evaluation
BibRef
Zhang, C.Y.[Chun-Yang],
Zhao, Q.[Qi],
Chen, C.L.P.[C.L. Philip],
Liu, W.X.[Wen-Xi],
Deep compression of probabilistic graphical networks,
PR(96), 2019, pp. 106979.
Elsevier DOI
1909
Deep compression, Probabilistic graphical models,
Probabilistic graphical networks, Deep learning
BibRef
Brillet, L.F.,
Mancini, S.,
Cleyet-Merle, S.,
Nicolas, M.,
Tunable CNN Compression Through Dimensionality Reduction,
ICIP19(3851-3855)
IEEE DOI
1910
CNN, PCA, compression
BibRef
Dong, Y.P.[Yin-Peng],
Ni, R.K.[Ren-Kun],
Li, J.G.[Jian-Guo],
Chen, Y.R.[Yu-Rong],
Su, H.[Hang],
Zhu, J.[Jun],
Stochastic Quantization for Learning Accurate Low-Bit Deep Neural
Networks,
IJCV(127), No. 11-12, December 2019, pp. 1629-1642.
Springer DOI
1911
BibRef
Zhou, A.[Aojun],
Yao, A.B.[An-Bang],
Wang, K.[Kuan],
Chen, Y.R.[Yu-Rong],
Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural
Networks,
CVPR18(9426-9435)
IEEE DOI
1812
Computer vision, Pattern recognition
BibRef
Lin, S.H.[Shao-Hui],
Ji, R.R.[Rong-Rong],
Chen, C.[Chao],
Tao, D.C.[Da-Cheng],
Luo, J.B.[Jie-Bo],
Holistic CNN Compression via Low-Rank Decomposition with Knowledge
Transfer,
PAMI(41), No. 12, December 2019, pp. 2889-2905.
IEEE DOI
1911
Knowledge transfer, Image coding, Task analysis,
Information exchange, Computational modeling, CNN acceleration
BibRef
Zhang, L.[Lin],
Bu, X.K.[Xiao-Kang],
Li, B.[Bing],
XNORCONV: CNNs accelerator implemented on FPGA using a hybrid CNNs
structure and an inter-layer pipeline method,
IET-IPR(14), No. 1, January 2020, pp. 105-113.
DOI Link
1912
BibRef
Chen, Z.,
Fan, K.,
Wang, S.,
Duan, L.,
Lin, W.,
Kot, A.C.,
Toward Intelligent Sensing: Intermediate Deep Feature Compression,
IP(29), 2020, pp. 2230-2243.
IEEE DOI
2001
Visualization, Image coding, Task analysis, Feature extraction,
Deep learning, Video coding, Standardization,
feature compression
BibRef
Lobel, H.[Hans],
Vidal, R.[René],
Soto, A.[Alvaro],
CompactNets: Compact Hierarchical Compositional Networks for Visual
Recognition,
CVIU(191), 2020, pp. 102841.
Elsevier DOI
2002
Deep learning, Regularization, Group sparsity, Image categorization
BibRef
Ding, L.[Lin],
Tian, Y.H.[Yong-Hong],
Fan, H.F.[Hong-Fei],
Chen, C.H.[Chang-Huai],
Huang, T.J.[Tie-Jun],
Joint Coding of Local and Global Deep Features in Videos for Visual
Search,
IP(29), 2020, pp. 3734-3749.
IEEE DOI
2002
Local deep feature, joint coding, visual search, inter-feature correlation
BibRef
Browne, D.[David],
Giering, M.[Michael],
Prestwich, S.[Steven],
PulseNetOne: Fast Unsupervised Pruning of Convolutional Neural
Networks for Remote Sensing,
RS(12), No. 7, 2020, pp. xx-yy.
DOI Link
2004
BibRef
Liu, Z.C.[Ze-Chun],
Luo, W.H.[Wen-Han],
Wu, B.Y.[Bao-Yuan],
Yang, X.[Xin],
Liu, W.[Wei],
Cheng, K.T.[Kwang-Ting],
Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance,
IJCV(128), No. 1, January 2020, pp. 202-219.
Springer DOI
2002
BibRef
Earlier: A1, A3, A2, A4, A5, A6:
Bi-Real Net: Enhancing the Performance of 1-Bit CNNs with Improved
Representational Capability and Advanced Training Algorithm,
ECCV18(XV: 747-763).
Springer DOI
1810
BibRef
Cavigelli, L.,
Benini, L.,
CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional
Network Inference on Video Streams,
CirSysVideo(30), No. 5, May 2020, pp. 1451-1465.
IEEE DOI
2005
Learning with video.
Feature extraction, Object detection, Throughput, Convolution,
Inference algorithms, Semantics, Approximation algorithms,
object detection
BibRef
Sun, F.Z.[Feng-Zhen],
Li, S.J.[Shao-Jie],
Wang, S.H.[Shao-Hua],
Liu, Q.J.[Qing-Jun],
Zhou, L.X.[Li-Xin],
CostNet: A Concise Overpass Spatiotemporal Network for Predictive
Learning,
IJGI(9), No. 4, 2020, pp. xx-yy.
DOI Link
2005
ResNet adapted to deal with temporal data.
BibRef
Kalayeh, M.M.[Mahdi M.],
Shah, M.[Mubarak],
Training Faster by Separating Modes of Variation in Batch-Normalized
Models,
PAMI(42), No. 6, June 2020, pp. 1483-1500.
IEEE DOI
2005
Training, Kernel, Mathematical model, Transforms,
Probability density function, Statistics, Acceleration, fisher vector
BibRef
Ma, W.C.[Wen-Chi],
Wu, Y.W.[Yuan-Wei],
Cen, F.[Feng],
Wang, G.H.[Guang-Hui],
MDFN: Multi-scale deep feature learning network for object detection,
PR(100), 2020, pp. 107149.
Elsevier DOI
2005
Deep feature learning, Multi-scale,
Semantic and contextual information, Small and occluded objects
BibRef
Ma, W.C.[Wen-Chi],
Wu, Y.W.[Yuan-Wei],
Wang, Z.,
Wang, G.H.[Guang-Hui],
MDCN: Multi-Scale, Deep Inception Convolutional Neural Networks for
Efficient Object Detection,
ICPR18(2510-2515)
IEEE DOI
1812
Feature extraction, Object detection, Computational modeling,
Task analysis, Convolutional neural networks, Hardware, Real-time systems
BibRef
Hu, J.[Jie],
Shen, L.[Li],
Albanie, S.[Samuel],
Sun, G.[Gang],
Wu, E.[Enhua],
Squeeze-and-Excitation Networks,
PAMI(42), No. 8, August 2020, pp. 2011-2023.
IEEE DOI
2007
Computer architecture, Computational modeling, Convolution,
Task analysis, Correlation, Optimization,
convolutional neural networks
BibRef
Lelekas, I.,
Tomen, N.,
Pintea, S.L.,
van Gemert, J.C.,
Top-Down Networks: A coarse-to-fine reimagination of CNNs,
DeepVision20(3244-3253)
IEEE DOI
2008
Feature extraction, Spatial resolution, Computer architecture,
Merging, Task analysis, Visualization, Robustness
BibRef
Ma, L.H.[Long-Hua],
Fan, H.Y.[Hang-Yu],
Lu, Z.M.[Zhe-Ming],
Tian, D.[Dong],
Acceleration of multi-task cascaded convolutional networks,
IET-IPR(14), No. 11, September 2020, pp. 2435-2441.
DOI Link
2009
BibRef
Jiang, Y.G.[Yu-Gang],
Cheng, C.M.[Chang-Mao],
Lin, H.Y.[Hang-Yu],
Fu, Y.W.[Yan-Wei],
Learning Layer-Skippable Inference Network,
IP(29), 2020, pp. 8747-8759.
IEEE DOI
2009
Task analysis, Visualization, Computational modeling,
Biological information theory, Computational efficiency, Neurons,
neural networks
BibRef
Fang, Z.Y.[Zhen-Yu],
Ren, J.C.[Jin-Chang],
Marshall, S.[Stephen],
Zhao, H.M.[Hui-Min],
Wang, S.[Song],
Li, X.L.[Xue-Long],
Topological optimization of the DenseNet with pretrained-weights
inheritance and genetic channel selection,
PR(109), 2021, pp. 107608.
Elsevier DOI
2009
Deep convolutional neural networks, Genetic algorithms,
Parameter reduction, Structure optimization, DenseNet
BibRef
Li, G.Q.[Guo-Qing],
Zhang, M.[Meng],
Li, J.[Jiaojie],
Lv, F.[Feng],
Tong, G.D.[Guo-Dong],
Efficient densely connected convolutional neural networks,
PR(109), 2021, pp. 107610.
Elsevier DOI
2009
Convolutional neural networks, Classification,
Parameter efficiency, Densely connected
BibRef
Yang, Y.Q.[Yong-Quan],
Lv, H.J.[Hai-Jun],
Chen, N.[Ning],
Wu, Y.[Yang],
Zheng, J.[Jiayi],
Zheng, Z.X.[Zhong-Xi],
Local minima found in the subparameter space can be effective for
ensembles of deep convolutional neural networks,
PR(109), 2021, pp. 107582.
Elsevier DOI
2009
Ensemble learning, Ensemble selection, Ensemble fusion,
Deep convolutional neural network
BibRef
Gürhanli, A.[Ahmet],
Accelerating convolutional neural network training using ProMoD
backpropagation algorithm,
IET-IPR(14), No. 13, November 2020, pp. 2957-2964.
DOI Link
2012
BibRef
Zhang, H.R.[Hao-Ran],
Hu, Z.Z.[Zhen-Zhen],
Qin, W.[Wei],
Xu, M.L.[Ming-Liang],
Wang, M.[Meng],
Adversarial co-distillation learning for image recognition,
PR(111), 2021, pp. 107659.
Elsevier DOI
2012
Knowledge distillation, Data augmentation,
Generative adversarial nets, Divergent examples, Image classification
BibRef
Zhou, Z.G.[Zheng-Guang],
Zhou, W.G.[Wen-Gang],
Lv, X.T.[Xu-Tao],
Huang, X.[Xuan],
Wang, X.Y.[Xiao-Yu],
Li, H.Q.[Hou-Qiang],
Progressive Learning of Low-Precision Networks for Image
Classification,
MultMed(23), 2021, pp. 871-882.
IEEE DOI
2103
Quantization (signal), Training, Neural networks, Convolution,
Acceleration, Task analysis, Complexity theory,
image classification
BibRef
Xi, J.B.[Jiang-Bo],
Ersoy, O.K.[Okan K.],
Fang, J.[Jianwu],
Cong, M.[Ming],
Wu, T.J.[Tian-Jun],
Zhao, C.Y.[Chao-Ying],
Li, Z.H.[Zhen-Hong],
Wide Sliding Window and Subsampling Network for Hyperspectral Image
Classification,
RS(13), No. 7, 2021, pp. xx-yy.
DOI Link
2104
BibRef
Berthelier, A.[Anthony],
Yan, Y.Z.[Yong-Zhe],
Chateau, T.[Thierry],
Blanc, C.[Christophe],
Duffner, S.[Stefan],
Garcia, C.[Christophe],
Learning Sparse Filters in Deep Convolutional Neural Networks with a
L1/l2 Pseudo-norm,
CADL20(662-676).
Springer DOI
2103
BibRef
Kathariya, B.[Birendra],
Li, L.[Li],
Li, Z.[Zhu],
Duan, L.Y.[Ling-Yu],
Liu, S.[Shan],
Network Update Compression for Federated Learning,
VCIP20(38-41)
IEEE DOI
2102
Servers, Data models, Collaborative work, Uplink, Urban areas,
Training, Matrix decomposition, federated learning,
Karhunen-Loève Transform (KLT)
BibRef
Zhang, L.,
Two recent advances on normalization methods for deep neural network
optimization,
VCIP20(1-1)
IEEE DOI
2102
Training, Optimization, Neural networks, Standardization,
Pattern recognition, Pattern analysis, Imaging
BibRef
Du, K.Y.[Kun-Yuan],
Zhang, Y.[Ya],
Guan, H.B.[Hai-Bing],
Tian, Q.[Qi],
Wang, Y.F.[Yan-Feng],
Cheng, S.G.[Sheng-Gan],
Lin, J.[James],
FTL: A Universal Framework for Training Low-bit DNNs via Feature
Transfer,
ECCV20(XXV:700-716).
Springer DOI
2011
BibRef
Xu, K.R.[Kun-Ran],
Rui, L.[Lai],
Li, Y.S.[Yi-Shi],
Gu, L.[Lin],
Feature Normalized Knowledge Distillation for Image Classification,
ECCV20(XXV:664-680).
Springer DOI
2011
BibRef
Jiang, Z.X.[Zi-Xuan],
Zhu, K.[Keren],
Liu, M.J.[Ming-Jie],
Gu, J.Q.[Jia-Qi],
Pan, D.Z.[David Z.],
An Efficient Training Framework for Reversible Neural Architectures,
ECCV20(XXVII:275-289).
Springer DOI
2011
Trade memory requirements for computation.
BibRef
Herrmann, C.[Charles],
Bowen, R.S.[Richard Strong],
Zabih, R.[Ramin],
Channel Selection Using Gumbel Softmax,
ECCV20(XXVII:241-257).
Springer DOI
2011
Executing some layers, pruning, etc.
BibRef
Isikdogan, L.F.[Leo F.],
Nayak, B.V.[Bhavin V.],
Wu, C.T.[Chyuan-Tyng],
Moreira, J.P.[Joao Peralta],
Rao, S.[Sushma],
Michael, G.[Gilad],
SemifreddoNets: Partially Frozen Neural Networks for Efficient Computer
Vision Systems,
ECCV20(XXVII:193-208).
Springer DOI
2011
Partially frozen weights; only some weights are updated during learning.
BibRef
Xie, X.,
Zhou, Y.,
Kung, S.Y.,
Exploring Highly Efficient Compact Neural Networks For Image
Classification,
ICIP20(2930-2934)
IEEE DOI
2011
Convolution, Standards, Neural networks, Computational efficiency,
Task analysis, Computer architecture, Fuses, Lightweight network,
inter-group information exchange
BibRef
Ahn, S.,
Chang, J.W.,
Kang, S.J.,
An Efficient Accelerator Design Methodology For Deformable
Convolutional Networks,
ICIP20(3075-3079)
IEEE DOI
2011
IP networks, Erbium, Zirconium, Indexes, Hardware accelerator,
deformable convolution, system architecture, FPGA, deep learning
BibRef
Kehrenberg, T.[Thomas],
Bartlett, M.[Myles],
Thomas, O.[Oliver],
Quadrianto, N.[Novi],
Null-sampling for Interpretable and Fair Representations,
ECCV20(XXVI:565-580).
Springer DOI
2011
Code, CNN.
WWW Link.
BibRef
Malkin, N.[Nikolay],
Ortiz, A.[Anthony],
Jojic, N.[Nebojsa],
Mining Self-similarity: Label Super-resolution with Epitomic
Representations,
ECCV20(XXVI:531-547).
Springer DOI
2011
Learn from very large data-sets.
BibRef
Liu, Z.G.[Zhi-Gang],
Mattina, M.[Matthew],
Efficient Residue Number System Based Winograd Convolution,
ECCV20(XIX:53-68).
Springer DOI
2011
BibRef
Park, E.[Eunhyeok],
Yoo, S.J.[Sung-Joo],
Profit: A Novel Training Method for sub-4-bit Mobilenet Models,
ECCV20(VI:430-446).
Springer DOI
2011
BibRef
Han, B.[Bing],
Roy, K.[Kaushik],
Deep Spiking Neural Network: Energy Efficiency Through Time Based
Coding,
ECCV20(X:388-404).
Springer DOI
2011
BibRef
Shomron, G.[Gil],
Banner, R.[Ron],
Shkolnik, M.[Moran],
Weiser, U.[Uri],
Thanks for Nothing: Predicting Zero-valued Activations with Lightweight
Convolutional Neural Networks,
ECCV20(X:234-250).
Springer DOI
2011
BibRef
Su, Z.[Zhuo],
Fang, L.P.[Lin-Pu],
Kang, W.X.[Wen-Xiong],
Hu, D.[Dewen],
Pietikäinen, M.[Matti],
Liu, L.[Li],
Dynamic Group Convolution for Accelerating Convolutional Neural
Networks,
ECCV20(VI:138-155).
Springer DOI
2011
BibRef
Yu, H.B.[Hai-Bao],
Han, Q.[Qi],
Li, J.B.[Jian-Bo],
Shi, J.P.[Jian-Ping],
Cheng, G.L.[Guang-Liang],
Fan, B.[Bin],
Search What You Want:
Barrier Penalty NAS for Mixed Precision Quantization,
ECCV20(IX:1-16).
Springer DOI
2011
BibRef
Xie, Z.D.[Zhen-Da],
Zhang, Z.[Zheng],
Zhu, X.[Xizhou],
Huang, G.[Gao],
Lin, S.[Stephen],
Spatially Adaptive Inference with Stochastic Feature Sampling and
Interpolation,
ECCV20(I:531-548).
Springer DOI
2011
Reduce superfluous computation in feature maps of CNNs.
BibRef
Phan, A.H.[Anh-Huy],
Sobolev, K.[Konstantin],
Sozykin, K.[Konstantin],
Ermilov, D.[Dmitry],
Gusak, J.[Julia],
Tichavský, P.[Petr],
Glukhov, V.[Valeriy],
Oseledets, I.[Ivan],
Cichocki, A.[Andrzej],
Stable Low-rank Tensor Decomposition for Compression of Convolutional
Neural Network,
ECCV20(XXIX: 522-539).
Springer DOI
2010
BibRef
Yong, H.W.[Hong-Wei],
Huang, J.Q.[Jian-Qiang],
Hua, X.S.[Xian-Sheng],
Zhang, L.[Lei],
Gradient Centralization: A New Optimization Technique for Deep Neural
Networks,
ECCV20(I:635-652).
Springer DOI
2011
BibRef
Yuan, Z.N.[Zhuo-Ning],
Guo, Z.S.[Zhi-Shuai],
Yu, X.T.[Xiao-Tian],
Wang, X.Y.[Xiao-Yu],
Yang, T.B.[Tian-Bao],
Accelerating Deep Learning with Millions of Classes,
ECCV20(XXIII:711-726).
Springer DOI
2011
BibRef
Yang, T.J.[Tao-Jiannan],
Zhu, S.J.[Si-Jie],
Chen, C.[Chen],
Yan, S.[Shen],
Zhang, M.[Mi],
Willis, A.[Andrew],
MutualNet: Adaptive Convnet via Mutual Learning from Network Width and
Resolution,
ECCV20(I:299-315).
Springer DOI
2011
Code, ConvNet.
WWW Link. Executable with dynamic resources.
BibRef
Marban, A.[Arturo],
Becking, D.[Daniel],
Wiedemann, S.[Simon],
Samek, W.[Wojciech],
Learning Sparse Ternary Neural Networks with Entropy-Constrained
Trained Ternarization (EC2T),
EDLCV20(3105-3113)
IEEE DOI
2008
Neural networks, Quantization (signal), Mathematical model,
Computational modeling, Compounds, Entropy, Histograms
BibRef
Langroudi, H.F.[Hamed F.],
Karia, V.[Vedant],
Gustafson, J.L.[John L.],
Kudithipudi, D.[Dhireesha],
Adaptive Posit: Parameter aware numerical format for deep learning
inference on the edge,
EDLCV20(3123-3131)
IEEE DOI
2008
Dynamic range, Neural networks, Quantization (signal),
Computational modeling, Machine learning, Adaptation models, Numerical models
BibRef
Vu, T.[Thanh],
Eder, M.[Marc],
Price, T.[True],
Frahm, J.M.[Jan-Michael],
Any-Width Networks,
EDLCV20(3018-3026)
IEEE DOI
2008
Adjust width as needed.
Training, Switches, Computer architecture, Standards, Convolution,
Inference algorithms
BibRef
Elsen, E.,
Dukhan, M.,
Gale, T.,
Simonyan, K.,
Fast Sparse ConvNets,
CVPR20(14617-14626)
IEEE DOI
2008
Kernel, Computer architecture, Sparse matrices, Neural networks,
Standards, Computational modeling, Acceleration
BibRef
Pad, P.[Pedram],
Narduzzi, S.[Simon],
Kündig, C.[Clément],
Türetken, E.[Engin],
Bigdeli, S.A.[Siavash A.],
Dunbar, L.A.[L. Andrea],
Efficient Neural Vision Systems Based on Convolutional Image
Acquisition,
CVPR20(12282-12291)
IEEE DOI
2008
Optical imaging, Convolution, Optical sensors, Kernel,
Optical computing, Optical network units, Optical filters
BibRef
Song, G.L.[Guang-Lu],
Liu, Y.[Yu],
Wang, X.G.[Xiao-Gang],
Revisiting the Sibling Head in Object Detector,
CVPR20(11560-11569)
IEEE DOI
2008
in R-CNN.
Task analysis, Proposals, Detectors, Feature extraction, Training,
Google, Sensitivity
BibRef
Wang, Q.L.[Qi-Long],
Wu, B.G.[Bang-Gu],
Zhu, P.F.[Peng-Fei],
Li, P.H.[Pei-Hua],
Zuo, W.M.[Wang-Meng],
Hu, Q.H.[Qing-Hua],
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural
Networks,
CVPR20(11531-11539)
IEEE DOI
2008
Convolution, Complexity theory, Dimensionality reduction, Kernel,
Adaptation models, Computational modeling, Convolutional neural networks
BibRef
Chen, Y.P.[Yin-Peng],
Dai, X.Y.[Xi-Yang],
Liu, M.C.[Meng-Chen],
Chen, D.D.[Dong-Dong],
Yuan, L.[Lu],
Liu, Z.C.[Zi-Cheng],
Dynamic Convolution: Attention Over Convolution Kernels,
CVPR20(11027-11036)
IEEE DOI
2008
Expand as needed.
Convolution, Kernel, Computer architecture, Neural networks,
Computational efficiency, Computational modeling, Training
BibRef
Xie, Q.Z.[Qi-Zhe],
Luong, M.T.[Minh-Thang],
Hovy, E.[Eduard],
Le, Q.V.[Quoc V.],
Self-Training With Noisy Student Improves ImageNet Classification,
CVPR20(10684-10695)
IEEE DOI
2008
Noise measurement, Training, Stochastic processes, Robustness,
Entropy, Data models, Image resolution
BibRef
Verelst, T.[Thomas],
Tuytelaars, T.[Tinne],
Dynamic Convolutions: Exploiting Spatial Sparsity for Faster
Inference,
CVPR20(2317-2326)
IEEE DOI
2008
Graphics processing units, Task analysis, Neural networks,
Computer architecture, Tensile stress, Complexity theory, Image coding
BibRef
Goli, N.,
Aamodt, T.M.,
ReSprop: Reuse Sparsified Backpropagation,
CVPR20(1545-1555)
IEEE DOI
2008
Training, Convolution, Acceleration, Convolutional neural networks,
Hardware, Convergence, Correlation
BibRef
Idelbayev, Y.,
Carreira-Perpiñán, M.Á.,
Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer,
CVPR20(8046-8056)
IEEE DOI
2008
Neural networks, Training, Cost function, Tensile stress,
Image coding, Matrix decomposition
BibRef
Haroush, M.,
Hubara, I.,
Hoffer, E.,
Soudry, D.,
The Knowledge Within: Methods for Data-Free Model Compression,
CVPR20(8491-8499)
IEEE DOI
2008
Training, Data models, Optimization, Computational modeling,
Calibration, Training data, Degradation
BibRef
Mordido, G.,
van Keirsbilck, M.,
Keller, A.,
Monte Carlo Gradient Quantization,
EDLCV20(3087-3095)
IEEE DOI
2008
Training, Quantization (signal), Monte Carlo methods, Convergence,
Neural networks, Heuristic algorithms, Image coding
BibRef
Wiedemann, S.,
Mehari, T.,
Kepp, K.,
Samek, W.,
Dithered backprop: A sparse and quantized backpropagation algorithm
for more efficient deep neural network training,
EDLCV20(3096-3104)
IEEE DOI
2008
Quantization (signal), Training, Mathematical model, Standards,
Neural networks, Convergence, Computational efficiency
BibRef
Rajagopal, A.,
Bouganis, C.,
Now that I can see, I can improve: Enabling data-driven finetuning of
CNNs on the edge,
EDLCV20(3058-3067)
IEEE DOI
2008
Adaptation models, Data models, Performance evaluation, Training,
Computational modeling, Topology
BibRef
Jiang, W.,
Wang, W.,
Liu, S.,
Structured Weight Unification and Encoding for Neural Network
Compression and Acceleration,
EDLCV20(3068-3076)
IEEE DOI
2008
Quantization (signal), Computational modeling, Encoding,
Image coding, Training, Acceleration, Predictive models
BibRef
Chatzikonstantinou, C.,
Papadopoulos, G.T.,
Dimitropoulos, K.,
Daras, P.,
Neural Network Compression Using Higher-Order Statistics and
Auxiliary Reconstruction Losses,
EDLCV20(3077-3086)
IEEE DOI
2008
Gaussian distribution, Training, Higher order statistics,
Measurement, Neural networks, Machine learning, Computational complexity
BibRef
Yang, H.,
Gui, S.,
Zhu, Y.,
Liu, J.,
Automatic Neural Network Compression by Sparsity-Quantization Joint
Learning: A Constrained Optimization-Based Approach,
CVPR20(2175-2185)
IEEE DOI
2008
Quantization (signal), Optimization, Computational modeling,
Tensile stress, Search problems, Neural networks, Image coding
BibRef
Saini, R.,
Jha, N.K.,
Das, B.,
Mittal, S.,
Mohan, C.K.,
ULSAM: Ultra-Lightweight Subspace Attention Module for Compact
Convolutional Neural Networks,
WACV20(1616-1625)
IEEE DOI
2006
Convolution, Computational modeling, Task analysis,
Computational efficiency, Feature extraction, Redundancy, Head
BibRef
Suau, X.,
Zappella, L.,
Apostoloff, N.,
Filter Distillation for Network Compression,
WACV20(3129-3138)
IEEE DOI
2006
Correlation, Training, Tensile stress,
Eigenvalues and eigenfunctions, Image coding, Decorrelation,
Principal component analysis
BibRef
Wang, M.,
Cai, H.,
Huang, X.,
Gong, M.,
ADNet: Adaptively Dense Convolutional Neural Networks,
WACV20(990-999)
IEEE DOI
2006
Adaptation models, Training, Convolution, Task analysis,
Computer architecture, Convolutional neural networks, Computational efficiency
BibRef
Oyedotun, O.K.,
Aouada, D.,
Ottersten, B.,
Structured Compression of Deep Neural Networks with Debiased Elastic
Group LASSO,
WACV20(2266-2275)
IEEE DOI
2006
Computational modeling, Feature extraction, Training,
Cost function, Training data, Task analysis, Neural networks
BibRef
Hsu, L.,
Chiu, C.,
Lin, K.,
An Energy-Aware Bit-Serial Streaming Deep Convolutional Neural
Network Accelerator,
ICIP19(4609-4613)
IEEE DOI
1910
CNNs, Hardware Accelerator, EnergyAware, Precision, Bit-Serial PE,
Streaming Dataflow
BibRef
Lu, J.[Jing],
Xu, C.F.[Chao-Fan],
Zhang, W.[Wei],
Duan, L.Y.[Ling-Yu],
Mei, T.[Tao],
Sampling Wisely: Deep Image Embedding by Top-K Precision Optimization,
ICCV19(7960-7969)
IEEE DOI
2004
convolutional neural nets, gradient methods, image processing,
learning (artificial intelligence),
Toy manufacturing industry
BibRef
Nascimento, M.G.D.,
Prisacariu, V.,
Fawcett, R.,
DSConv: Efficient Convolution Operator,
ICCV19(5147-5156)
IEEE DOI
2004
convolutional neural nets, neural net architecture,
statistical distributions, DSConv,
Training data
BibRef
Gu, J.,
Zhao, J.,
Jiang, X.,
Zhang, B.,
Liu, J.,
Guo, G.,
Ji, R.,
Bayesian Optimized 1-Bit CNNs,
ICCV19(4908-4916)
IEEE DOI
2004
Bayes methods, convolutional neural nets,
feature extraction, image classification, Indexes
BibRef
Chao, P.,
Kao, C.,
Ruan, Y.,
Huang, C.,
Lin, Y.,
HarDNet: A Low Memory Traffic Network,
ICCV19(3551-3560)
IEEE DOI
2004
feature extraction, image segmentation, neural nets,
object detection, neural network architectures, MACs,
Power demand
BibRef
Chen, Y.,
Fan, H.,
Xu, B.,
Yan, Z.,
Kalantidis, Y.,
Rohrbach, M.,
Yan, S.,
Feng, J.,
Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural
Networks With Octave Convolution,
ICCV19(3434-3443)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image classification, image resolution, neural net architecture, Kernel
BibRef
Phuong, M.[Mary],
Lampert, C.[Christoph],
Distillation-Based Training for Multi-Exit Architectures,
ICCV19(1355-1364)
IEEE DOI
2004
To terminate processing early.
convolutional neural nets, image classification, probability,
supervised learning, training procedure, multiexit architectures,
BibRef
Chen, Y.,
Liu, S.,
Shen, X.,
Jia, J.,
Fast Point R-CNN,
ICCV19(9774-9783)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image representation, object detection, solid modelling,
Detectors
BibRef
Gkioxari, G.,
Johnson, J.,
Malik, J.,
Mesh R-CNN,
ICCV19(9784-9794)
IEEE DOI
2004
computational geometry,
convolutional neural nets, feature extraction, graph theory,
Benchmark testing
BibRef
Vooturi, D.T.[Dharma Teja],
Varma, G.[Girish],
Kothapalli, K.[Kishore],
Dynamic Block Sparse Reparameterization of Convolutional Neural
Networks,
CEFRL19(3046-3053)
IEEE DOI
2004
Code, Convolutional Networks.
WWW Link. convolutional neural nets, image classification,
learning (artificial intelligence), dense neural networks, neural networks
BibRef
Dong, Z.,
Yao, Z.,
Gholami, A.,
Mahoney, M.,
Keutzer, K.,
HAWQ: Hessian AWare Quantization of Neural Networks With
Mixed-Precision,
ICCV19(293-302)
IEEE DOI
2004
image resolution, neural nets, quantisation (signal),
neural networks, mixed-precision quantization, deep networks,
Image resolution
BibRef
Gusak, J.,
Kholiavchenko, M.,
Ponomarev, E.,
Markeeva, L.,
Blagoveschensky, P.,
Cichocki, A.,
Oseledets, I.,
Automated Multi-Stage Compression of Neural Networks,
LPCV19(2501-2508)
IEEE DOI
2004
approximation theory, iterative methods, matrix decomposition,
neural nets, tensors, noniterative ones,
automated
BibRef
Yan, M.,
Zhao, M.,
Xu, Z.,
Zhang, Q.,
Wang, G.,
Su, Z.,
VarGFaceNet: An Efficient Variable Group Convolutional Neural Network
for Lightweight Face Recognition,
LFR19(2647-2654)
IEEE DOI
2004
Code, Face Recognition.
WWW Link. convolutional neural nets, face recognition,
learning (artificial intelligence), student model, teacher model,
knowledge distillation
BibRef
Hascoet, T.,
Febvre, Q.,
Zhuang, W.,
Ariki, Y.,
Takiguchi, T.,
Layer-Wise Invertibility for Extreme Memory Cost Reduction of CNN
Training,
NeruArch19(2049-2052)
IEEE DOI
2004
backpropagation, computer vision, convolutional neural nets,
graphics processing units, minimal training memory consumption,
invertible transformations
BibRef
Ghosh, R.,
Gupta, A.K.,
Motani, M.,
Investigating Convolutional Neural Networks using Spatial Orderness,
NeruArch19(2053-2056)
IEEE DOI
2004
convolutional neural nets, image classification,
statistical analysis, convolutional neural networks, CNN,
Opening the black box of CNNs
BibRef
Cruz Vargas, J.A.[Jesus Adan],
Zamora Esquivel, J.[Julio],
Tickoo, O.[Omesh],
Introducing Region Pooling Learning,
CADL20(714-724).
Springer DOI
2103
BibRef
Esquivel, J.Z.[Julio Zamora],
Cruz Vargas, J.A.[Jesus Adan],
Tickoo, O.[Omesh],
Second Order Bifurcating Methodology for Neural Network Training and
Topology Optimization,
CADL20(725-738).
Springer DOI
2103
BibRef
Zamora Esquivel, J.,
Cruz Vargas, A.,
Lopez Meyer, P.,
Tickoo, O.,
Adaptive Convolutional Kernels,
NeruArch19(1998-2005)
IEEE DOI
2004
computational complexity, computer vision,
convolutional neural nets, edge detection, feature extraction,
machine learning
BibRef
Köpüklü, O.,
Kose, N.,
Gunduz, A.,
Rigoll, G.,
Resource Efficient 3D Convolutional Neural Networks,
NeruArch19(1910-1919)
IEEE DOI
2004
convolutional neural nets, graphics processing units,
learning (artificial intelligence), UCF-101 dataset,
Action/Activity Recognition
BibRef
Ma, X.,
Triki, A.R.,
Berman, M.,
Sagonas, C.,
Cali, J.,
Blaschko, M.,
A Bayesian Optimization Framework for Neural Network Compression,
ICCV19(10273-10282)
IEEE DOI
2004
approximation theory, Bayes methods, data compression, neural nets,
optimisation, neural network compression, Training
BibRef
Yoo, K.M.,
Jo, H.S.,
Lee, H.,
Han, J.,
Lee, S.,
Stochastic Relational Network,
SDL-CV19(788-792)
IEEE DOI
2004
computational complexity, data visualisation,
inference mechanisms, learning (artificial intelligence),
gradient estimator
BibRef
Rannen-Triki, A.,
Berman, M.,
Kolmogorov, V.,
Blaschko, M.B.,
Function Norms for Neural Networks,
SDL-CV19(748-752)
IEEE DOI
2004
computational complexity, function approximation,
learning (artificial intelligence), neural nets,
Regularization
BibRef
Han, D.,
Yoo, H.,
Direct Feedback Alignment Based Convolutional Neural Network Training
for Low-Power Online Learning Processor,
LPCV19(2445-2452)
IEEE DOI
2004
backpropagation, convolutional neural nets,
learning (artificial intelligence), DFA algorithm, CNN training,
Back propagation
BibRef
Yan, X.,
Chen, Z.,
Xu, A.,
Wang, X.,
Liang, X.,
Lin, L.,
Meta R-CNN: Towards General Solver for Instance-Level Low-Shot
Learning,
ICCV19(9576-9585)
IEEE DOI
2004
Code, Learning.
HTML Version. computer vision, convolutional neural nets, image representation,
image sampling, image segmentation, Object recognition
BibRef
Dai, X.L.[Xiao-Liang],
Zhang, P.Z.[Pei-Zhao],
Wu, B.[Bichen],
Yin, H.X.[Hong-Xu],
Sun, F.[Fei],
Wang, Y.[Yanghan],
Dukhan, M.[Marat],
Hu, Y.Q.[Yun-Qing],
Wu, Y.M.[Yi-Ming],
Jia, Y.Q.[Yang-Qing],
Vajda, P.[Peter],
Uyttendaele, M.[Matt],
Jha, N.K.[Niraj K.],
ChamNet: Towards Efficient Network Design Through Platform-Aware Model
Adaptation,
CVPR19(11390-11399).
IEEE DOI
2002
BibRef
Gao, S.Q.[Shang-Qian],
Deng, C.[Cheng],
Huang, H.[Heng],
Cross Domain Model Compression by Structurally Weight Sharing,
CVPR19(8965-8974).
IEEE DOI
2002
BibRef
Yang, J.[Jiwei],
Shen, X.[Xu],
Xing, J.[Jun],
Tian, X.M.[Xin-Mei],
Li, H.Q.[Hou-Qiang],
Deng, B.[Bing],
Huang, J.Q.[Jian-Qiang],
Hua, X.S.[Xian-Sheng],
Quantization Networks,
CVPR19(7300-7308).
IEEE DOI
2002
BibRef
Liu, Y.J.[Ya-Jing],
Tian, X.M.[Xin-Mei],
Li, Y.[Ya],
Xiong, Z.W.[Zhi-Wei],
Wu, F.[Feng],
Compact Feature Learning for Multi-Domain Image Classification,
CVPR19(7186-7194).
IEEE DOI
2002
BibRef
Li, J.[Jiashi],
Qi, Q.[Qi],
Wang, J.Y.[Jing-Yu],
Ge, C.[Ce],
Li, Y.J.[Yu-Jian],
Yue, Z.Z.[Zhang-Zhang],
Sun, H.F.[Hai-Feng],
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural
Networks,
CVPR19(7039-7048).
IEEE DOI
2002
BibRef
Kim, H.[Hyeji],
Khan, M.U.K.[Muhammad Umar Karim],
Kyung, C.M.[Chong-Min],
Efficient Neural Network Compression,
CVPR19(12561-12569).
IEEE DOI
2002
BibRef
Minnehan, B.[Breton],
Savakis, A.[Andreas],
Cascaded Projection: End-To-End Network Compression and Acceleration,
CVPR19(10707-10716).
IEEE DOI
2002
BibRef
Lin, Y.H.[Yu-Hsun],
Chou, C.N.[Chun-Nan],
Chang, E.Y.[Edward Y.],
MBS: Macroblock Scaling for CNN Model Reduction,
CVPR19(9109-9117).
IEEE DOI
2002
BibRef
Gao, Y.[Yuan],
Ma, J.[Jiayi],
Zhao, M.B.[Ming-Bo],
Liu, W.[Wei],
Yuille, A.L.[Alan L.],
NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural
Discriminative Dimensionality Reduction,
CVPR19(3200-3209).
IEEE DOI
2002
BibRef
Wang, H.Y.[Hui-Yu],
Kembhavi, A.[Aniruddha],
Farhadi, A.[Ali],
Yuille, A.L.[Alan L.],
Rastegari, M.[Mohammad],
ELASTIC: Improving CNNs With Dynamic Scaling Policies,
CVPR19(2253-2262).
IEEE DOI
2002
BibRef
Cao, S.J.[Shi-Jie],
Ma, L.X.[Ling-Xiao],
Xiao, W.C.[Wen-Cong],
Zhang, C.[Chen],
Liu, Y.X.[Yun-Xin],
Zhang, L.T.[Lin-Tao],
Nie, L.S.[Lan-Shun],
Yang, Z.[Zhi],
SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity
Through Low-Bit Quantization,
CVPR19(11208-11217).
IEEE DOI
2002
BibRef
Yang, H.C.[Hai-Chuan],
Zhu, Y.[Yuhao],
Liu, J.[Ji],
ECC: Platform-Independent Energy-Constrained Deep Neural Network
Compression via a Bilinear Regression Model,
CVPR19(11198-11207).
IEEE DOI
2002
BibRef
Gong, L.[Liyu],
Cheng, Q.A.[Qi-Ang],
Exploiting Edge Features for Graph Neural Networks,
CVPR19(9203-9211).
IEEE DOI
2002
BibRef
Mehta, S.[Sachin],
Rastegari, M.[Mohammad],
Shapiro, L.[Linda],
Hajishirzi, H.[Hannaneh],
ESPNetv2: A Light-Weight, Power Efficient, and General Purpose
Convolutional Neural Network,
CVPR19(9182-9192).
IEEE DOI
2002
BibRef
Kossaifi, J.[Jean],
Bulat, A.[Adrian],
Tzimiropoulos, G.[Georgios],
Pantic, M.[Maja],
T-Net: Parametrizing Fully Convolutional Nets With a Single High-Order
Tensor,
CVPR19(7814-7823).
IEEE DOI
2002
BibRef
Chen, W.J.[Wei-Jie],
Xie, D.[Di],
Zhang, Y.[Yuan],
Pu, S.L.[Shi-Liang],
All You Need Is a Few Shifts: Designing Efficient Convolutional Neural
Networks for Image Classification,
CVPR19(7234-7243).
IEEE DOI
2002
BibRef
Georgiadis, G.[Georgios],
Accelerating Convolutional Neural Networks via Activation Map
Compression,
CVPR19(7078-7088).
IEEE DOI
2002
BibRef
Zhu, S.L.[Shi-Lin],
Dong, X.[Xin],
Su, H.[Hao],
Binary Ensemble Neural Network: More Bits per Network or More Networks
per Bit?,
CVPR19(4918-4927).
IEEE DOI
2002
BibRef
Jung, S.[Sangil],
Son, C.Y.[Chang-Yong],
Lee, S.[Seohyung],
Son, J.[Jinwoo],
Han, J.J.[Jae-Joon],
Kwak, Y.[Youngjun],
Hwang, S.J.[Sung Ju],
Choi, C.K.[Chang-Kyu],
Learning to Quantize Deep Networks by Optimizing Quantization Intervals
With Task Loss,
CVPR19(4345-4354).
IEEE DOI
2002
BibRef
Li, T.[Tuanhui],
Wu, B.Y.[Bao-Yuan],
Yang, Y.[Yujiu],
Fan, Y.[Yanbo],
Zhang, Y.[Yong],
Liu, W.[Wei],
Compressing Convolutional Neural Networks via Factorized Convolutional
Filters,
CVPR19(3972-3981).
IEEE DOI
2002
BibRef
Kim, E.[Eunwoo],
Ahn, C.[Chanho],
Torr, P.H.S.[Philip H.S.],
Oh, S.H.[Song-Hwai],
Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks,
CVPR19(2705-2714).
IEEE DOI
2002
BibRef
He, T.[Tong],
Zhang, Z.[Zhi],
Zhang, H.[Hang],
Zhang, Z.Y.[Zhong-Yue],
Xie, J.Y.[Jun-Yuan],
Li, M.[Mu],
Bag of Tricks for Image Classification with Convolutional Neural
Networks,
CVPR19(558-567).
IEEE DOI
2002
BibRef
Wang, X.J.[Xi-Jun],
Kan, M.[Meina],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
Fully Learnable Group Convolution for Acceleration of Deep Neural
Networks,
CVPR19(9041-9050).
IEEE DOI
2002
BibRef
Li, Y.[Yuchao],
Lin, S.H.[Shao-Hui],
Zhang, B.C.[Bao-Chang],
Liu, J.Z.[Jian-Zhuang],
Doermann, D.[David],
Wu, Y.[Yongjian],
Huang, F.Y.[Fei-Yue],
Ji, R.R.[Rong-Rong],
Exploiting Kernel Sparsity and Entropy for Interpretable CNN
Compression,
CVPR19(2795-2804).
IEEE DOI
2002
BibRef
Zhao, R.[Ritchie],
Hu, Y.[Yuwei],
Dotzel, J.[Jordan],
de Sa, C.[Christopher],
Zhang, Z.[Zhiru],
Building Efficient Deep Neural Networks With Unitary Group Convolutions,
CVPR19(11295-11304).
IEEE DOI
2002
BibRef
Qiao, S.Y.[Si-Yuan],
Lin, Z.[Zhe],
Zhang, J.M.[Jian-Ming],
Yuille, A.L.[Alan L.],
Neural Rejuvenation: Improving Deep Network Training by Enhancing
Computational Resource Utilization,
CVPR19(61-71).
IEEE DOI
2002
BibRef
Tagaris, T.[Thanos],
Sdraka, M.[Maria],
Stafylopatis, A.[Andreas],
High-Resolution Class Activation Mapping,
ICIP19(4514-4518)
IEEE DOI
1910
Discriminative localization, Class Activation Map, Deep Learning,
Convolutional Neural Networks
BibRef
Lubana, E.S.,
Dick, R.P.,
Aggarwal, V.,
Pradhan, P.M.,
Minimalistic Image Signal Processing for Deep Learning Applications,
ICIP19(4165-4169)
IEEE DOI
1910
Deep learning accelerators, Image signal processor, RAW images, Covariate shift
BibRef
Saha, A.[Avinab],
Ram, K.S.[K. Sai],
Mukhopadhyay, J.[Jayanta],
Das, P.P.[Partha Pratim],
Patra, A.[Amit],
Fitness Based Layer Rank Selection Algorithm for Accelerating CNNs by
Candecomp/Parafac (CP) Decompositions,
ICIP19(3402-3406)
IEEE DOI
1910
CP Decompositions, FLRS, Accelerating CNNs, Rank Selection, Compression
BibRef
Xu, D.,
Lee, M.L.,
Hsu, W.,
Patch-Level Regularizer for Convolutional Neural Network,
ICIP19(3232-3236)
IEEE DOI
1910
BibRef
Yoshioka, K.,
Lee, E.,
Wong, S.,
Horowitz, M.,
Dataset Culling: Towards Efficient Training of Distillation-Based
Domain Specific Models,
ICIP19(3237-3241)
IEEE DOI
1910
Object Detection, Training Efficiency, Distillation,
Dataset Culling, Deep Learning
BibRef
Kim, M.,
Park, C.,
Kim, S.,
Hong, T.,
Ro, W.W.,
Efficient Dilated-Winograd Convolutional Neural Networks,
ICIP19(2711-2715)
IEEE DOI
1910
Image processing and computer vision, dilated convolution,
Winograd convolution, neural network, graphics processing unit
BibRef
Saporta, A.,
Chen, Y.,
Blot, M.,
Cord, M.,
Reve: Regularizing Deep Learning with Variational Entropy Bound,
ICIP19(1610-1614)
IEEE DOI
1910
Deep learning, regularization, invariance, information theory,
image understanding
BibRef
Zhao, W.,
Yi, R.,
Liu, Y.,
An Adaptive Filter for Deep Learning Networks on Large-Scale Point
Cloud,
ICIP19(1620-1624)
IEEE DOI
1910
Large-scale point cloud filtering, super-points, deep learning
BibRef
Mitschke, N.,
Heizmann, M.,
Noffz, K.,
Wittmann, R.,
A Fixed-Point Quantization Technique for Convolutional Neural
Networks Based on Weight Scaling,
ICIP19(3836-3840)
IEEE DOI
1910
CNNs, Fixed Point Quantization, Image Processing, Machine Vision, Deep Learning
BibRef
Choi, Y.,
Choi, J.,
Moon, H.,
Lee, J.,
Chang, J.,
Accelerating Framework for Simultaneous Optimization of Model
Architectures and Training Hyperparameters,
ICIP19(3831-3835)
IEEE DOI
1910
Deep Learning, Model Hyperparameters
BibRef
Zhe, W.,
Lin, J.,
Chandrasekhar, V.,
Girod, B.,
Optimizing the Bit Allocation for Compression of Weights and
Activations of Deep Neural Networks,
ICIP19(3826-3830)
IEEE DOI
1910
Deep Learning, Coding, Compression
BibRef
Lei, X.,
Liu, L.,
Zhou, Z.,
Sun, H.,
Zheng, N.,
Exploring Hardware Friendly Bottleneck Architecture in CNN for
Embedded Computing Systems,
ICIP19(4180-4184)
IEEE DOI
1910
Lightweight/Mobile CNN model, Model optimization,
Embedded System, Hardware Accelerating.
BibRef
Geng, X.[Xue],
Lin, J.[Jie],
Zhao, B.[Bin],
Kong, A.[Anmin],
Aly, M.M.S.[Mohamed M. Sabry],
Chandrasekhar, V.[Vijay],
Hardware-Aware Softmax Approximation for Deep Neural Networks,
ACCV18(IV:107-122).
Springer DOI
1906
BibRef
Chen, W.C.[Wei-Chun],
Chang, C.C.[Chia-Che],
Lee, C.R.[Che-Rung],
Knowledge Distillation with Feature Maps for Image Classification,
ACCV18(III:200-215).
Springer DOI
1906
BibRef
Groh, F.[Fabian],
Wieschollek, P.[Patrick],
Lensch, H.P.A.[Hendrik P. A.],
Flex-Convolution,
ACCV18(I:105-122).
Springer DOI
1906
BibRef
Yang, L.[Lu],
Song, Q.[Qing],
Li, Z.X.[Zuo-Xin],
Wu, Y.Q.[Ying-Qi],
Li, X.J.[Xiao-Jie],
Hu, M.J.[Meng-Jie],
Cross Connected Network for Efficient Image Recognition,
ACCV18(I:56-71).
Springer DOI
1906
BibRef
Ignatov, A.[Andrey],
Timofte, R.[Radu],
Chou, W.[William],
Wang, K.[Ke],
Wu, M.[Max],
Hartley, T.[Tim],
Van Gool, L.J.[Luc J.],
AI Benchmark: Running Deep Neural Networks on Android Smartphones,
PerceptualRest18(V:288-314).
Springer DOI
1905
BibRef
Li, X.,
Zhang, S.,
Jiang, B.,
Qi, Y.,
Chuah, M.C.,
Bi, N.,
DAC: Data-Free Automatic Acceleration of Convolutional Networks,
WACV19(1598-1606)
IEEE DOI
1904
convolutional neural nets, image classification,
Internet of Things, learning (artificial intelligence),
Deep learning
BibRef
He, Y.,
Liu, X.,
Zhong, H.,
Ma, Y.,
AddressNet: Shift-Based Primitives for Efficient Convolutional Neural
Networks,
WACV19(1213-1222)
IEEE DOI
1904
convolutional neural nets, coprocessors,
learning (artificial intelligence), parallel algorithms,
Fuses
BibRef
He, Z.Z.[Zhe-Zhi],
Gong, B.Q.[Bo-Qing],
Fan, D.L.[De-Liang],
Optimize Deep Convolutional Neural Network with Ternarized Weights
and High Accuracy,
WACV19(913-921)
IEEE DOI
1904
Reduce weights to -1, 0, +1.
convolutional neural nets, embedded systems,
image classification, image coding, image representation,
Hardware
BibRef
Bicici, U.C.[Ufuk Can],
Keskin, C.[Cem],
Akarun, L.[Lale],
Conditional Information Gain Networks,
ICPR18(1390-1395)
IEEE DOI
1812
Decision trees, Neural networks, Computational modeling, Training,
Routing, Vegetation, Probability distribution
BibRef
Aldana, R.[Rodrigo],
Campos-Macías, L.[Leobardo],
Zamora, J.[Julio],
Gomez-Gutierrez, D.[David],
Cruz, A.[Adan],
Dynamic Learning Rate for Neural Networks:
A Fixed-Time Stability Approach,
ICPR18(1378-1383)
IEEE DOI
1812
Training, Artificial neural networks, Approximation algorithms,
Optimization, Pattern recognition, Heuristic algorithms, Lyapunov methods
BibRef
Kung, H.T.,
McDanel, B.,
Zhang, S.Q.,
Adaptive Tiling: Applying Fixed-size Systolic Arrays To Sparse
Convolutional Neural Networks,
ICPR18(1006-1011)
IEEE DOI
1812
Sparse matrices, Arrays, Convolution, Adaptive arrays,
Microprocessors, Adaptation models
BibRef
Grelsson, B.,
Felsberg, M.,
Improved Learning in Convolutional Neural Networks with Shifted
Exponential Linear Units (ShELUs),
ICPR18(517-522)
IEEE DOI
1812
convolution, feedforward neural nets, learning (artificial intelligence).
BibRef
Zheng, W.,
Zhang, Z.,
Accelerating the Classification of Very Deep Convolutional Network by
A Cascading Approach,
ICPR18(355-360)
IEEE DOI
1812
computational complexity, convolution, entropy,
feedforward neural nets, image classification,
Measurement uncertainty
BibRef
Zhong, G.,
Yao, H.,
Zhou, H.,
Merging Neurons for Structure Compression of Deep Networks,
ICPR18(1462-1467)
IEEE DOI
1812
Neurons, Neural networks, Merging, Computer architecture,
Matrix decomposition, Mathematical model, Prototypes
BibRef
Bhowmik, P.[Pankaj],
Pantho, M.J.H.[M. Jubaer Hossain],
Asadinia, M.[Marjan],
Bobda, C.[Christophe],
Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic
Architecture for Smart Image Sensor,
ECVW18(786-7868)
IEEE DOI
1812
Computer architecture, Image sensors, Visualization,
Program processors, Clocks, Image processing
BibRef
Aggarwal, V.[Vaneet],
Wang, W.L.[Wen-Lin],
Eriksson, B.[Brian],
Sun, Y.F.[Yi-Fan],
Wan, W.Q.[Wen-Qi],
Wide Compression: Tensor Ring Nets,
CVPR18(9329-9338)
IEEE DOI
1812
Neural networks, Image coding, Shape, Merging, Computer architecture
BibRef
Ren, M.Y.[Meng-Ye],
Pokrovsky, A.[Andrei],
Yang, B.[Bin],
Urtasun, R.[Raquel],
SBNet: Sparse Blocks Network for Fast Inference,
CVPR18(8711-8720)
IEEE DOI
1812
Convolution, Kernel, Shape, Object detection, Task analysis
BibRef
Xie, G.T.[Guo-Tian],
Wang, J.D.[Jing-Dong],
Zhang, T.[Ting],
Lai, J.H.[Jian-Huang],
Hong, R.[Richang],
Qi, G.J.[Guo-Jun],
Interleaved Structured Sparse Convolutional Neural Networks,
CVPR18(8847-8856)
IEEE DOI
1812
Convolution, Kernel, Sparse matrices, Redundancy,
Computational modeling, Computer architecture, Computational complexity
BibRef
Kim, E.[Eunwoo],
Ahn, C.[Chanho],
Oh, S.H.[Song-Hwai],
NestedNet: Learning Nested Sparse Structures in Deep Neural Networks,
CVPR18(8669-8678)
IEEE DOI
1812
Task analysis, Knowledge engineering, Neural networks,
Computer architecture, Optimization, Redundancy
BibRef
Bulò, S.R.[Samuel Rota],
Porzi, L.[Lorenzo],
Kontschieder, P.[Peter],
In-place Activated BatchNorm for Memory-Optimized Training of DNNs,
CVPR18(5639-5647)
IEEE DOI
1812
Reduce memory needs.
Training, Buffer storage, Checkpointing, Memory management,
Standards, Semantics
BibRef
Zhang, D.,
clcNet: Improving the Efficiency of Convolutional Neural Network
Using Channel Local Convolutions,
CVPR18(7912-7919)
IEEE DOI
1812
Kernel, Computational modeling, Computational efficiency,
Convolutional neural networks, Stacking, Computer vision
BibRef
Zhuang, B.,
Shen, C.,
Tan, M.,
Liu, L.,
Reid, I.D.,
Towards Effective Low-Bitwidth Convolutional Neural Networks,
CVPR18(7920-7928)
IEEE DOI
1812
Quantization (signal), Training, Neural networks, Optimization,
Zirconium, Hardware, Convolution
BibRef
Kuen, J.,
Kong, X.,
Lin, Z.,
Wang, G.,
Yin, J.,
See, S.,
Tan, Y.,
Stochastic Downsampling for Cost-Adjustable Inference and Improved
Regularization in Convolutional Networks,
CVPR18(7929-7938)
IEEE DOI
1812
Training, Computational modeling, Computational efficiency,
Stochastic processes, Visualization, Network architecture, Computer vision
BibRef
Shazeer, N.,
Fatahalian, K.,
Mark, W.R.,
Mullapudi, R.T.,
HydraNets: Specialized Dynamic Architectures for Efficient Inference,
CVPR18(8080-8089)
IEEE DOI
1812
Computer architecture, Training, Computational modeling,
Task analysis, Computational efficiency, Optimization, Routing
BibRef
Rebuffi, S.,
Vedaldi, A.,
Bilen, H.,
Efficient Parametrization of Multi-domain Deep Neural Networks,
CVPR18(8119-8127)
IEEE DOI
1812
Task analysis, Neural networks, Adaptation models,
Feature extraction, Visualization, Computational modeling, Standards
BibRef
Cao, S.[Sen],
Liu, Y.Z.[Ya-Zhou],
Zhou, C.X.[Chang-Xin],
Sun, Q.S.[Quan-Sen],
Pongsak, L.S.[La-Sang],
Shen, S.M.[Sheng Mei],
ThinNet: An Efficient Convolutional Neural Network for Object
Detection,
ICPR18(836-841)
IEEE DOI
1812
Convolution, Computational modeling, Object detection,
Neural networks, Computer architecture, Training,
ThinNet
BibRef
Kobayashi, T.,
Analyzing Filters Toward Efficient ConvNet,
CVPR18(5619-5628)
IEEE DOI
1812
Convolution, Feature extraction, Neurons, Image reconstruction,
Visualization, Shape, Computer vision
BibRef
Chou, Y.,
Chan, Y.,
Lee, J.,
Chiu, C.,
Chen, C.,
Merging Deep Neural Networks for Mobile Devices,
EfficientDeep18(1767-17678)
IEEE DOI
1812
Task analysis, Convolution, Merging, Computational modeling,
Neural networks, Kernel, Computer architecture
BibRef
Zhang, Q.,
Zhang, M.,
Wang, M.,
Sui, W.,
Meng, C.,
Yang, J.,
Kong, W.,
Cui, X.,
Lin, W.,
Efficient Deep Learning Inference Based on Model Compression,
EfficientDeep18(1776-17767)
IEEE DOI
1812
Computational modeling, Convolution, Adaptation models,
Image edge detection, Quantization (signal), Kernel
BibRef
Faraone, J.,
Fraser, N.,
Blott, M.,
Leong, P.H.W.,
SYQ: Learning Symmetric Quantization for Efficient Deep Neural
Networks,
CVPR18(4300-4309)
IEEE DOI
1812
Quantization (signal), Hardware, Symmetric matrices, Training,
Complexity theory, Neural networks, Field programmable gate arrays
BibRef
Ma, N.N.[Ning-Ning],
Zhang, X.Y.[Xiang-Yu],
Zheng, H.T.[Hai-Tao],
Sun, J.[Jian],
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture
Design,
ECCV18(XIV: 122-138).
Springer DOI
1810
BibRef
Zhang, X.Y.[Xiang-Yu],
Zhou, X.,
Lin, M.,
Sun, J.,
ShuffleNet: An Extremely Efficient Convolutional Neural Network for
Mobile Devices,
CVPR18(6848-6856)
IEEE DOI
1812
Convolution, Complexity theory, Computer architecture,
Mobile handsets, Computational modeling, Task analysis, Neural networks
BibRef
Prabhu, A.[Ameya],
Varma, G.[Girish],
Namboodiri, A.[Anoop],
Deep Expander Networks: Efficient Deep Networks from Graph Theory,
ECCV18(XIII: 20-36).
Springer DOI
1810
BibRef
Freeman, I.,
Roese-Koerner, L.,
Kummert, A.,
EffNet: An Efficient Structure for Convolutional Neural Networks,
ICIP18(6-10)
IEEE DOI
1809
Convolution, Computational modeling, Optimization, Hardware, Kernel,
Data compression, Convolutional neural networks,
real-time inference
BibRef
Elordi, U.[Unai],
Unzueta, L.[Luis],
Arganda-Carreras, I.[Ignacio],
Otaegui, O.[Oihana],
How Can Deep Neural Networks Be Generated Efficiently for Devices with
Limited Resources?,
AMDO18(24-33).
Springer DOI
1807
BibRef
Lee, T.K.[Tae Kwan],
Baddar, W.J.[Wissam J.],
Kim, S.T.[Seong Tae],
Ro, Y.M.[Yong Man],
Convolution with Logarithmic Filter Groups for Efficient Shallow CNN,
MMMod18(I:117-129).
Springer DOI
1802
filter grouping in convolution layers.
BibRef
Véniat, T.[Tom],
Denoyer, L.[Ludovic],
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super
Networks,
CVPR18(3492-3500)
IEEE DOI
1812
Computational modeling, Computer architecture,
Stochastic processes, Neural networks, Fabrics, Predictive models, Computer vision
BibRef
Huang, G.[Gao],
Liu, Z.[Zhuang],
van der Maaten, L.[Laurens],
Weinberger, K.Q.[Kilian Q.],
Densely Connected Convolutional Networks,
CVPR17(2261-2269)
IEEE DOI
1711
Award, CVPR.
Convolution, Convolutional codes, Network architecture,
Neural networks, Road transportation, Training
BibRef
Huang, G.[Gao],
Sun, Y.[Yu],
Liu, Z.[Zhuang],
Sedra, D.[Daniel],
Weinberger, K.Q.[Kilian Q.],
Deep Networks with Stochastic Depth,
ECCV16(IV: 646-661).
Springer DOI
1611
BibRef
Huang, G.[Gao],
Liu, S.C.[Shi-Chen],
van der Maaten, L.[Laurens],
Weinberger, K.Q.[Kilian Q.],
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions,
CVPR18(2752-2761)
IEEE DOI
1812
CNN on a phone.
Training, Computer architecture, Computational modeling, Standards,
Mobile handsets, Network architecture, Indexes
BibRef
Zhao, G.,
Zhang, Z.,
Guan, H.,
Tang, P.,
Wang, J.,
Rethinking ReLU to Train Better CNNs,
ICPR18(603-608)
IEEE DOI
1812
Convolution, Tensile stress, Network architecture,
Computational efficiency, Computational modeling, Pattern recognition
BibRef
Chan, M.,
Scarafoni, D.,
Duarte, R.,
Thornton, J.,
Skelly, L.,
Learning Network Architectures of Deep CNNs Under Resource
Constraints,
EfficientDeep18(1784-17847)
IEEE DOI
1812
Computer architecture, Computational modeling, Optimization,
Adaptation models, Network architecture, Linear programming, Training
BibRef
Frickenstein, A.,
Unger, C.,
Stechele, W.,
Resource-Aware Optimization of DNNs for Embedded Applications,
CRV19(17-24)
IEEE DOI
1908
Optimization, Hardware, Computational modeling,
Quantization (signal), Training, Sensitivity, Autonomous vehicles,
CNN
BibRef
Bhagoji, A.N.[Arjun Nitin],
He, W.[Warren],
Li, B.[Bo],
Song, D.[Dawn],
Practical Black-Box Attacks on Deep Neural Networks Using Efficient
Query Mechanisms,
ECCV18(XII: 158-174).
Springer DOI
1810
BibRef
Kuen, J.[Jason],
Kong, X.F.[Xiang-Fei],
Wang, G.[Gang],
Tan, Y.P.[Yap-Peng],
DelugeNets: Deep Networks with Efficient and Flexible Cross-Layer
Information Inflows,
CEFR-LCV17(958-966)
IEEE DOI
1802
Complexity theory, Computational modeling,
Convolution, Correlation, Neural networks
BibRef
Singh, A.,
Kingsbury, N.G.,
Efficient Convolutional Network Learning Using Parametric Log Based
Dual-Tree Wavelet ScatterNet,
CEFR-LCV17(1140-1147)
IEEE DOI
1802
Computer architecture, Feature extraction,
Personal area networks, Standards, Training
BibRef
Liu, Z.,
Li, J.,
Shen, Z.,
Huang, G.,
Yan, S.,
Zhang, C.,
Learning Efficient Convolutional Networks through Network Slimming,
ICCV17(2755-2763)
IEEE DOI
1802
convolution, image classification,
learning (artificial intelligence), neural nets, CNNs,
Training
BibRef
Ioannou, Y.,
Robertson, D.,
Cipolla, R.,
Criminisi, A.,
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups,
CVPR17(5977-5986)
IEEE DOI
1711
Computational complexity, Computational modeling,
Computer architecture, Convolution, Graphics processing units,
Neural networks, Training
BibRef
Lin, J.H.,
Xing, T.,
Zhao, R.,
Zhang, Z.,
Srivastava, M.,
Tu, Z.,
Gupta, R.K.,
Binarized Convolutional Neural Networks with Separable Filters for
Efficient Hardware Acceleration,
ECVW17(344-352)
IEEE DOI
1709
Backpropagation, Convolution, Field programmable gate arrays,
Filtering theory, Hardware, Kernel, Training
BibRef
Zhang, X.,
Li, Z.,
Loy, C.C.,
Lin, D.,
PolyNet: A Pursuit of Structural Diversity in Very Deep Networks,
CVPR17(3900-3908)
IEEE DOI
1711
Agriculture, Benchmark testing, Computational efficiency,
Diversity reception, Network architecture, Systematics, Training
BibRef
Yan, S.,
Keynotes: Deep learning for visual understanding:
Effectiveness vs. efficiency,
VCIP16(1-1)
IEEE DOI
1701
BibRef
Karmakar, P.,
Teng, S.W.,
Zhang, D.,
Liu, Y.,
Lu, G.,
Improved Tamura Features for Image Classification Using Kernel Based
Descriptors,
DICTA17(1-7)
IEEE DOI
1804
BibRef
And:
Improved Kernel Descriptors for Effective and Efficient Image
Classification,
DICTA17(1-8)
IEEE DOI
1804
BibRef
Earlier:
Combining Pyramid Match Kernel and Spatial Pyramid for Image
Classification,
DICTA16(1-8)
IEEE DOI
1701
Gabor filters, image colour analysis, image segmentation,
feature extraction, image classification,
image representation, effective image classification
BibRef
Karmakar, P.,
Teng, S.W.,
Lu, G.,
Zhang, D.,
Rotation Invariant Spatial Pyramid Matching for Image Classification,
DICTA15(1-8)
IEEE DOI
1603
image classification
BibRef
Opitz, M.[Michael],
Possegger, H.[Horst],
Bischof, H.[Horst],
Efficient Model Averaging for Deep Neural Networks,
ACCV16(II: 205-220).
Springer DOI
1704
BibRef
Zhang, Z.M.[Zi-Ming],
Chen, Y.T.[Yu-Ting],
Saligrama, V.[Venkatesh],
Efficient Training of Very Deep Neural Networks for Supervised
Hashing,
CVPR16(1487-1495)
IEEE DOI
1612
BibRef
Smith, L.N.,
Cyclical Learning Rates for Training Neural Networks,
WACV17(464-472)
IEEE DOI
1609
Computational efficiency, Computer architecture, Neural networks,
Schedules, Training, Tuning
BibRef
Cardona-Escobar, A.F.[Andrés F.],
Giraldo-Forero, A.F.[Andrés F.],
Castro-Ospina, A.E.[Andrés E.],
Jaramillo-Garzón, J.A.[Jorge A.],
Efficient Hyperparameter Optimization in Convolutional Neural Networks
by Learning Curves Prediction,
CIARP17(143-151).
Springer DOI
1802
BibRef
Smith, L.N.,
Hand, E.M.,
Doster, T.,
Gradual DropIn of Layers to Train Very Deep Neural Networks,
CVPR16(4763-4771)
IEEE DOI
1612
BibRef
Pasquet, J.,
Chaumont, M.,
Subsol, G.,
Derras, M.,
Speeding-up a convolutional neural network by connecting an SVM
network,
ICIP16(2286-2290)
IEEE DOI
1610
Computational efficiency
BibRef
Park, W.S.,
Kim, M.,
CNN-based in-loop filtering for coding efficiency improvement,
IVMSP16(1-5)
IEEE DOI
1608
Convolution
BibRef
Moons, B.[Bert],
de Brabandere, B.[Bert],
Van Gool, L.J.[Luc J.],
Verhelst, M.[Marian],
Energy-efficient ConvNets through approximate computing,
WACV16(1-8)
IEEE DOI
1606
Approximation algorithms
BibRef
Li, N.,
Takaki, S.,
Tomioka, Y.,
Kitazawa, H.,
A multistage dataflow implementation of a Deep Convolutional Neural
Network based on FPGA for high-speed object recognition,
Southwest16(165-168)
IEEE DOI
1605
Acceleration
BibRef
Hsu, F.C.,
Gubbi, J.,
Palaniswami, M.,
Learning Efficiently- The Deep CNNs-Tree Network,
DICTA15(1-7)
IEEE DOI
1603
learning (artificial intelligence)
BibRef
Highlander, T.[Tyler],
Rodriguez, A.[Andres],
Very Efficient Training of Convolutional Neural Networks using Fast
Fourier Transform and Overlap-and-Add,
BMVC15(xx-yy).
DOI Link
1601
BibRef
Zou, X.Y.[Xiao-Yi],
Xu, X.M.[Xiang-Min],
Qing, C.M.[Chun-Mei],
Xing, X.F.[Xiao-Fen],
High speed deep networks based on Discrete Cosine Transformation,
ICIP14(5921-5925)
IEEE DOI
1502
Accuracy
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Pruning.