14.5.7.5.4 Efficient Implementations of Convolutional Neural Networks

CNN. Efficient Implementation. Efficiency issues, low power, etc. See also Neural Net Pruning.

Cao, Y.Q.[Yong-Qiang], Chen, Y.[Yang], Khosla, D.[Deepak],
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition,
IJCV(113), No. 1, May 2015, pp. 54-66.
Springer DOI 1506
BibRef

Zhang, X.Y.[Xiang-Yu], Zou, J.H.[Jian-Hua], He, K.M.[Kai-Ming], Sun, J.[Jian],
Accelerating Very Deep Convolutional Networks for Classification and Detection,
PAMI(38), No. 10, October 2016, pp. 1943-1955.
IEEE DOI 1609
Acceleration BibRef

He, Y., Zhang, X.Y.[Xiang-Yu], Sun, J.[Jian],
Channel Pruning for Accelerating Very Deep Neural Networks,
ICCV17(1398-1406)
IEEE DOI 1802
iterative methods, learning (artificial intelligence), least squares approximations, neural nets, regression analysis, Training BibRef

Zhang, X.Y.[Xiang-Yu], Zou, J.H.[Jian-Hua], Ming, X.[Xiang], He, K.M.[Kai-Ming], Sun, J.[Jian],
Efficient and accurate approximations of nonlinear convolutional networks,
CVPR15(1984-1992)
IEEE DOI 1510
BibRef

He, K.M.[Kai-Ming], Zhang, X.Y.[Xiang-Yu], Ren, S.Q.[Shao-Qing], Sun, J.[Jian],
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,
ICCV15(1026-1034)
IEEE DOI 1602
Adaptation models BibRef

Sze, V., Chen, Y.H., Yang, T.J., Emer, J.S.,
Efficient Processing of Deep Neural Networks: A Tutorial and Survey,
PIEEE(105), No. 12, December 2017, pp. 2295-2329.
IEEE DOI 1712
Survey, Deep Neural Networks. Artificial intelligence, Benchmark testing, Biological neural networks, Computer architecture, spatial architectures BibRef

Cavigelli, L., Benini, L.,
Origami: A 803-GOp/s/W Convolutional Network Accelerator,
CirSysVideo(27), No. 11, November 2017, pp. 2461-2475.
IEEE DOI 1712
Computer architecture, Computer vision, Feature extraction, Machine learning, Mobile communication, Neural networks, very large scale integration BibRef

Ghesu, F.C.[Florin C.], Krubasik, E., Georgescu, B., Singh, V., Zheng, Y., Hornegger, J.[Joachim], Comaniciu, D.,
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing,
MedImg(35), No. 5, May 2016, pp. 1217-1228.
IEEE DOI 1605
Context BibRef

Revathi, A.R., Kumar, D.[Dhananjay],
An efficient system for anomaly detection using deep learning classifier,
SIViP(11), No. 2, February 2017, pp. 291-299.
WWW Link. 1702
BibRef

Sun, B., Feng, H.,
Efficient Compressed Sensing for Wireless Neural Recording: A Deep Learning Approach,
SPLetters(24), No. 6, June 2017, pp. 863-867.
IEEE DOI 1705
Compressed sensing, Cost function, Dictionaries, Sensors, Training, Wireless communication, Wireless sensor networks, Compressed sensing (CS), deep neural network, wireless neural recording BibRef

Xu, T.B.[Ting-Bing], Yang, P.[Peipei], Zhang, X.Y.[Xu-Yao], Liu, C.L.[Cheng-Lin],
LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation,
PR(88), 2019, pp. 272-284.
Elsevier DOI 1901
Deep network acceleration and compression, Architecture distillation, Lightweight network BibRef

Feng, J.[Jie], Wang, L.[Lin], Yu, H.[Haipeng], Jiao, L.C.[Li-Cheng], Zhang, X.R.[Xiang-Rong],
Divide-and-Conquer Dual-Architecture Convolutional Neural Network for Classification of Hyperspectral Images,
RS(11), No. 5, 2019, pp. xx-yy.
DOI Link 1903
BibRef

Kim, D.H.[Dae Ha], Lee, M.K.[Min Kyu], Lee, S.H.[Seung Hyun], Song, B.C.[Byung Cheol],
Macro unit-based convolutional neural network for very light-weight deep learning,
IVC(87), 2019, pp. 68-75.
Elsevier DOI 1906
BibRef
Earlier: A1, A3, A4, Only:
MUNet: Macro Unit-Based Convolutional Neural Network for Mobile Devices,
EfficientDeep18(1749-17498)
IEEE DOI 1812
Deep neural networks, Light-weight deep learning, Macro-unit. Convolution, Computational complexity, Mobile handsets, Neural networks, Performance evaluation BibRef

Zhang, C.Y.[Chun-Yang], Zhao, Q.[Qi], Chen, C.L.P.[C.L. Philip], Liu, W.X.[Wen-Xi],
Deep compression of probabilistic graphical networks,
PR(96), 2019, pp. 106979.
Elsevier DOI 1909
Deep compression, Probabilistic graphical models, Probabilistic graphical networks, Deep learning BibRef

Brillet, L.F., Mancini, S., Cleyet-Merle, S., Nicolas, M.,
Tunable CNN Compression Through Dimensionality Reduction,
ICIP19(3851-3855)
IEEE DOI 1910
CNN, PCA, compression BibRef

Jiang, Y.[Yanshu], Zhao, T.[Tianli], He, X.Y.[Xiang-Yu], Leng, C.[Cong], Cheng, J.[Jian],
BitStream: An efficient framework for inference of binary neural networks on CPUs,
PRL(125), 2019, pp. 303-309.
Elsevier DOI 1909
Convolutional neural networks, Binary neural networks, Image classification BibRef

Dong, Y.P.[Yin-Peng], Ni, R.K.[Ren-Kun], Li, J.G.[Jian-Guo], Chen, Y.R.[Yu-Rong], Su, H.[Hang], Zhu, J.[Jun],
Stochastic Quantization for Learning Accurate Low-Bit Deep Neural Networks,
IJCV(127), No. 11-12, December 2019, pp. 1629-1642.
Springer DOI 1911
BibRef

Zhou, A.[Aojun], Yao, A.B.[An-Bang], Wang, K.[Kuan], Chen, Y.R.[Yu-Rong],
Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks,
CVPR18(9426-9435)
IEEE DOI 1812
Computer vision, Pattern recognition BibRef

Lin, S.H.[Shao-Hui], Ji, R.R.[Rong-Rong], Chen, C.[Chao], Tao, D.C.[Da-Cheng], Luo, J.B.[Jie-Bo],
Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer,
PAMI(41), No. 12, December 2019, pp. 2889-2905.
IEEE DOI 1911
Knowledge transfer, Image coding, Task analysis, Information exchange, Computational modeling, CNN acceleration BibRef

Zhang, L.[Lin], Bu, X.K.[Xiao-Kang], Li, B.[Bing],
XNORCONV: CNNs accelerator implemented on FPGA using a hybrid CNNs structure and an inter-layer pipeline method,
IET-IPR(14), No. 1, January 2020, pp. 105-113.
DOI Link 1912
BibRef

Chen, Z., Fan, K., Wang, S., Duan, L., Lin, W., Kot, A.C.,
Toward Intelligent Sensing: Intermediate Deep Feature Compression,
IP(29), 2020, pp. 2230-2243.
IEEE DOI 2001
Visualization, Image coding, Task analysis, Feature extraction, Deep learning, Video coding, Standardization, Deep learning, feature compression BibRef

Lobel, H.[Hans], Vidal, R.[René], Soto, A.[Alvaro],
CompactNets: Compact Hierarchical Compositional Networks for Visual Recognition,
CVIU(191), 2020, pp. 102841.
Elsevier DOI 2002
Deep learning, Regularization, Group sparsity, Image categorization BibRef

Ding, L.[Lin], Tian, Y.H.[Yong-Hong], Fan, H.F.[Hong-Fei], Chen, C.H.[Chang-Huai], Huang, T.J.[Tie-Jun],
Joint Coding of Local and Global Deep Features in Videos for Visual Search,
IP(29), 2020, pp. 3734-3749.
IEEE DOI 2002
Local deep feature, joint coding, visual search, inter-feature correlation BibRef

Browne, D.[David], Giering, M.[Michael], Prestwich, S.[Steven],
PulseNetOne: Fast Unsupervised Pruning of Convolutional Neural Networks for Remote Sensing,
RS(12), No. 7, 2020, pp. xx-yy.
DOI Link 2004
BibRef

Liu, Z.C.[Ze-Chun], Luo, W.H.[Wen-Han], Wu, B.Y.[Bao-Yuan], Yang, X.[Xin], Liu, W.[Wei], Cheng, K.T.[Kwang-Ting],
Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance,
IJCV(128), No. 1, January 2020, pp. 202-219.
Springer DOI 2002
BibRef
Earlier: A1, A3, A2, A4, A5, A6:
Bi-Real Net: Enhancing the Performance of 1-Bit CNNs with Improved Representational Capability and Advanced Training Algorithm,
ECCV18(XV: 747-763).
Springer DOI 1810
BibRef

Cavigelli, L., Benini, L.,
CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional Network Inference on Video Streams,
CirSysVideo(30), No. 5, May 2020, pp. 1451-1465.
IEEE DOI 2005
Learning with video. Feature extraction, Object detection, Throughput, Convolution, Inference algorithms, Semantics, Approximation algorithms, object detection BibRef

Sun, F.Z.[Feng-Zhen], Li, S.J.[Shao-Jie], Wang, S.H.[Shao-Hua], Liu, Q.J.[Qing-Jun], Zhou, L.X.[Li-Xin],
CostNet: A Concise Overpass Spatiotemporal Network for Predictive Learning,
IJGI(9), No. 4, 2020, pp. xx-yy.
DOI Link 2005
ResNet to handle the temporal dimension. BibRef

Kalayeh, M.M.[Mahdi M.], Shah, M.[Mubarak],
Training Faster by Separating Modes of Variation in Batch-Normalized Models,
PAMI(42), No. 6, June 2020, pp. 1483-1500.
IEEE DOI 2005
Training, Kernel, Mathematical model, Transforms, Probability density function, Statistics, Acceleration, fisher vector BibRef

Ma, W.[Wenchi], Wu, Y.[Yuanwei], Cen, F.[Feng], Wang, G.H.[Guang-Hui],
MDFN: Multi-scale deep feature learning network for object detection,
PR(100), 2020, pp. 107149.
Elsevier DOI 2005
Deep feature learning, Multi-scale, Semantic and contextual information, Small and occluded objects BibRef

Ma, W.[Wenchi], Wu, Y.[Yuanwei], Wang, Z., Wang, G.H.[Guang-Hui],
MDCN: Multi-Scale, Deep Inception Convolutional Neural Networks for Efficient Object Detection,
ICPR18(2510-2515)
IEEE DOI 1812
Feature extraction, Object detection, Computational modeling, Task analysis, Convolutional neural networks, Hardware, Real-time systems BibRef

Saini, R., Jha, N.K., Das, B., Mittal, S., Mohan, C.K.,
ULSAM: Ultra-Lightweight Subspace Attention Module for Compact Convolutional Neural Networks,
WACV20(1616-1625)
IEEE DOI 2006
Convolution, Computational modeling, Task analysis, Computational efficiency, Feature extraction, Redundancy, Head BibRef

Suau, X., Zappella, L., Apostoloff, N.,
Filter Distillation for Network Compression,
WACV20(3129-3138)
IEEE DOI 2006
Correlation, Training, Tensile stress, Eigenvalues and eigenfunctions, Image coding, Decorrelation, Principal component analysis BibRef

Wang, M., Cai, H., Huang, X., Gong, M.,
ADNet: Adaptively Dense Convolutional Neural Networks,
WACV20(990-999)
IEEE DOI 2006
Adaptation models, Training, Convolution, Task analysis, Computer architecture, Convolutional neural networks, Computational efficiency BibRef

Oyedotun, O.K., Aouada, D., Ottersten, B.,
Structured Compression of Deep Neural Networks with Debiased Elastic Group LASSO,
WACV20(2266-2275)
IEEE DOI 2006
Computational modeling, Feature extraction, Training, Cost function, Training data, Task analysis, Neural networks BibRef

Yan, S., Fang, B., Zhang, F., Zheng, Y., Zeng, X., Zhang, M., Xu, H.,
HM-NAS: Efficient Neural Architecture Search via Hierarchical Masking,
NeruArch19(1942-1950)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. learning (artificial intelligence), neural net architecture, multilevel architecture, flexible network architectures, Hierarchical Masking BibRef

Wu, B.C.[Bi-Chen], Dai, X.L.[Xiao-Liang], Zhang, P.Z.[Pei-Zhao], Wang, Y.H.[Yang-Han], Sun, F.[Fei], Wu, Y.M.[Yi-Ming], Tian, Y.D.[Yuan-Dong], Vajda, P.[Peter], Jia, Y.Q.[Yang-Qing], Keutzer, K.[Kurt],
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search,
CVPR19(10726-10734).
IEEE DOI 2002
BibRef

Hsu, L., Chiu, C., Lin, K.,
An Energy-Aware Bit-Serial Streaming Deep Convolutional Neural Network Accelerator,
ICIP19(4609-4613)
IEEE DOI 1910
CNNs, Hardware Accelerator, EnergyAware, Precision, Bit-Serial PE, Streaming Dataflow BibRef

Lu, J.[Jing], Xu, C.F.[Chao-Fan], Zhang, W.[Wei], Duan, L.Y.[Ling-Yu], Mei, T.[Tao],
Sampling Wisely: Deep Image Embedding by Top-K Precision Optimization,
ICCV19(7960-7969)
IEEE DOI 2004
convolutional neural nets, gradient methods, image processing, learning (artificial intelligence), Toy manufacturing industry BibRef

Cui, J., Chen, P., Li, R., Liu, S., Shen, X., Jia, J.,
Fast and Practical Neural Architecture Search,
ICCV19(6508-6517)
IEEE DOI 2004
learning (artificial intelligence), neural nets, FPNAS, search process, bi-level optimization problem, design networks, Network architecture BibRef

Bashivan, P.[Pouya], Tensen, M.[Mark], Dicarlo, J.[James],
Teacher Guided Architecture Search,
ICCV19(5319-5328)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), neural net architecture, Network architecture BibRef

Nascimento, M.G.D., Prisacariu, V., Fawcett, R.,
DSConv: Efficient Convolution Operator,
ICCV19(5147-5156)
IEEE DOI 2004
convolutional neural nets, neural net architecture, statistical distributions, DSConv, Training data BibRef

Gu, J., Zhao, J., Jiang, X., Zhang, B., Liu, J., Guo, G., Ji, R.,
Bayesian Optimized 1-Bit CNNs,
ICCV19(4908-4916)
IEEE DOI 2004
Bayes methods, convolutional neural nets, feature extraction, image classification, Indexes BibRef

Chao, P., Kao, C., Ruan, Y., Huang, C., Lin, Y.,
HarDNet: A Low Memory Traffic Network,
ICCV19(3551-3560)
IEEE DOI 2004
feature extraction, image segmentation, neural nets, object detection, neural network architectures, MACs, Power demand BibRef

Chen, Y., Fan, H., Xu, B., Yan, Z., Kalantidis, Y., Rohrbach, M., Yan, S., Feng, J.,
Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks With Octave Convolution,
ICCV19(3434-3443)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image resolution, neural net architecture, Kernel BibRef

Phuong, M.[Mary], Lampert, C.[Christoph],
Distillation-Based Training for Multi-Exit Architectures,
ICCV19(1355-1364)
IEEE DOI 2004
To terminate processing early. convolutional neural nets, image classification, probability, supervised learning, training procedure, multiexit architectures BibRef

Chen, X.[Xin], Xie, L.X.[Ling-Xi], Wu, J.[Jun], Tian, Q.[Qi],
Progressive Differentiable Architecture Search: Bridging the Depth Gap Between Search and Evaluation,
ICCV19(1294-1303)
IEEE DOI 2004
Code, Search.
WWW Link. approximation theory, image recognition, learning (artificial intelligence), neural net architecture, Computational modeling BibRef

Zheng, X., Ji, R., Tang, L., Zhang, B., Liu, J., Tian, Q.,
Multinomial Distribution Learning for Effective Neural Architecture Search,
ICCV19(1304-1313)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. graphics processing units, learning (artificial intelligence), neural nets, Search problems BibRef

Chen, Y., Liu, S., Shen, X., Jia, J.,
Fast Point R-CNN,
ICCV19(9774-9783)
IEEE DOI 2004
convolutional neural nets, feature extraction, image representation, object detection, solid modelling, Detectors BibRef

Gkioxari, G., Johnson, J., Malik, J.,
Mesh R-CNN,
ICCV19(9784-9794)
IEEE DOI 2004
computational geometry, convolutional neural nets, feature extraction, graph theory, Benchmark testing BibRef

Vooturi, D.T.[Dharma Teja], Varma, G.[Girish], Kothapalli, K.[Kishore],
Dynamic Block Sparse Reparameterization of Convolutional Neural Networks,
CEFRL19(3046-3053)
IEEE DOI 2004
Code, Convolutional Networks.
WWW Link. convolutional neural nets, image classification, learning (artificial intelligence), dense neural networks, neural networks BibRef

Dong, Z., Yao, Z., Gholami, A., Mahoney, M., Keutzer, K.,
HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision,
ICCV19(293-302)
IEEE DOI 2004
image resolution, neural nets, quantisation (signal), neural networks, mixed-precision quantization, deep networks, Image resolution BibRef

Gusak, J., Kholiavchenko, M., Ponomarev, E., Markeeva, L., Blagoveschensky, P., Cichocki, A., Oseledets, I.,
Automated Multi-Stage Compression of Neural Networks,
LPCV19(2501-2508)
IEEE DOI 2004
approximation theory, iterative methods, matrix decomposition, neural nets, tensors, noniterative ones, automated BibRef

Yan, M., Zhao, M., Xu, Z., Zhang, Q., Wang, G., Su, Z.,
VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition,
LFR19(2647-2654)
IEEE DOI 2004
Code, Face Recognition.
WWW Link. convolutional neural nets, face recognition, learning (artificial intelligence), student model, teacher model, knowledge distillation BibRef

Hascoet, T., Febvre, Q., Zhuang, W., Ariki, Y., Takiguchi, T.,
Layer-Wise Invertibility for Extreme Memory Cost Reduction of CNN Training,
NeruArch19(2049-2052)
IEEE DOI 2004
backpropagation, computer vision, convolutional neural nets, graphics processing units, minimal training memory consumption, invertible transformations BibRef

Ghosh, R., Gupta, A.K., Motani, M.,
Investigating Convolutional Neural Networks using Spatial Orderness,
NeruArch19(2053-2056)
IEEE DOI 2004
convolutional neural nets, image classification, statistical analysis, convolutional neural networks, CNN, Opening the black box of CNNs BibRef

Zamora Esquivel, J., Cruz Vargas, A., Lopez Meyer, P., Tickoo, O.,
Adaptive Convolutional Kernels,
NeruArch19(1998-2005)
IEEE DOI 2004
computational complexity, computer vision, convolutional neural nets, edge detection, feature extraction, machine learning BibRef

Köpüklü, O., Kose, N., Gunduz, A., Rigoll, G.,
Resource Efficient 3D Convolutional Neural Networks,
NeruArch19(1910-1919)
IEEE DOI 2004
convolutional neural nets, graphics processing units, learning (artificial intelligence), UCF-101 dataset, Action/Activity Recognition BibRef

Zhu, H., An, Z., Yang, C., Xu, K., Zhao, E., Xu, Y.,
EENA: Efficient Evolution of Neural Architecture,
NeruArch19(1891-1899)
IEEE DOI 2004
learning (artificial intelligence), neural net architecture, search problems, crossover operations, evolution process, guidance of experience gained BibRef

Ma, X., Triki, A.R., Berman, M., Sagonas, C., Cali, J., Blaschko, M.,
A Bayesian Optimization Framework for Neural Network Compression,
ICCV19(10273-10282)
IEEE DOI 2004
approximation theory, Bayes methods, data compression, neural nets, optimisation, neural network compression, Training BibRef

Yoo, K.M., Jo, H.S., Lee, H., Han, J., Lee, S.,
Stochastic Relational Network,
SDL-CV19(788-792)
IEEE DOI 2004
computational complexity, data visualisation, inference mechanisms, learning (artificial intelligence), gradient estimator BibRef

Rannen-Triki, A., Berman, M., Kolmogorov, V., Blaschko, M.B.,
Function Norms for Neural Networks,
SDL-CV19(748-752)
IEEE DOI 2004
computational complexity, function approximation, learning (artificial intelligence), neural nets, Regularization BibRef

Han, D., Yoo, H.,
Direct Feedback Alignment Based Convolutional Neural Network Training for Low-Power Online Learning Processor,
LPCV19(2445-2452)
IEEE DOI 2004
backpropagation, convolutional neural nets, learning (artificial intelligence), DFA algorithm, CNN training, Back propagation BibRef

Yan, X., Chen, Z., Xu, A., Wang, X., Liang, X., Lin, L.,
Meta R-CNN: Towards General Solver for Instance-Level Low-Shot Learning,
ICCV19(9576-9585)
IEEE DOI 2004
Code, Learning.
HTML Version. computer vision, convolutional neural nets, image representation, image sampling, image segmentation, Object recognition BibRef

Dai, X.L.[Xiao-Liang], Zhang, P.Z.[Pei-Zhao], Wu, B.[Bichen], Yin, H.X.[Hong-Xu], Sun, F.[Fei], Wang, Y.[Yanghan], Dukhan, M.[Marat], Hu, Y.Q.[Yun-Qing], Wu, Y.M.[Yi-Ming], Jia, Y.Q.[Yang-Qing], Vajda, P.[Peter], Uyttendaele, M.[Matt], Jha, N.K.[Niraj K.],
ChamNet: Towards Efficient Network Design Through Platform-Aware Model Adaptation,
CVPR19(11390-11399).
IEEE DOI 2002
BibRef

Gao, S.Q.[Shang-Qian], Deng, C.[Cheng], Huang, H.[Heng],
Cross Domain Model Compression by Structurally Weight Sharing,
CVPR19(8965-8974).
IEEE DOI 2002
BibRef

Yang, J.[Jiwei], Shen, X.[Xu], Xing, J.[Jun], Tian, X.M.[Xin-Mei], Li, H.Q.A.[Hou-Qi-Ang], Deng, B.[Bing], Huang, J.Q.[Jian-Qiang], Hua, X.S.[Xian-Sheng],
Quantization Networks,
CVPR19(7300-7308).
IEEE DOI 2002
BibRef

Liu, Y.J.[Ya-Jing], Tian, X.M.[Xin-Mei], Li, Y.[Ya], Xiong, Z.W.[Zhi-Wei], Wu, F.[Feng],
Compact Feature Learning for Multi-Domain Image Classification,
CVPR19(7186-7194).
IEEE DOI 2002
BibRef

Li, J.[Jiashi], Qi, Q.[Qi], Wang, J.Y.[Jing-Yu], Ge, C.[Ce], Li, Y.J.[Yu-Jian], Yue, Z.Z.[Zhang-Zhang], Sun, H.F.[Hai-Feng],
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks,
CVPR19(7039-7048).
IEEE DOI 2002
BibRef

Kim, H.[Hyeji], Khan, M.U.K.[Muhammad Umar Karim], Kyung, C.M.[Chong-Min],
Efficient Neural Network Compression,
CVPR19(12561-12569).
IEEE DOI 2002
BibRef

Minnehan, B.[Breton], Savakis, A.[Andreas],
Cascaded Projection: End-To-End Network Compression and Acceleration,
CVPR19(10707-10716).
IEEE DOI 2002
BibRef

Lin, Y.H.[Yu-Hsun], Chou, C.N.[Chun-Nan], Chang, E.Y.[Edward Y.],
MBS: Macroblock Scaling for CNN Model Reduction,
CVPR19(9109-9117).
IEEE DOI 2002
BibRef

Gao, Y.[Yuan], Ma, J.[Jiayi], Zhao, M.B.[Ming-Bo], Liu, W.[Wei], Yuille, A.L.[Alan L.],
NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction,
CVPR19(3200-3209).
IEEE DOI 2002
BibRef

Wang, H.Y.[Hui-Yu], Kembhavi, A.[Aniruddha], Farhadi, A.[Ali], Yuille, A.L.[Alan L.], Rastegari, M.[Mohammad],
ELASTIC: Improving CNNs With Dynamic Scaling Policies,
CVPR19(2253-2262).
IEEE DOI 2002
BibRef

Cao, S.J.[Shi-Jie], Ma, L.X.[Ling-Xiao], Xiao, W.C.[Wen-Cong], Zhang, C.[Chen], Liu, Y.X.[Yun-Xin], Zhang, L.T.[Lin-Tao], Nie, L.S.[Lan-Shun], Yang, Z.[Zhi],
SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization,
CVPR19(11208-11217).
IEEE DOI 2002
BibRef

Yang, H.C.[Hai-Chuan], Zhu, Y.[Yuhao], Liu, J.[Ji],
ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model,
CVPR19(11198-11207).
IEEE DOI 2002
BibRef

Gong, L.[Liyu], Cheng, Q.A.[Qi-Ang],
Exploiting Edge Features for Graph Neural Networks,
CVPR19(9203-9211).
IEEE DOI 2002
BibRef

Mehta, S.[Sachin], Rastegari, M.[Mohammad], Shapiro, L.[Linda], Hajishirzi, H.[Hannaneh],
ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network,
CVPR19(9182-9192).
IEEE DOI 2002
BibRef

Kossaifi, J.[Jean], Bulat, A.[Adrian], Tzimiropoulos, G.[Georgios], Pantic, M.[Maja],
T-Net: Parametrizing Fully Convolutional Nets With a Single High-Order Tensor,
CVPR19(7814-7823).
IEEE DOI 2002
BibRef

Chen, W.J.[Wei-Jie], Xie, D.[Di], Zhang, Y.[Yuan], Pu, S.L.[Shi-Liang],
All You Need Is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification,
CVPR19(7234-7243).
IEEE DOI 2002
BibRef

Georgiadis, G.[Georgios],
Accelerating Convolutional Neural Networks via Activation Map Compression,
CVPR19(7078-7088).
IEEE DOI 2002
BibRef

Zhu, S.L.[Shi-Lin], Dong, X.[Xin], Su, H.[Hao],
Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?,
CVPR19(4918-4927).
IEEE DOI 2002
BibRef

Jung, S.[Sangil], Son, C.Y.[Chang-Yong], Lee, S.[Seohyung], Son, J.[Jinwoo], Han, J.J.[Jae-Joon], Kwak, Y.[Youngjun], Hwang, S.J.[Sung Ju], Choi, C.K.[Chang-Kyu],
Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss,
CVPR19(4345-4354).
IEEE DOI 2002
BibRef

Li, T.[Tuanhui], Wu, B.Y.[Bao-Yuan], Yang, Y.[Yujiu], Fan, Y.[Yanbo], Zhang, Y.[Yong], Liu, W.[Wei],
Compressing Convolutional Neural Networks via Factorized Convolutional Filters,
CVPR19(3972-3981).
IEEE DOI 2002
BibRef

Kim, E.[Eunwoo], Ahn, C.[Chanho], Torr, P.H.S.[Philip H.S.], Oh, S.H.[Song-Hwai],
Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks,
CVPR19(2705-2714).
IEEE DOI 2002
BibRef

Tan, M.X.[Ming-Xing], Chen, B.[Bo], Pang, R.[Ruoming], Vasudevan, V.[Vijay], Sandler, M.[Mark], Howard, A.[Andrew], Le, Q.V.[Quoc V.],
MnasNet: Platform-Aware Neural Architecture Search for Mobile,
CVPR19(2815-2823).
IEEE DOI 2002
BibRef

Dong, X.[Xuanyi], Yang, Y.[Yi],
Searching for a Robust Neural Architecture in Four GPU Hours,
CVPR19(1761-1770).
IEEE DOI 2002
BibRef

He, T.[Tong], Zhang, Z.[Zhi], Zhang, H.[Hang], Zhang, Z.Y.[Zhong-Yue], Xie, J.Y.[Jun-Yuan], Li, M.[Mu],
Bag of Tricks for Image Classification with Convolutional Neural Networks,
CVPR19(558-567).
IEEE DOI 2002
BibRef

Wang, X.[Xijun], Kan, M.[Meina], Shan, S.G.[Shi-Guang], Chen, X.L.[Xi-Lin],
Fully Learnable Group Convolution for Acceleration of Deep Neural Networks,
CVPR19(9041-9050).
IEEE DOI 2002
BibRef

Li, Y.[Yuchao], Lin, S.H.[Shao-Hui], Zhang, B.C.[Bao-Chang], Liu, J.Z.[Jian-Zhuang], Doermann, D.[David], Wu, Y.[Yongjian], Huang, F.Y.[Fei-Yue], Ji, R.R.[Rong-Rong],
Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression,
CVPR19(2795-2804).
IEEE DOI 2002
BibRef

Zhao, R.[Ritchie], Hu, Y.[Yuwei], Dotzel, J.[Jordan], De Sa, C.[Christopher], Zhang, Z.[Zhiru],
Building Efficient Deep Neural Networks With Unitary Group Convolutions,
CVPR19(11295-11304).
IEEE DOI 2002
BibRef

Qiao, S.Y.[Si-Yuan], Lin, Z.[Zhe], Zhang, J.M.[Jian-Ming], Yuille, A.L.[Alan L.],
Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization,
CVPR19(61-71).
IEEE DOI 2002
BibRef

Tagaris, T.[Thanos], Sdraka, M.[Maria], Stafylopatis, A.[Andreas],
High-Resolution Class Activation Mapping,
ICIP19(4514-4518)
IEEE DOI 1910
Discriminative localization, Class Activation Map, Deep Learning, Convolutional Neural Networks BibRef

Lubana, E.S., Dick, R.P., Aggarwal, V., Pradhan, P.M.,
Minimalistic Image Signal Processing for Deep Learning Applications,
ICIP19(4165-4169)
IEEE DOI 1910
Deep learning accelerators, Image signal processor, RAW images, Covariate shift BibRef

Sun, L.[Li], Yu, X.Y.[Xiao-Yi], Wang, L.[Liuan], Sun, J.[Jun], Inakoshi, H.[Hiroya], Kobayashi, K.[Ken], Kobashi, H.[Hiromichi],
Automatic Neural Network Search Method for Open Set Recognition,
ICIP19(4090-4094)
IEEE DOI 1910
Neural network search, open set, search space, feature distribution, center loss BibRef

Saha, A.[Avinab], Ram, K.S.[K. Sai], Mukhopadhyay, J.[Jayanta], Das, P.P.[Partha Pratim], Patra, A.[Amit],
Fitness Based Layer Rank Selection Algorithm for Accelerating CNNs by Candecomp/Parafac (CP) Decompositions,
ICIP19(3402-3406)
IEEE DOI 1910
CP Decompositions, FLRS, Accelerating CNNs, Rank Selection, Compression BibRef

Xu, D., Lee, M.L., Hsu, W.,
Patch-Level Regularizer for Convolutional Neural Network,
ICIP19(3232-3236)
IEEE DOI 1910
BibRef

Yoshioka, K., Lee, E., Wong, S., Horowitz, M.,
Dataset Culling: Towards Efficient Training of Distillation-Based Domain Specific Models,
ICIP19(3237-3241)
IEEE DOI 1910
Object Detection, Training Efficiency, Distillation, Dataset Culling, Deep Learning BibRef

Kim, M., Park, C., Kim, S., Hong, T., Ro, W.W.,
Efficient Dilated-Winograd Convolutional Neural Networks,
ICIP19(2711-2715)
IEEE DOI 1910
Image processing and computer vision, dilated convolution, Winograd convolution, neural network, graphics processing unit BibRef

Saporta, A., Chen, Y., Blot, M., Cord, M.,
Reve: Regularizing Deep Learning with Variational Entropy Bound,
ICIP19(1610-1614)
IEEE DOI 1910
Deep learning, regularization, invariance, information theory, image understanding BibRef

Banerjee, S., Chakraborty, S.,
Deepsub: A Novel Subset Selection Framework for Training Deep Learning Architectures,
ICIP19(1615-1619)
IEEE DOI 1910
Submodular optimization, Deep learning BibRef

Zhao, W., Yi, R., Liu, Y.,
An Adaptive Filter for Deep Learning Networks on Large-Scale Point Cloud,
ICIP19(1620-1624)
IEEE DOI 1910
Large-scale point cloud filtering, super-points, deep learning BibRef

Mitschke, N., Heizmann, M., Noffz, K., Wittmann, R.,
A Fixed-Point Quantization Technique for Convolutional Neural Networks Based on Weight Scaling,
ICIP19(3836-3840)
IEEE DOI 1910
CNNs, Fixed Point Quantization, Image Processing, Machine Vision, Deep Learning BibRef

Choi, Y., Choi, J., Moon, H., Lee, J., Chang, J.,
Accelerating Framework for Simultaneous Optimization of Model Architectures and Training Hyperparameters,
ICIP19(3831-3835)
IEEE DOI 1910
Deep Learning, Model Hyperparameters BibRef

Zhe, W., Lin, J., Chandrasekhar, V., Girod, B.,
Optimizing the Bit Allocation for Compression of Weights and Activations of Deep Neural Networks,
ICIP19(3826-3830)
IEEE DOI 1910
Deep Learning, Coding, Compression BibRef

Lei, X., Liu, L., Zhou, Z., Sun, H., Zheng, N.,
Exploring Hardware Friendly Bottleneck Architecture in CNN for Embedded Computing Systems,
ICIP19(4180-4184)
IEEE DOI 1910
Lightweight/Mobile CNN model, Model optimization, Embedded System, Hardware Accelerating. BibRef

Geng, X.[Xue], Lin, J.[Jie], Zhao, B.[Bin], Kong, A.[Anmin], Aly, M.M.S.[Mohamed M. Sabry], Chandrasekhar, V.[Vijay],
Hardware-Aware Softmax Approximation for Deep Neural Networks,
ACCV18(IV:107-122).
Springer DOI 1906
BibRef

Chen, W.C.[Wei-Chun], Chang, C.C.[Chia-Che], Lee, C.R.[Che-Rung],
Knowledge Distillation with Feature Maps for Image Classification,
ACCV18(III:200-215).
Springer DOI 1906
BibRef

Groh, F.[Fabian], Wieschollek, P.[Patrick], Lensch, H.P.A.[Hendrik P. A.],
Flex-Convolution,
ACCV18(I:105-122).
Springer DOI 1906
BibRef

Yang, L.[Lu], Song, Q.[Qing], Li, Z.X.[Zuo-Xin], Wu, Y.Q.[Ying-Qi], Li, X.J.[Xiao-Jie], Hu, M.J.[Meng-Jie],
Cross Connected Network for Efficient Image Recognition,
ACCV18(I:56-71).
Springer DOI 1906
BibRef

Ignatov, A.[Andrey], Timofte, R.[Radu], Chou, W.[William], Wang, K.[Ke], Wu, M.[Max], Hartley, T.[Tim], Van Gool, L.J.[Luc J.],
AI Benchmark: Running Deep Neural Networks on Android Smartphones,
PerceptualRest18(V:288-314).
Springer DOI 1905
BibRef

Li, X., Zhang, S., Jiang, B., Qi, Y., Chuah, M.C., Bi, N.,
DAC: Data-Free Automatic Acceleration of Convolutional Networks,
WACV19(1598-1606)
IEEE DOI 1904
convolutional neural nets, image classification, Internet of Things, learning (artificial intelligence), Deep learning BibRef

He, Y., Liu, X., Zhong, H., Ma, Y.,
AddressNet: Shift-Based Primitives for Efficient Convolutional Neural Networks,
WACV19(1213-1222)
IEEE DOI 1904
convolutional neural nets, coprocessors, learning (artificial intelligence), parallel algorithms, Fuses BibRef

He, Z.Z.[Zhe-Zhi], Gong, B.Q.[Bo-Qing], Fan, D.L.[De-Liang],
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy,
WACV19(913-921)
IEEE DOI 1904
Weights reduced to -1, 0, +1. convolutional neural nets, embedded systems, image classification, image coding, image representation, Hardware BibRef

Bicici, U.C.[Ufuk Can], Keskin, C.[Cem], Akarun, L.[Lale],
Conditional Information Gain Networks,
ICPR18(1390-1395)
IEEE DOI 1812
Decision trees, Neural networks, Computational modeling, Training, Routing, Vegetation, Probability distribution BibRef

Aldana, R.[Rodrigo], Campos-Macías, L.[Leobardo], Zamora, J.[Julio], Gomez-Gutierrez, D.[David], Cruz, A.[Adan],
Dynamic Learning Rate for Neural Networks: A Fixed-Time Stability Approach,
ICPR18(1378-1383)
IEEE DOI 1812
Training, Artificial neural networks, Approximation algorithms, Optimization, Pattern recognition, Heuristic algorithms, Lyapunov methods BibRef

Kung, H.T., McDanel, B., Zhang, S.Q.,
Adaptive Tiling: Applying Fixed-size Systolic Arrays To Sparse Convolutional Neural Networks,
ICPR18(1006-1011)
IEEE DOI 1812
Sparse matrices, Arrays, Convolution, Adaptive arrays, Microprocessors, Adaptation models BibRef

Grelsson, B., Felsberg, M.,
Improved Learning in Convolutional Neural Networks with Shifted Exponential Linear Units (ShELUs),
ICPR18(517-522)
IEEE DOI 1812
convolution, feedforward neural nets, learning (artificial intelligence). BibRef

Zheng, W., Zhang, Z.,
Accelerating the Classification of Very Deep Convolutional Network by A Cascading Approach,
ICPR18(355-360)
IEEE DOI 1812
computational complexity, convolution, entropy, feedforward neural nets, image classification, Measurement uncertainty BibRef

Zhong, G., Yao, H., Zhou, H.,
Merging Neurons for Structure Compression of Deep Networks,
ICPR18(1462-1467)
IEEE DOI 1812
Neurons, Neural networks, Merging, Computer architecture, Matrix decomposition, Mathematical model, Prototypes BibRef

Bhowmik, P.[Pankaj], Pantho, M.J.H.[M. Jubaer Hossain], Asadinia, M.[Marjan], Bobda, C.[Christophe],
Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor,
ECVW18(786-7868)
IEEE DOI 1812
Computer architecture, Image sensors, Visualization, Program processors, Clocks, Image processing BibRef

Aggarwal, V.[Vaneet], Wang, W.L.[Wen-Lin], Eriksson, B.[Brian], Sun, Y.F.[Yi-Fan], Wan, W.Q.[Wen-Qi],
Wide Compression: Tensor Ring Nets,
CVPR18(9329-9338)
IEEE DOI 1812
Neural networks, Image coding, Shape, Merging, Computer architecture BibRef

Ren, M.[Mengye], Pokrovsky, A.[Andrei], Yang, B.[Bin], Urtasun, R.[Raquel],
SBNet: Sparse Blocks Network for Fast Inference,
CVPR18(8711-8720)
IEEE DOI 1812
Convolution, Kernel, Shape, Object detection, Task analysis BibRef

Xie, G.T.[Guo-Tian], Wang, J.D.[Jing-Dong], Zhang, T.[Ting], Lai, J.H.[Jian-Huang], Hong, R.[Richang], Qi, G.J.[Guo-Jun],
Interleaved Structured Sparse Convolutional Neural Networks,
CVPR18(8847-8856)
IEEE DOI 1812
Convolution, Kernel, Sparse matrices, Redundancy, Computational modeling, Computer architecture, Computational complexity BibRef

Kim, E.[Eunwoo], Ahn, C.[Chanho], Oh, S.[Songhwai],
NestedNet: Learning Nested Sparse Structures in Deep Neural Networks,
CVPR18(8669-8678)
IEEE DOI 1812
Task analysis, Knowledge engineering, Neural networks, Computer architecture, Optimization, Redundancy BibRef

Bulò, S.R.[Samuel Rota], Porzi, L.[Lorenzo], Kontschieder, P.[Peter],
In-place Activated BatchNorm for Memory-Optimized Training of DNNs,
CVPR18(5639-5647)
IEEE DOI 1812
Reduce memory needs. Training, Buffer storage, Checkpointing, Memory management, Standards, Semantics BibRef

Zhang, D.,
clcNet: Improving the Efficiency of Convolutional Neural Network Using Channel Local Convolutions,
CVPR18(7912-7919)
IEEE DOI 1812
Kernel, Computational modeling, Computational efficiency, Convolutional neural networks, Stacking, Computer vision BibRef

Zhuang, B., Shen, C., Tan, M., Liu, L., Reid, I.D.,
Towards Effective Low-Bitwidth Convolutional Neural Networks,
CVPR18(7920-7928)
IEEE DOI 1812
Quantization (signal), Training, Neural networks, Optimization, Zirconium, Hardware, Convolution BibRef

Kuen, J., Kong, X., Lin, Z., Wang, G., Yin, J., See, S., Tan, Y.,
Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks,
CVPR18(7929-7938)
IEEE DOI 1812
Training, Computational modeling, Computational efficiency, Stochastic processes, Visualization, Network architecture, Computer vision BibRef

Shazeer, N., Fatahalian, K., Mark, W.R., Mullapudi, R.T.,
HydraNets: Specialized Dynamic Architectures for Efficient Inference,
CVPR18(8080-8089)
IEEE DOI 1812
Computer architecture, Training, Computational modeling, Task analysis, Computational efficiency, Optimization, Routing BibRef

Rebuffi, S., Vedaldi, A., Bilen, H.,
Efficient Parametrization of Multi-domain Deep Neural Networks,
CVPR18(8119-8127)
IEEE DOI 1812
Task analysis, Neural networks, Adaptation models, Feature extraction, Visualization, Computational modeling, Standards BibRef

Cao, S.[Sen], Liu, Y.Z.[Ya-Zhou], Zhou, C.X.[Chang-Xin], Sun, Q.S.[Quan-Sen], Lasang, P.[Pongsak], Shen, S.M.[Sheng Mei],
ThinNet: An Efficient Convolutional Neural Network for Object Detection,
ICPR18(836-841)
IEEE DOI 1812
Convolution, Computational modeling, Object detection, Neural networks, Computer architecture, Training, ThinNet BibRef

Kobayashi, T.,
Analyzing Filters Toward Efficient ConvNet,
CVPR18(5619-5628)
IEEE DOI 1812
Convolution, Feature extraction, Neurons, Image reconstruction, Visualization, Shape, Computer vision BibRef

Chou, Y., Chan, Y., Lee, J., Chiu, C., Chen, C.,
Merging Deep Neural Networks for Mobile Devices,
EfficientDeep18(1767-17678)
IEEE DOI 1812
Task analysis, Convolution, Merging, Computational modeling, Neural networks, Kernel, Computer architecture BibRef

Zhang, Q., Zhang, M., Wang, M., Sui, W., Meng, C., Yang, J., Kong, W., Cui, X., Lin, W.,
Efficient Deep Learning Inference Based on Model Compression,
EfficientDeep18(1776-17767)
IEEE DOI 1812
Computational modeling, Convolution, Adaptation models, Image edge detection, Quantization (signal), Kernel BibRef

Faraone, J., Fraser, N., Blott, M., Leong, P.H.W.,
SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks,
CVPR18(4300-4309)
IEEE DOI 1812
Quantization (signal), Hardware, Symmetric matrices, Training, Complexity theory, Neural networks, Field programmable gate arrays BibRef

Ma, N.N.[Ning-Ning], Zhang, X.Y.[Xiang-Yu], Zheng, H.T.[Hai-Tao], Sun, J.[Jian],
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,
ECCV18(XIV: 122-138).
Springer DOI 1810
BibRef

Zhang, X.Y.[Xiang-Yu], Zhou, X., Lin, M., Sun, J.,
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices,
CVPR18(6848-6856)
IEEE DOI 1812
Convolution, Complexity theory, Computer architecture, Mobile handsets, Computational modeling, Task analysis, Neural networks BibRef

Prabhu, A.[Ameya], Varma, G.[Girish], Namboodiri, A.[Anoop],
Deep Expander Networks: Efficient Deep Networks from Graph Theory,
ECCV18(XIII: 20-36).
Springer DOI 1810
BibRef

Freeman, I., Roese-Koerner, L., Kummert, A.,
Effnet: An Efficient Structure for Convolutional Neural Networks,
ICIP18(6-10)
IEEE DOI 1809
Convolution, Computational modeling, Optimization, Hardware, Kernel, Data compression, Convolutional neural networks, real-time inference BibRef

Elordi, U.[Unai], Unzueta, L.[Luis], Arganda-Carreras, I.[Ignacio], Otaegui, O.[Oihana],
How Can Deep Neural Networks Be Generated Efficiently for Devices with Limited Resources?,
AMDO18(24-33).
Springer DOI 1807
BibRef

Prabhu, A.[Ameya], Batchu, V.[Vishal], Gajawada, R.[Rohit], Munagala, S.A.[Sri Aurobindo], Namboodiri, A.[Anoop],
Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory,
WACV18(821-829)
IEEE DOI 1806
approximation theory, data compression, image classification, image coding, image representation, Quantization (signal) BibRef

Lee, T.K.[Tae Kwan], Baddar, W.J.[Wissam J.], Kim, S.T.[Seong Tae], Ro, Y.M.[Yong Man],
Convolution with Logarithmic Filter Groups for Efficient Shallow CNN,
MMMod18(I:117-129).
Springer DOI 1802
Filter grouping in convolution layers. BibRef

Véniat, T.[Tom], Denoyer, L.[Ludovic],
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks,
CVPR18(3492-3500)
IEEE DOI 1812
Computational modeling, Computer architecture, Stochastic processes, Neural networks, Fabrics, Predictive models, Computer vision BibRef

Huang, G.[Gao], Liu, Z.[Zhuang], van der Maaten, L.[Laurens], Weinberger, K.Q.[Kilian Q.],
Densely Connected Convolutional Networks,
CVPR17(2261-2269)
IEEE DOI 1711
Award, CVPR. Convolution, Convolutional codes, Network architecture, Neural networks, Road transportation, Training BibRef

Huang, G.[Gao], Sun, Y.[Yu], Liu, Z.[Zhuang], Sedra, D.[Daniel], Weinberger, K.Q.[Kilian Q.],
Deep Networks with Stochastic Depth,
ECCV16(IV: 646-661).
Springer DOI 1611
BibRef

Huang, G.[Gao], Liu, S.C.[Shi-Chen], van der Maaten, L.[Laurens], Weinberger, K.Q.[Kilian Q.],
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions,
CVPR18(2752-2761)
IEEE DOI 1812
CNN on a phone. Training, Computer architecture, Computational modeling, Standards, Mobile handsets, Network architecture, Indexes BibRef

Zhao, G., Zhang, Z., Guan, H., Tang, P., Wang, J.,
Rethinking ReLU to Train Better CNNs,
ICPR18(603-608)
IEEE DOI 1812
Convolution, Tensile stress, Network architecture, Computational efficiency, Computational modeling, Pattern recognition BibRef

Chan, M., Scarafoni, D., Duarte, R., Thornton, J., Skelly, L.,
Learning Network Architectures of Deep CNNs Under Resource Constraints,
EfficientDeep18(1784-17847)
IEEE DOI 1812
Computer architecture, Computational modeling, Optimization, Adaptation models, Network architecture, Linear programming, Training BibRef

Frickenstein, A., Unger, C., Stechele, W.,
Resource-Aware Optimization of DNNs for Embedded Applications,
CRV19(17-24)
IEEE DOI 1908
Optimization, Hardware, Computational modeling, Quantization (signal), Training, Sensitivity, Autonomous vehicles, CNN BibRef

Bhagoji, A.N.[Arjun Nitin], He, W.[Warren], Li, B.[Bo], Song, D.[Dawn],
Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms,
ECCV18(XII: 158-174).
Springer DOI 1810
BibRef

Kuen, J.[Jason], Kong, X.F.[Xiang-Fei], Wang, G.[Gang], Tan, Y.P.[Yap-Peng],
DelugeNets: Deep Networks with Efficient and Flexible Cross-Layer Information Inflows,
CEFR-LCV17(958-966)
IEEE DOI 1802
Complexity theory, Computational modeling, Convolution, Correlation, Neural networks BibRef

Singh, A., Kingsbury, N.G.,
Efficient Convolutional Network Learning Using Parametric Log Based Dual-Tree Wavelet ScatterNet,
CEFR-LCV17(1140-1147)
IEEE DOI 1802
Computer architecture, Feature extraction, Personal area networks, Standards, Training BibRef

Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.,
Learning Efficient Convolutional Networks through Network Slimming,
ICCV17(2755-2763)
IEEE DOI 1802
convolution, image classification, learning (artificial intelligence), neural nets, CNNs, Training BibRef

Ioannou, Y., Robertson, D., Cipolla, R., Criminisi, A.,
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups,
CVPR17(5977-5986)
IEEE DOI 1711
Computational complexity, Computational modeling, Computer architecture, Convolution, Graphics processing units, Neural networks, Training BibRef

Lin, J.H., Xing, T., Zhao, R., Zhang, Z., Srivastava, M., Tu, Z., Gupta, R.K.,
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration,
ECVW17(344-352)
IEEE DOI 1709
Backpropagation, Convolution, Field programmable gate arrays, Filtering theory, Hardware, Kernel, Training BibRef

Zhang, X., Li, Z., Loy, C.C., Lin, D.,
PolyNet: A Pursuit of Structural Diversity in Very Deep Networks,
CVPR17(3900-3908)
IEEE DOI 1711
Agriculture, Benchmark testing, Computational efficiency, Diversity reception, Network architecture, Systematics, Training BibRef

Yan, S.,
Keynotes: Deep learning for visual understanding: Effectiveness vs. efficiency,
VCIP16(1-1)
IEEE DOI 1701
BibRef

Karmakar, P., Teng, S.W., Zhang, D., Liu, Y., Lu, G.,
Improved Tamura Features for Image Classification Using Kernel Based Descriptors,
DICTA17(1-7)
IEEE DOI 1804
BibRef
And:
Improved Kernel Descriptors for Effective and Efficient Image Classification,
DICTA17(1-8)
IEEE DOI 1804
BibRef
Earlier:
Combining Pyramid Match Kernel and Spatial Pyramid for Image Classification,
DICTA16(1-8)
IEEE DOI 1701
Gabor filters, image colour analysis, image segmentation. feature extraction, image classification, image colour analysis, image representation, effective image classification, BibRef

Karmakar, P., Teng, S.W., Lu, G., Zhang, D.,
Rotation Invariant Spatial Pyramid Matching for Image Classification,
DICTA15(1-8)
IEEE DOI 1603
image classification BibRef

Opitz, M.[Michael], Possegger, H.[Horst], Bischof, H.[Horst],
Efficient Model Averaging for Deep Neural Networks,
ACCV16(II: 205-220).
Springer DOI 1704
BibRef

Zhang, Z.M.[Zi-Ming], Chen, Y.T.[Yu-Ting], Saligrama, V.[Venkatesh],
Efficient Training of Very Deep Neural Networks for Supervised Hashing,
CVPR16(1487-1495)
IEEE DOI 1612
BibRef

Smith, L.N.,
Cyclical Learning Rates for Training Neural Networks,
WACV17(464-472)
IEEE DOI 1609
Computational efficiency, Computer architecture, Neural networks, Schedules, Training, Tuning BibRef

Cardona-Escobar, A.F.[Andrés F.], Giraldo-Forero, A.F.[Andrés F.], Castro-Ospina, A.E.[Andrés E.], Jaramillo-Garzón, J.A.[Jorge A.],
Efficient Hyperparameter Optimization in Convolutional Neural Networks by Learning Curves Prediction,
CIARP17(143-151).
Springer DOI 1802
BibRef

Smith, L.N., Hand, E.M., Doster, T.,
Gradual DropIn of Layers to Train Very Deep Neural Networks,
CVPR16(4763-4771)
IEEE DOI 1612
BibRef

Pasquet, J., Chaumont, M., Subsol, G., Derras, M.,
Speeding-up a convolutional neural network by connecting an SVM network,
ICIP16(2286-2290)
IEEE DOI 1610
Computational efficiency BibRef

Park, W.S., Kim, M.,
CNN-based in-loop filtering for coding efficiency improvement,
IVMSP16(1-5)
IEEE DOI 1608
Convolution BibRef

Moons, B.[Bert], de Brabandere, B.[Bert], Van Gool, L.J.[Luc J.], Verhelst, M.[Marian],
Energy-efficient ConvNets through approximate computing,
WACV16(1-8)
IEEE DOI 1606
Approximation algorithms BibRef

Li, N., Takaki, S., Tomioka, Y., Kitazawa, H.,
A multistage dataflow implementation of a Deep Convolutional Neural Network based on FPGA for high-speed object recognition,
Southwest16(165-168)
IEEE DOI 1605
Acceleration BibRef

Hsu, F.C., Gubbi, J., Palaniswami, M.,
Learning Efficiently- The Deep CNNs-Tree Network,
DICTA15(1-7)
IEEE DOI 1603
learning (artificial intelligence) BibRef

Highlander, T.[Tyler], Rodriguez, A.[Andres],
Very Efficient Training of Convolutional Neural Networks using Fast Fourier Transform and Overlap-and-Add,
BMVC15(xx-yy).
DOI Link 1601
BibRef

Zou, X.Y.[Xiao-Yi], Xu, X.M.[Xiang-Min], Qing, C.M.[Chun-Mei], Xing, X.F.[Xiao-Fen],
High speed deep networks based on Discrete Cosine Transformation,
ICIP14(5921-5925)
IEEE DOI 1502
Accuracy BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Pruning.


Last update: Jun 29, 2020 at 10:24:28