14.5.7.5.2 Efficient Implementations of Convolutional Neural Networks

CNN. Efficient Implementation. Efficiency issues, low power, etc.

Cao, Y.Q.[Yong-Qiang], Chen, Y.[Yang], Khosla, D.[Deepak],
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition,
IJCV(113), No. 1, May 2015, pp. 54-66.
Springer DOI 1506
BibRef

Sze, V., Chen, Y.H., Yang, T.J., Emer, J.S.,
Efficient Processing of Deep Neural Networks: A Tutorial and Survey,
PIEEE(105), No. 12, December 2017, pp. 2295-2329.
IEEE DOI 1712
Survey, Deep Neural Networks. Artificial intelligence, Benchmark testing, Biological neural networks, Computer architecture, spatial architectures BibRef

Cavigelli, L., Benini, L.,
Origami: A 803-GOp/s/W Convolutional Network Accelerator,
CirSysVideo(27), No. 11, November 2017, pp. 2461-2475.
IEEE DOI 1712
Computer architecture, Computer vision, Feature extraction, Machine learning, Mobile communication, Neural networks, very large scale integration BibRef

Ghesu, F.C.[Florin C.], Krubasik, E., Georgescu, B., Singh, V., Zheng, Y., Hornegger, J.[Joachim], Comaniciu, D.,
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing,
MedImg(35), No. 5, May 2016, pp. 1217-1228.
IEEE DOI 1605
Context BibRef

Revathi, A.R., Kumar, D.[Dhananjay],
An efficient system for anomaly detection using deep learning classifier,
SIViP(11), No. 2, February 2017, pp. 291-299.
WWW Link. 1702
BibRef

Sun, B., Feng, H.,
Efficient Compressed Sensing for Wireless Neural Recording: A Deep Learning Approach,
SPLetters(24), No. 6, June 2017, pp. 863-867.
IEEE DOI 1705
Compressed sensing, Cost function, Dictionaries, Sensors, Training, Wireless communication, Wireless sensor networks, Compressed sensing (CS), deep neural network, wireless neural recording BibRef

Xu, T.B.[Ting-Bing], Yang, P.[Peipei], Zhang, X.Y.[Xu-Yao], Liu, C.L.[Cheng-Lin],
LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation,
PR(88), 2019, pp. 272-284.
Elsevier DOI 1901
Deep network acceleration and compression, Architecture distillation, Lightweight network BibRef

Feng, J.[Jie], Wang, L.[Lin], Yu, H.[Haipeng], Jiao, L.C.[Li-Cheng], Zhang, X.R.[Xiang-Rong],
Divide-and-Conquer Dual-Architecture Convolutional Neural Network for Classification of Hyperspectral Images,
RS(11), No. 5, 2019, pp. xx-yy.
DOI Link 1903
BibRef

Kim, D.H.[Dae Ha], Lee, M.K.[Min Kyu], Lee, S.H.[Seung Hyun], Song, B.C.[Byung Cheol],
Macro unit-based convolutional neural network for very light-weight deep learning,
IVC(87), 2019, pp. 68-75.
Elsevier DOI 1906
BibRef
Earlier: A1, A3, A4, Only:
MUNet: Macro Unit-Based Convolutional Neural Network for Mobile Devices,
EfficientDeep18(1749-17498)
IEEE DOI 1812
Deep neural networks, Light-weight deep learning, Macro-unit. Convolution, Computational complexity, Mobile handsets, Neural networks, Performance evaluation BibRef

Zhang, C.Y.[Chun-Yang], Zhao, Q.[Qi], Chen, C.L.P.[C.L. Philip], Liu, W.X.[Wen-Xi],
Deep compression of probabilistic graphical networks,
PR(96), 2019, pp. 106979.
Elsevier DOI 1909
Deep compression, Probabilistic graphical models, Probabilistic graphical networks, Deep learning BibRef

Brillet, L.F., Mancini, S., Cleyet-Merle, S., Nicolas, M.,
Tunable CNN Compression Through Dimensionality Reduction,
ICIP19(3851-3855)
IEEE DOI 1910
CNN, PCA, compression BibRef

Dong, Y.P.[Yin-Peng], Ni, R.K.[Ren-Kun], Li, J.G.[Jian-Guo], Chen, Y.R.[Yu-Rong], Su, H.[Hang], Zhu, J.[Jun],
Stochastic Quantization for Learning Accurate Low-Bit Deep Neural Networks,
IJCV(127), No. 11-12, December 2019, pp. 1629-1642.
Springer DOI 1911
BibRef

Zhou, A.[Aojun], Yao, A.B.[An-Bang], Wang, K.[Kuan], Chen, Y.R.[Yu-Rong],
Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks,
CVPR18(9426-9435)
IEEE DOI 1812
Computer vision, Pattern recognition BibRef

Lin, S.H.[Shao-Hui], Ji, R.R.[Rong-Rong], Chen, C.[Chao], Tao, D.C.[Da-Cheng], Luo, J.B.[Jie-Bo],
Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer,
PAMI(41), No. 12, December 2019, pp. 2889-2905.
IEEE DOI 1911
Knowledge transfer, Image coding, Task analysis, Information exchange, Computational modeling, CNN acceleration BibRef

Chen, S.[Shi], Zhao, Q.[Qi],
Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations,
PAMI(41), No. 12, December 2019, pp. 3048-3056.
IEEE DOI 1911
Computational modeling, Computational efficiency, Feature extraction, Task analysis, Convolutional neural networks BibRef
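
Many of the pruning papers in this section share a common primitive: score substructures by importance and drop the weakest. A generic magnitude-based filter-pruning sketch in numpy (L1 scores for illustration only; not the layer-wise criterion of the paper above):

    import numpy as np

    def prune_filters(w, keep_ratio=0.5):
        """w: (C_out, C_in, k, k) conv weights; drop filters with the smallest L1 norm.
        Generic magnitude criterion, for illustration only."""
        scores = np.abs(w).sum(axis=(1, 2, 3))             # one L1 score per output filter
        keep = np.argsort(scores)[-int(len(scores) * keep_ratio):]
        return w[np.sort(keep)]                            # pruned weight tensor

    w = np.random.randn(64, 32, 3, 3)
    assert prune_filters(w).shape == (32, 32, 3, 3)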

Mummadi, C.K.[Chaithanya Kumar], Genewein, T.[Tim], Zhang, D.[Dan], Brox, T.[Thomas], Fischer, V.[Volker],
Group Pruning Using a Bounded-Lp Norm for Group Gating and Regularization,
GCPR19(139-155).
Springer DOI 1911
BibRef

Tagaris, T.[Thanos], Sdraka, M.[Maria], Stafylopatis, A.[Andreas],
High-Resolution Class Activation Mapping,
ICIP19(4514-4518)
IEEE DOI 1910
Discriminative localization, Class Activation Map, Deep Learning, Convolutional Neural Networks BibRef

Lubana, E.S., Dick, R.P., Aggarwal, V., Pradhan, P.M.,
Minimalistic Image Signal Processing for Deep Learning Applications,
ICIP19(4165-4169)
IEEE DOI 1910
Deep learning accelerators, Image signal processor, RAW images, Covariate shift BibRef

Sun, L.[Li], Yu, X.Y.[Xiao-Yi], Wang, L.[Liuan], Sun, J.[Jun], Inakoshi, H.[Hiroya], Kobayashi, K.[Ken], Kobashi, H.[Hiromichi],
Automatic Neural Network Search Method for Open Set Recognition,
ICIP19(4090-4094)
IEEE DOI 1910
Neural network search, open set, search space, feature distribution, center loss BibRef

Saha, A.[Avinab], Ram, K.S.[K. Sai], Mukhopadhyay, J.[Jayanta], Das, P.P.[Partha Pratim], Patra, A.[Amit],
Fitness Based Layer Rank Selection Algorithm for Accelerating CNNs by Candecomp/Parafac (CP) Decompositions,
ICIP19(3402-3406)
IEEE DOI 1910
CP Decompositions, FLRS, Accelerating CNNs, Rank Selection, Compression BibRef

Xu, D., Lee, M.L., Hsu, W.,
Patch-Level Regularizer for Convolutional Neural Network,
ICIP19(3232-3236)
IEEE DOI 1910
BibRef

Yoshioka, K., Lee, E., Wong, S., Horowitz, M.,
Dataset Culling: Towards Efficient Training of Distillation-Based Domain Specific Models,
ICIP19(3237-3241)
IEEE DOI 1910
Object Detection, Training Efficiency, Distillation, Dataset Culling, Deep Learning BibRef

Wang, W.T.[Wei-Ting], Li, H.L.[Han-Lin], Lin, W.S.[Wei-Shiang], Chiang, C.M.[Cheng-Ming], Tsai, Y.M.[Yi-Min],
Architecture-Aware Network Pruning for Vision Quality Applications,
ICIP19(2701-2705)
IEEE DOI 1910
Pruning, Vision Quality, Network Architecture BibRef

Kim, M., Park, C., Kim, S., Hong, T., Ro, W.W.,
Efficient Dilated-Winograd Convolutional Neural Networks,
ICIP19(2711-2715)
IEEE DOI 1910
Image processing and computer vision, dilated convolution, Winograd convolution, neural network, graphics processing unit BibRef
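
For context, accelerator papers like the one above build on Winograd minimal filtering; the F(2,3) form computes two outputs of a 3-tap correlation with four multiplications instead of six. A minimal numpy sketch using the standard transform matrices (the generic algorithm, not the paper's dilated variant):

    import numpy as np

    # Winograd F(2,3): two outputs of a 3-tap filter from a 4-sample tile
    BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
    G  = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
    AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

    d = np.random.randn(4)                      # input tile
    g = np.random.randn(3)                      # filter taps
    y = AT @ ((G @ g) * (BT @ d))               # 4 elementwise multiplies
    ref = np.array([d[0:3] @ g, d[1:4] @ g])    # direct correlation: 6 multiplies
    assert np.allclose(y, ref)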

Saporta, A., Chen, Y., Blot, M., Cord, M.,
Reve: Regularizing Deep Learning with Variational Entropy Bound,
ICIP19(1610-1614)
IEEE DOI 1910
Deep learning, regularization, invariance, information theory, image understanding BibRef

Banerjee, S., Chakraborty, S.,
Deepsub: A Novel Subset Selection Framework for Training Deep Learning Architectures,
ICIP19(1615-1619)
IEEE DOI 1910
Submodular optimization, Deep learning BibRef

Zhao, W., Yi, R., Liu, Y.,
An Adaptive Filter for Deep Learning Networks on Large-Scale Point Cloud,
ICIP19(1620-1624)
IEEE DOI 1910
Large-scale point cloud filtering, super-points, deep learning BibRef

Zhang, Y., Wang, H., Luo, Y., Yu, L., Hu, H., Shan, H., Quek, T.Q.S.,
Three-Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,
ICIP19(4270-4274)
IEEE DOI 1910
3D CNN, video analysis, model compression, structured pruning, regularization BibRef

Hu, Y., Li, J., Long, X., Hu, S., Zhu, J., Wang, X., Gu, Q.,
Cluster Regularized Quantization for Deep Networks Compression,
ICIP19(914-918)
IEEE DOI 1910
deep neural networks, object classification, model compression, quantization BibRef

Hu, Y., Sun, S., Li, J., Zhu, J., Wang, X., Gu, Q.,
Multi-Loss-Aware Channel Pruning of Deep Networks,
ICIP19(889-893)
IEEE DOI 1910
deep neural networks, object classification, model compression, channel pruning BibRef

Mitschke, N., Heizmann, M., Noffz, K., Wittmann, R.,
A Fixed-Point Quantization Technique for Convolutional Neural Networks Based on Weight Scaling,
ICIP19(3836-3840)
IEEE DOI 1910
CNNs, Fixed Point Quantization, Image Processing, Machine Vision, Deep Learning BibRef

Choi, Y., Choi, J., Moon, H., Lee, J., Chang, J.,
Accelerating Framework for Simultaneous Optimization of Model Architectures and Training Hyperparameters,
ICIP19(3831-3835)
IEEE DOI 1910
Deep Learning, Model Hyperparameters BibRef

Zhe, W., Lin, J., Chandrasekhar, V., Girod, B.,
Optimizing the Bit Allocation for Compression of Weights and Activations of Deep Neural Networks,
ICIP19(3826-3830)
IEEE DOI 1910
Deep Learning, Coding, Compression BibRef

Lei, X., Liu, L., Zhou, Z., Sun, H., Zheng, N.,
Exploring Hardware Friendly Bottleneck Architecture in CNN for Embedded Computing Systems,
ICIP19(4180-4184)
IEEE DOI 1910
Lightweight/Mobile CNN model, Model optimization, Embedded System, Hardware Accelerating. BibRef

Geng, X.[Xue], Lin, J.[Jie], Zhao, B.[Bin], Kong, A.[Anmin], Aly, M.M.S.[Mohamed M. Sabry], Chandrasekhar, V.[Vijay],
Hardware-Aware Softmax Approximation for Deep Neural Networks,
ACCV18(IV:107-122).
Springer DOI 1906
BibRef

Chen, W.C.[Wei-Chun], Chang, C.C.[Chia-Che], Lee, C.R.[Che-Rung],
Knowledge Distillation with Feature Maps for Image Classification,
ACCV18(III:200-215).
Springer DOI 1906
BibRef

Groh, F.[Fabian], Wieschollek, P.[Patrick], Lensch, H.P.A.[Hendrik P. A.],
Flex-Convolution,
ACCV18(I:105-122).
Springer DOI 1906
BibRef

Yang, L.[Lu], Song, Q.[Qing], Li, Z.X.[Zuo-Xin], Wu, Y.Q.[Ying-Qi], Li, X.J.[Xiao-Jie], Hu, M.J.[Meng-Jie],
Cross Connected Network for Efficient Image Recognition,
ACCV18(I:56-71).
Springer DOI 1906
BibRef

Ignatov, A.[Andrey], Timofte, R.[Radu], Chou, W.[William], Wang, K.[Ke], Wu, M.[Max], Hartley, T.[Tim], Van Gool, L.J.[Luc J.],
AI Benchmark: Running Deep Neural Networks on Android Smartphones,
PerceptualRest18(V:288-314).
Springer DOI 1905
BibRef

Li, X., Zhang, S., Jiang, B., Qi, Y., Chuah, M.C., Bi, N.,
DAC: Data-Free Automatic Acceleration of Convolutional Networks,
WACV19(1598-1606)
IEEE DOI 1904
convolutional neural nets, image classification, Internet of Things, learning (artificial intelligence), Deep learning BibRef

He, Y., Liu, X., Zhong, H., Ma, Y.,
AddressNet: Shift-Based Primitives for Efficient Convolutional Neural Networks,
WACV19(1213-1222)
IEEE DOI 1904
convolutional neural nets, coprocessors, learning (artificial intelligence), parallel algorithms, Fuses BibRef

Singh, P., Kadi, V.S.R., Verma, N., Namboodiri, V.P.,
Stability Based Filter Pruning for Accelerating Deep CNNs,
WACV19(1166-1174)
IEEE DOI 1904
computer networks, graphics processing units, learning (artificial intelligence), neural nets, Libraries BibRef

He, Z.Z.[Zhe-Zhi], Gong, B.Q.[Bo-Qing], Fan, D.L.[De-Liang],
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy,
WACV19(913-921)
IEEE DOI 1904
Reduce weights to -1, 0, +1. Convolutional neural nets, embedded systems, image classification, image coding, image representation, Hardware BibRef
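
The note above summarizes the idea: constrain weights to {-1, 0, +1} so multiplications reduce to sign flips. A hedged sketch of one common threshold-based ternarization scheme (illustrative only; the paper's actual training procedure differs):

    import numpy as np

    def ternarize(w, threshold=0.05):
        """Map weights to alpha * {-1, 0, +1} with a per-tensor scale alpha.
        The threshold is a free parameter in this sketch."""
        mask = np.abs(w) > threshold
        alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
        return alpha * np.sign(w) * mask

    w = np.random.randn(64, 3, 3, 3) * 0.1
    levels = np.unique(np.sign(ternarize(w)))
    assert set(levels) <= {-1.0, 0.0, 1.0}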

Bicici, U.C.[Ufuk Can], Keskin, C.[Cem], Akarun, L.[Lale],
Conditional Information Gain Networks,
ICPR18(1390-1395)
IEEE DOI 1812
Decision trees, Neural networks, Computational modeling, Training, Routing, Vegetation, Probability distribution BibRef

Aldana, R.[Rodrigo], Campos-Macías, L.[Leobardo], Zamora, J.[Julio], Gomez-Gutierrez, D.[David], Cruz, A.[Adan],
Dynamic Learning Rate for Neural Networks: A Fixed-Time Stability Approach,
ICPR18(1378-1383)
IEEE DOI 1812
Training, Artificial neural networks, Approximation algorithms, Optimization, Pattern recognition, Heuristic algorithms, Lyapunov methods BibRef

Kung, H.T., McDanel, B., Zhang, S.Q.,
Adaptive Tiling: Applying Fixed-size Systolic Arrays To Sparse Convolutional Neural Networks,
ICPR18(1006-1011)
IEEE DOI 1812
Sparse matrices, Arrays, Convolution, Adaptive arrays, Microprocessors, Adaptation models BibRef

Grelsson, B., Felsberg, M.,
Improved Learning in Convolutional Neural Networks with Shifted Exponential Linear Units (ShELUs),
ICPR18(517-522)
IEEE DOI 1812
convolution, feedforward neural nets, learning (artificial intelligence). BibRef

Zheng, W., Zhang, Z.,
Accelerating the Classification of Very Deep Convolutional Network by A Cascading Approach,
ICPR18(355-360)
IEEE DOI 1812
computational complexity, convolution, entropy, feedforward neural nets, image classification, Measurement uncertainty BibRef

Manessi, F., Rozza, A., Bianco, S., Napoletano, P., Schettini, R.,
Automated Pruning for Deep Neural Network Compression,
ICPR18(657-664)
IEEE DOI 1812
Training, Neural networks, Quantization (signal), Task analysis, Feature extraction, Pipelines, Image coding BibRef

Zhong, G., Yao, H., Zhou, H.,
Merging Neurons for Structure Compression of Deep Networks,
ICPR18(1462-1467)
IEEE DOI 1812
Neurons, Neural networks, Merging, Computer architecture, Matrix decomposition, Mathematical model, Prototypes BibRef

Bhowmik, P.[Pankaj], Pantho, M.J.H.[M. Jubaer Hossain], Asadinia, M.[Marjan], Bobda, C.[Christophe],
Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor,
ECVW18(786-7868)
IEEE DOI 1812
Computer architecture, Image sensors, Visualization, Program processors, Clocks, Image processing BibRef

Aggarwal, V.[Vaneet], Wang, W.L.[Wen-Lin], Eriksson, B.[Brian], Sun, Y.F.[Yi-Fan], Wan, W.Q.[Wen-Qi],
Wide Compression: Tensor Ring Nets,
CVPR18(9329-9338)
IEEE DOI 1812
Neural networks, Image coding, Shape, Merging, Computer architecture BibRef

Saeedan, F.[Faraz], Weber, N.[Nicolas], Goesele, M.[Michael], Roth, S.[Stefan],
Detail-Preserving Pooling in Deep Networks,
CVPR18(9108-9116)
IEEE DOI 1812
Standards, Visualization, Convolutional neural networks, Task analysis, Feature extraction, Distortion, Adaptive systems BibRef

Yu, R., Li, A., Chen, C., Lai, J., Morariu, V.I., Han, X., Gao, M., Lin, C., Davis, L.S.,
NISP: Pruning Networks Using Neuron Importance Score Propagation,
CVPR18(9194-9203)
IEEE DOI 1812
Neurons, Redundancy, Optimization, Acceleration, Biological neural networks, Task analysis, Feature extraction BibRef

Ren, M.[Mengye], Pokrovsky, A.[Andrei], Yang, B.[Bin], Urtasun, R.[Raquel],
SBNet: Sparse Blocks Network for Fast Inference,
CVPR18(8711-8720)
IEEE DOI 1812
Convolution, Kernel, Shape, Object detection, Task analysis BibRef

Xie, G.T.[Guo-Tian], Wang, J.D.[Jing-Dong], Zhang, T.[Ting], Lai, J.H.[Jian-Huang], Hong, R.[Richang], Qi, G.J.[Guo-Jun],
Interleaved Structured Sparse Convolutional Neural Networks,
CVPR18(8847-8856)
IEEE DOI 1812
Convolution, Kernel, Sparse matrices, Redundancy, Computational modeling, Computer architecture, Computational complexity BibRef

Kim, E.[Eunwoo], Ahn, C.[Chanho], Oh, S.[Songhwai],
NestedNet: Learning Nested Sparse Structures in Deep Neural Networks,
CVPR18(8669-8678)
IEEE DOI 1812
Task analysis, Knowledge engineering, Neural networks, Computer architecture, Optimization, Redundancy BibRef

Carreira-Perpinan, M.A., Idelbayev, Y.,
'Learning-Compression' Algorithms for Neural Net Pruning,
CVPR18(8532-8541)
IEEE DOI 1812
Neural networks, Optimization, Training, Neurons, Performance evaluation, Mobile handsets, Quantization (signal) BibRef

Tung, F., Mori, G.,
CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization,
CVPR18(7873-7882)
IEEE DOI 1812
Quantization (signal), Training, Neural networks, Visualization, Image coding, Task analysis, Optimization BibRef

Bulò, S.R.[Samuel Rota], Porzi, L.[Lorenzo], Kontschieder, P.[Peter],
In-place Activated BatchNorm for Memory-Optimized Training of DNNs,
CVPR18(5639-5647)
IEEE DOI 1812
Reduce memory needs. Training, Buffer storage, Checkpointing, Memory management, Standards, Semantics BibRef
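
The memory saving rests on the activation being invertible: if only the post-activation tensor is kept, the BatchNorm output can be recomputed on the backward pass rather than stored. A minimal numpy sketch of that key property (leaky ReLU, the invertible activation the approach relies on):

    import numpy as np

    def leaky_relu(x, a=0.01):
        return np.where(x >= 0, x, a * x)

    def leaky_relu_inv(y, a=0.01):
        # exact inverse for a > 0, so the pre-activation buffer need not be saved
        return np.where(y >= 0, y, y / a)

    z = np.random.randn(1000)       # stand-in for a BatchNorm output
    y = leaky_relu(z)               # only y is kept in memory
    assert np.allclose(leaky_relu_inv(y), z)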

Ma, W., Wu, Y., Wang, Z., Wang, G.,
MDCN: Multi-Scale, Deep Inception Convolutional Neural Networks for Efficient Object Detection,
ICPR18(2510-2515)
IEEE DOI 1812
Feature extraction, Object detection, Computational modeling, Task analysis, Convolutional neural networks, Hardware, Real-time systems BibRef

Zhang, D.,
clcNet: Improving the Efficiency of Convolutional Neural Network Using Channel Local Convolutions,
CVPR18(7912-7919)
IEEE DOI 1812
Kernel, Computational modeling, Computational efficiency, Convolutional neural networks, Stacking, Computer vision BibRef

Zhuang, B., Shen, C., Tan, M., Liu, L., Reid, I.D.,
Towards Effective Low-Bitwidth Convolutional Neural Networks,
CVPR18(7920-7928)
IEEE DOI 1812
Quantization (signal), Training, Neural networks, Optimization, Zirconium, Hardware, Convolution BibRef

Kuen, J., Kong, X., Lin, Z., Wang, G., Yin, J., See, S., Tan, Y.,
Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks,
CVPR18(7929-7938)
IEEE DOI 1812
Training, Computational modeling, Computational efficiency, Stochastic processes, Visualization, Network architecture, Computer vision BibRef

Shazeer, N., Fatahalian, K., Mark, W.R., Mullapudi, R.T.,
HydraNets: Specialized Dynamic Architectures for Efficient Inference,
CVPR18(8080-8089)
IEEE DOI 1812
Computer architecture, Training, Computational modeling, Task analysis, Computational efficiency, Optimization, Routing BibRef

Rebuffi, S., Vedaldi, A., Bilen, H.,
Efficient Parametrization of Multi-domain Deep Neural Networks,
CVPR18(8119-8127)
IEEE DOI 1812
Task analysis, Neural networks, Adaptation models, Feature extraction, Visualization, Computational modeling, Standards BibRef

Cao, S.[Sen], Liu, Y.Z.[Ya-Zhou], Zhou, C.X.[Chang-Xin], Sun, Q.S.[Quan-Sen], Pongsak, L.S.[La-Sang], Shen, S.M.[Sheng Mei],
ThinNet: An Efficient Convolutional Neural Network for Object Detection,
ICPR18(836-841)
IEEE DOI 1812
Convolution, Computational modeling, Object detection, Neural networks, Computer architecture, Training, ThinNet BibRef

Kobayashi, T.,
Analyzing Filters Toward Efficient ConvNet,
CVPR18(5619-5628)
IEEE DOI 1812
Convolution, Feature extraction, Neurons, Image reconstruction, Visualization, Shape, Computer vision BibRef

Chou, Y., Chan, Y., Lee, J., Chiu, C., Chen, C.,
Merging Deep Neural Networks for Mobile Devices,
EfficientDeep18(1767-17678)
IEEE DOI 1812
Task analysis, Convolution, Merging, Computational modeling, Neural networks, Kernel, Computer architecture BibRef

Zhang, Q., Zhang, M., Wang, M., Sui, W., Meng, C., Yang, J., Kong, W., Cui, X., Lin, W.,
Efficient Deep Learning Inference Based on Model Compression,
EfficientDeep18(1776-17767)
IEEE DOI 1812
Computational modeling, Convolution, Adaptation models, Image edge detection, Quantization (signal), Kernel BibRef

Faraone, J., Fraser, N., Blott, M., Leong, P.H.W.,
SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks,
CVPR18(4300-4309)
IEEE DOI 1812
Quantization (signal), Hardware, Symmetric matrices, Training, Complexity theory, Neural networks, Field programmable gate arrays BibRef

Ma, N.N.[Ning-Ning], Zhang, X.Y.[Xiang-Yu], Zheng, H.T.[Hai-Tao], Sun, J.[Jian],
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,
ECCV18(XIV: 122-138).
Springer DOI 1810
BibRef

Zhang, X.Y.[Xiang-Yu], Zhou, X., Lin, M., Sun, J.,
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices,
CVPR18(6848-6856)
IEEE DOI 1812
Convolution, Complexity theory, Computer architecture, Mobile handsets, Computational modeling, Task analysis, Neural networks BibRef
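
The channel shuffle at the core of ShuffleNet is a cheap reshape-transpose that lets information cross group-convolution boundaries; a minimal numpy sketch (the operation as described in the paper, with shapes chosen here for illustration):

    import numpy as np

    def channel_shuffle(x, groups):
        """x: (N, C, H, W) feature map, C divisible by groups."""
        n, c, h, w = x.shape
        return (x.reshape(n, groups, c // groups, h, w)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(n, c, h, w))

    x = np.arange(2 * 6).reshape(2, 6, 1, 1)
    # with 2 groups, channels [0 1 2 | 3 4 5] interleave to [0 3 1 4 2 5]
    assert list(channel_shuffle(x, 2)[0, :, 0, 0]) == [0, 3, 1, 4, 2, 5]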

Prabhu, A.[Ameya], Varma, G.[Girish], Namboodiri, A.[Anoop],
Deep Expander Networks: Efficient Deep Networks from Graph Theory,
ECCV18(XIII: 20-36).
Springer DOI 1810
BibRef

Freeman, I., Roese-Koerner, L., Kummert, A.,
Effnet: An Efficient Structure for Convolutional Neural Networks,
ICIP18(6-10)
IEEE DOI 1809
Convolution, Computational modeling, Optimization, Hardware, Kernel, Data compression, Convolutional neural networks, real-time inference BibRef

Zhou, Z., Zhou, W., Li, H., Hong, R.,
Online Filter Clustering and Pruning for Efficient Convnets,
ICIP18(11-15)
IEEE DOI 1809
Training, Acceleration, Neural networks, Convolution, Tensile stress, Force, Clustering algorithms, Deep neural networks, similar filter, cluster loss BibRef

Elordi, U.[Unai], Unzueta, L.[Luis], Arganda-Carreras, I.[Ignacio], Otaegui, O.[Oihana],
How Can Deep Neural Networks Be Generated Efficiently for Devices with Limited Resources?,
AMDO18(24-33).
Springer DOI 1807
BibRef

Prabhu, A.[Ameya], Batchu, V.[Vishal], Gajawada, R.[Rohit], Munagala, S.A.[Sri Aurobindo], Namboodiri, A.[Anoop],
Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory,
WACV18(821-829)
IEEE DOI 1806
approximation theory, data compression, image classification, image coding, image representation, Quantization (signal) BibRef

Lee, T.K.[Tae Kwan], Baddar, W.J.[Wissam J.], Kim, S.T.[Seong Tae], Ro, Y.M.[Yong Man],
Convolution with Logarithmic Filter Groups for Efficient Shallow CNN,
MMMod18(I:117-129).
Springer DOI 1802
filter grouping in convolution layers. BibRef
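
Filter grouping divides a convolution's parameter count by the number of groups; a quick sketch of the arithmetic (uniform grouping for illustration, not the paper's logarithmic layout):

    def conv_params(c_in, c_out, k, groups=1):
        # each group connects c_in/groups inputs to c_out/groups outputs
        return (c_in // groups) * k * k * (c_out // groups) * groups

    assert conv_params(256, 256, 3) == 589824             # dense 3x3 convolution
    assert conv_params(256, 256, 3, groups=4) == 147456   # 4 groups: 1/4 the weights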

Véniat, T.[Tom], Denoyer, L.[Ludovic],
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks,
CVPR18(3492-3500)
IEEE DOI 1812
Computational modeling, Computer architecture, Stochastic processes, Neural networks, Fabrics, Predictive models, Computer vision BibRef

Huang, G.[Gao], Liu, Z.[Zhuang], van der Maaten, L.[Laurens], Weinberger, K.Q.[Kilian Q.],
Densely Connected Convolutional Networks,
CVPR17(2261-2269)
IEEE DOI 1711
Award, CVPR. Convolution, Convolutional codes, Network architecture, Neural networks, Road transportation, Training BibRef

Huang, G.[Gao], Sun, Y.[Yu], Liu, Z.[Zhuang], Sedra, D.[Daniel], Weinberger, K.Q.[Kilian Q.],
Deep Networks with Stochastic Depth,
ECCV16(IV: 646-661).
Springer DOI 1611
BibRef

Huang, G.[Gao], Liu, S.C.[Shi-Chen], van der Maaten, L.[Laurens], Weinberger, K.Q.[Kilian Q.],
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions,
CVPR18(2752-2761)
IEEE DOI 1812
CNN on a phone. Training, Computer architecture, Computational modeling, Standards, Mobile handsets, Network architecture, Indexes BibRef

Zhao, G., Zhang, Z., Guan, H., Tang, P., Wang, J.,
Rethinking ReLU to Train Better CNNs,
ICPR18(603-608)
IEEE DOI 1812
Convolution, Tensile stress, Network architecture, Computational efficiency, Computational modeling, Pattern recognition BibRef

Chan, M., Scarafoni, D., Duarte, R., Thornton, J., Skelly, L.,
Learning Network Architectures of Deep CNNs Under Resource Constraints,
EfficientDeep18(1784-17847)
IEEE DOI 1812
Computer architecture, Computational modeling, Optimization, Adaptation models, Network architecture, Linear programming, Training BibRef

Kuen, J.[Jason], Kong, X.F.[Xiang-Fei], Wang, G.[Gang], Tan, Y.P.[Yap-Peng],
DelugeNets: Deep Networks with Efficient and Flexible Cross-Layer Information Inflows,
CEFR-LCV17(958-966)
IEEE DOI 1802
Complexity theory, Computational modeling, Convolution, Correlation, Neural networks BibRef

Singh, A., Kingsbury, N.G.,
Efficient Convolutional Network Learning Using Parametric Log Based Dual-Tree Wavelet ScatterNet,
CEFR-LCV17(1140-1147)
IEEE DOI 1802
Computer architecture, Feature extraction, Personal area networks, Standards, Training BibRef

Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.,
Learning Efficient Convolutional Networks through Network Slimming,
ICCV17(2755-2763)
IEEE DOI 1802
convolution, image classification, learning (artificial intelligence), neural nets, CNNs, Training BibRef

Yang, T.J.[Tien-Ju], Chen, Y.H.[Yu-Hsin], Sze, V.[Vivienne],
Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning,
CVPR17(6071-6079)
IEEE DOI 1711
Computational modeling, Energy consumption, Estimation, Hardware, Measurement, Memory management, Smart, phones BibRef

Ioannou, Y., Robertson, D., Cipolla, R., Criminisi, A.,
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups,
CVPR17(5977-5986)
IEEE DOI 1711
Computational complexity, Computational modeling, Computer architecture, Convolution, Graphics processing units, Neural networks, Training BibRef

Guo, J.[Jia], Potkonjak, M.[Miodrag],
Pruning ConvNets Online for Efficient Specialist Models,
ECVW17(430-437)
IEEE DOI 1709
Biological neural networks, Computational modeling, Computer vision, Convolution, Memory management, Sensitivity, analysis BibRef

Lin, J.H., Xing, T., Zhao, R., Zhang, Z., Srivastava, M., Tu, Z., Gupta, R.K.,
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration,
ECVW17(344-352)
IEEE DOI 1709
Backpropagation, Convolution, Field programmable gate arrays, Filtering theory, Hardware, Kernel, Training BibRef

Zhang, X., Li, Z., Loy, C.C., Lin, D.,
PolyNet: A Pursuit of Structural Diversity in Very Deep Networks,
CVPR17(3900-3908)
IEEE DOI 1711
Agriculture, Benchmark testing, Computational efficiency, Diversity reception, Network architecture, Systematics, Training BibRef

Yan, S.,
Keynotes: Deep learning for visual understanding: Effectiveness vs. efficiency,
VCIP16(1-1)
IEEE DOI 1701
BibRef

Karmakar, P., Teng, S.W., Zhang, D., Liu, Y., Lu, G.,
Improved Tamura Features for Image Classification Using Kernel Based Descriptors,
DICTA17(1-7)
IEEE DOI 1804
BibRef
And:
Improved Kernel Descriptors for Effective and Efficient Image Classification,
DICTA17(1-8)
IEEE DOI 1804
BibRef
Earlier:
Combining Pyramid Match Kernel and Spatial Pyramid for Image Classification,
DICTA16(1-8)
IEEE DOI 1701
Gabor filters, image colour analysis, image segmentation, feature extraction, image classification, image representation, effective image classification BibRef

Karmakar, P., Teng, S.W., Lu, G., Zhang, D.,
Rotation Invariant Spatial Pyramid Matching for Image Classification,
DICTA15(1-8)
IEEE DOI 1603
image classification BibRef

Opitz, M.[Michael], Possegger, H.[Horst], Bischof, H.[Horst],
Efficient Model Averaging for Deep Neural Networks,
ACCV16(II: 205-220).
Springer DOI 1704
BibRef

Zhang, Z.M.[Zi-Ming], Chen, Y.T.[Yu-Ting], Saligrama, V.[Venkatesh],
Efficient Training of Very Deep Neural Networks for Supervised Hashing,
CVPR16(1487-1495)
IEEE DOI 1612
BibRef

Smith, L.N.,
Cyclical Learning Rates for Training Neural Networks,
WACV17(464-472)
IEEE DOI 1609
Computational efficiency, Computer architecture, Neural networks, Schedules, Training, Tuning BibRef
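
The triangular cyclical policy varies the learning rate linearly between two bounds over a fixed half-cycle; a numpy sketch of the schedule commonly attributed to this paper (the bounds and step size below are arbitrary examples):

    import numpy as np

    def triangular_clr(t, base_lr, max_lr, stepsize):
        """Learning rate at iteration t under the triangular cyclical policy."""
        cycle = np.floor(1 + t / (2 * stepsize))
        x = np.abs(t / stepsize - 2 * cycle + 1)
        return base_lr + (max_lr - base_lr) * np.maximum(0, 1 - x)

    # rises from 1e-4 to 1e-2 over 2000 iterations, falls back, then repeats
    assert np.isclose(triangular_clr(0, 1e-4, 1e-2, 2000), 1e-4)
    assert np.isclose(triangular_clr(2000, 1e-4, 1e-2, 2000), 1e-2)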

Smith, L.N., Hand, E.M., Doster, T.,
Gradual DropIn of Layers to Train Very Deep Neural Networks,
CVPR16(4763-4771)
IEEE DOI 1612
BibRef

Pasquet, J., Chaumont, M., Subsol, G., Derras, M.,
Speeding-up a convolutional neural network by connecting an SVM network,
ICIP16(2286-2290)
IEEE DOI 1610
Computational efficiency BibRef

Park, W.S., Kim, M.,
CNN-based in-loop filtering for coding efficiency improvement,
IVMSP16(1-5)
IEEE DOI 1608
Convolution BibRef

Moons, B.[Bert], de Brabandere, B.[Bert], Van Gool, L.J.[Luc J.], Verhelst, M.[Marian],
Energy-efficient ConvNets through approximate computing,
WACV16(1-8)
IEEE DOI 1606
Approximation algorithms BibRef

Hsu, F.C., Gubbi, J., Palaniswami, M.,
Learning Efficiently- The Deep CNNs-Tree Network,
DICTA15(1-7)
IEEE DOI 1603
learning (artificial intelligence) BibRef

Highlander, T.[Tyler], Rodriguez, A.[Andres],
Very Efficient Training of Convolutional Neural Networks using Fast Fourier Transform and Overlap-and-Add,
BMVC15(xx-yy).
DOI Link 1601
BibRef
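
FFT-based overlap-and-add convolution is available off the shelf and is easy to sanity-check against direct convolution; a short scipy sketch (generic 1-D case, unrelated to the paper's training pipeline):

    import numpy as np
    from scipy.signal import oaconvolve

    x = np.random.randn(10000)    # long input signal
    k = np.random.randn(64)       # short filter
    # overlap-and-add costs O(N log M) versus O(N*M) for direct convolution
    assert np.allclose(oaconvolve(x, k), np.convolve(x, k))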

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Forgetting, Explanation, Interpretation, Understanding of Convolutional Neural Networks.


Last update: Dec 4, 2019 at 17:24:08