Zhang, X.Y.[Xiang-Yu],
Zou, J.H.[Jian-Hua],
He, K.M.[Kai-Ming],
Sun, J.[Jian],
Accelerating Very Deep Convolutional Networks for Classification and
Detection,
PAMI(38), No. 10, October 2016, pp. 1943-1955.
IEEE DOI
1609
Acceleration
BibRef
He, Y.,
Zhang, X.Y.[Xiang-Yu],
Sun, J.[Jian],
Channel Pruning for Accelerating Very Deep Neural Networks,
ICCV17(1398-1406)
IEEE DOI
1802
iterative methods, learning (artificial intelligence),
least squares approximations, neural nets, regression analysis,
Training
BibRef
Zhang, X.Y.[Xiang-Yu],
Zou, J.H.[Jian-Hua],
Ming, X.[Xiang],
He, K.M.[Kai-Ming],
Sun, J.[Jian],
Efficient and accurate approximations of nonlinear convolutional
networks,
CVPR15(1984-1992)
IEEE DOI
1510
BibRef
He, K.M.[Kai-Ming],
Zhang, X.Y.[Xiang-Yu],
Ren, S.Q.[Shao-Qing],
Sun, J.[Jian],
Delving Deep into Rectifiers:
Surpassing Human-Level Performance on ImageNet Classification,
ICCV15(1026-1034)
IEEE DOI
1602
Adaptation models
BibRef
Sze, V.,
Chen, Y.H.,
Yang, T.J.,
Emer, J.S.,
Efficient Processing of Deep Neural Networks: A Tutorial and Survey,
PIEEE(105), No. 12, December 2017, pp. 2295-2329.
IEEE DOI
1712
Survey, Deep Neural Networks. Artificial intelligence, Benchmark testing,
Biological neural networks,
spatial architectures
BibRef
Cavigelli, L.,
Benini, L.,
Origami: A 803-GOp/s/W Convolutional Network Accelerator,
CirSysVideo(27), No. 11, November 2017, pp. 2461-2475.
IEEE DOI
1712
Feature extraction,
Machine learning, Mobile communication, Neural networks,
very large scale integration
BibRef
Cavigelli, L.,
Benini, L.,
CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional
Network Inference on Video Streams,
CirSysVideo(30), No. 5, May 2020, pp. 1451-1465.
IEEE DOI
2005
Learning with video.
Feature extraction, Object detection, Throughput, Convolution,
Inference algorithms, Semantics, Approximation algorithms,
object detection
BibRef
Ghesu, F.C.[Florin C.],
Krubasik, E.,
Georgescu, B.,
Singh, V.,
Zheng, Y.,
Hornegger, J.[Joachim],
Comaniciu, D.,
Marginal Space Deep Learning: Efficient Architecture for Volumetric
Image Parsing,
MedImg(35), No. 5, May 2016, pp. 1217-1228.
IEEE DOI
1605
Context
BibRef
Revathi, A.R.,
Kumar, D.[Dhananjay],
An efficient system for anomaly detection using deep learning
classifier,
SIViP(11), No. 2, February 2017, pp. 291-299.
WWW Link.
1702
BibRef
Sun, B.,
Feng, H.,
Efficient Compressed Sensing for Wireless Neural Recording:
A Deep Learning Approach,
SPLetters(24), No. 6, June 2017, pp. 863-867.
IEEE DOI
1705
Compressed sensing, Cost function, Dictionaries, Sensors, Training,
Wireless communication, Wireless sensor networks,
Compressed sensing (CS), deep neural network, wireless neural recording
BibRef
Xu, T.B.[Ting-Bing],
Yang, P.P.[Pei-Pei],
Zhang, X.Y.[Xu-Yao],
Liu, C.L.[Cheng-Lin],
LightweightNet: Toward fast and lightweight convolutional neural
networks via architecture distillation,
PR(88), 2019, pp. 272-284.
Elsevier DOI
1901
Deep network acceleration and compression,
Architecture distillation, Lightweight network
BibRef
Kim, D.H.[Dae Ha],
Lee, M.K.[Min Kyu],
Lee, S.H.[Seung Hyun],
Song, B.C.[Byung Cheol],
Macro unit-based convolutional neural network for very light-weight
deep learning,
IVC(87), 2019, pp. 68-75.
Elsevier DOI
1906
BibRef
Earlier: A1, A3, A4, Only:
MUNet: Macro Unit-Based Convolutional Neural Network for Mobile
Devices,
EfficientDeep18(1749-17498)
IEEE DOI
1812
Deep neural networks, Light-weight deep learning, Macro-unit.
Convolution, Computational complexity,
Mobile handsets, Neural networks, Performance evaluation
BibRef
Zhang, C.Y.[Chun-Yang],
Zhao, Q.[Qi],
Chen, C.L.P.[C.L. Philip],
Liu, W.X.[Wen-Xi],
Deep compression of probabilistic graphical networks,
PR(96), 2019, pp. 106979.
Elsevier DOI
1909
Deep compression, Probabilistic graphical models,
Probabilistic graphical networks, Deep learning
BibRef
Brillet, L.F.,
Mancini, S.,
Cleyet-Merle, S.,
Nicolas, M.,
Tunable CNN Compression Through Dimensionality Reduction,
ICIP19(3851-3855)
IEEE DOI
1910
CNN, PCA, compression
BibRef
Lin, S.H.[Shao-Hui],
Ji, R.R.[Rong-Rong],
Chen, C.[Chao],
Tao, D.C.[Da-Cheng],
Luo, J.B.[Jie-Bo],
Holistic CNN Compression via Low-Rank Decomposition with Knowledge
Transfer,
PAMI(41), No. 12, December 2019, pp. 2889-2905.
IEEE DOI
1911
Knowledge transfer, Image coding, Task analysis,
Information exchange, Computational modeling, CNN acceleration
BibRef
Zhang, L.[Lin],
Bu, X.K.[Xiao-Kang],
Li, B.[Bing],
XNORCONV: CNNs accelerator implemented on FPGA using a hybrid CNNs
structure and an inter-layer pipeline method,
IET-IPR(14), No. 1, January 2020, pp. 105-113.
DOI Link
1912
BibRef
Chen, Z.,
Fan, K.,
Wang, S.,
Duan, L.,
Lin, W.,
Kot, A.C.,
Toward Intelligent Sensing: Intermediate Deep Feature Compression,
IP(29), 2020, pp. 2230-2243.
IEEE DOI
2001
Visualization, Image coding, Task analysis, Feature extraction,
Deep learning, Video coding, Standardization, feature compression
BibRef
Lobel, H.[Hans],
Vidal, R.[René],
Soto, A.[Alvaro],
CompactNets: Compact Hierarchical Compositional Networks for Visual
Recognition,
CVIU(191), 2020, pp. 102841.
Elsevier DOI
2002
Deep learning, Regularization, Group sparsity, Image categorization
BibRef
Ding, L.[Lin],
Tian, Y.H.[Yong-Hong],
Fan, H.F.[Hong-Fei],
Chen, C.H.[Chang-Huai],
Huang, T.J.[Tie-Jun],
Joint Coding of Local and Global Deep Features in Videos for Visual
Search,
IP(29), 2020, pp. 3734-3749.
IEEE DOI
2002
Local deep feature, joint coding, visual search, inter-feature correlation
BibRef
Browne, D.[David],
Giering, M.[Michael],
Prestwich, S.[Steven],
PulseNetOne: Fast Unsupervised Pruning of Convolutional Neural
Networks for Remote Sensing,
RS(12), No. 7, 2020, pp. xx-yy.
DOI Link
2004
BibRef
Liu, Z.C.[Ze-Chun],
Luo, W.H.[Wen-Han],
Wu, B.Y.[Bao-Yuan],
Yang, X.[Xin],
Liu, W.[Wei],
Cheng, K.T.[Kwang-Ting],
Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance,
IJCV(128), No. 1, January 2020, pp. 202-219.
Springer DOI
2002
BibRef
Earlier: A1, A3, A2, A4, A5, A6:
Bi-Real Net: Enhancing the Performance of 1-Bit CNNs with Improved
Representational Capability and Advanced Training Algorithm,
ECCV18(XV: 747-763).
Springer DOI
1810
BibRef
Sun, F.Z.[Feng-Zhen],
Li, S.J.[Shao-Jie],
Wang, S.H.[Shao-Hua],
Liu, Q.J.[Qing-Jun],
Zhou, L.X.[Li-Xin],
CostNet: A Concise Overpass Spatiotemporal Network for Predictive
Learning,
IJGI(9), No. 4, 2020, pp. xx-yy.
DOI Link
2005
ResNet-style design applied to the temporal dimension.
BibRef
Kalayeh, M.M.[Mahdi M.],
Shah, M.[Mubarak],
Training Faster by Separating Modes of Variation in Batch-Normalized
Models,
PAMI(42), No. 6, June 2020, pp. 1483-1500.
IEEE DOI
2005
Training, Kernel, Mathematical model, Transforms,
Probability density function, Statistics, Acceleration, fisher vector
BibRef
Ma, W.C.[Wen-Chi],
Wu, Y.W.[Yuan-Wei],
Cen, F.[Feng],
Wang, G.H.[Guang-Hui],
MDFN: Multi-scale deep feature learning network for object detection,
PR(100), 2020, pp. 107149.
Elsevier DOI
2005
Deep feature learning, Multi-scale,
Semantic and contextual information, Small and occluded objects
BibRef
Patel, K.[Krushi],
Wang, G.H.[Guang-Hui],
A discriminative channel diversification network for image
classification,
PRL(153), 2022, pp. 176-182.
Elsevier DOI
2201
Image classification, Discriminative features, Channel attention mechanism
BibRef
Ma, W.C.[Wen-Chi],
Wu, Y.W.[Yuan-Wei],
Wang, Z.,
Wang, G.H.[Guang-Hui],
MDCN: Multi-Scale, Deep Inception Convolutional Neural Networks for
Efficient Object Detection,
ICPR18(2510-2515)
IEEE DOI
1812
Feature extraction, Object detection, Computational modeling,
Task analysis, Convolutional neural networks, Hardware, Real-time systems
BibRef
Lelekas, I.,
Tomen, N.,
Pintea, S.L.,
van Gemert, J.C.,
Top-Down Networks: A coarse-to-fine reimagination of CNNs,
DeepVision20(3244-3253)
IEEE DOI
2008
Feature extraction, Spatial resolution,
Merging, Task analysis, Visualization, Robustness
BibRef
Ma, L.H.[Long-Hua],
Fan, H.Y.[Hang-Yu],
Lu, Z.M.[Zhe-Ming],
Tian, D.[Dong],
Acceleration of multi-task cascaded convolutional networks,
IET-IPR(14), No. 11, September 2020, pp. 2435-2441.
DOI Link
2009
BibRef
Jiang, Y.G.[Yu-Gang],
Cheng, C.M.[Chang-Mao],
Lin, H.Y.[Hang-Yu],
Fu, Y.W.[Yan-Wei],
Learning Layer-Skippable Inference Network,
IP(29), 2020, pp. 8747-8759.
IEEE DOI
2009
Task analysis, Visualization, Computational modeling,
Biological information theory, Computational efficiency, Neurons,
neural networks
BibRef
Fang, Z.Y.[Zhen-Yu],
Ren, J.C.[Jin-Chang],
Marshall, S.[Stephen],
Zhao, H.M.[Hui-Min],
Wang, S.[Song],
Li, X.L.[Xue-Long],
Topological optimization of the DenseNet with pretrained-weights
inheritance and genetic channel selection,
PR(109), 2021, pp. 107608.
Elsevier DOI
2009
Deep convolutional neural networks, Genetic algorithms,
Parameter reduction, Structure optimization, DenseNet
BibRef
Li, G.Q.[Guo-Qing],
Zhang, M.[Meng],
Li, J.[Jiaojie],
Lv, F.[Feng],
Tong, G.D.[Guo-Dong],
Efficient densely connected convolutional neural networks,
PR(109), 2021, pp. 107610.
Elsevier DOI
2009
Convolutional neural networks, Classification,
Parameter efficiency, Densely connected
BibRef
Yang, Y.Q.[Yong-Quan],
Lv, H.J.[Hai-Jun],
Chen, N.[Ning],
Wu, Y.[Yang],
Zheng, J.Y.[Jia-Yi],
Zheng, Z.X.[Zhong-Xi],
Local minima found in the subparameter space can be effective for
ensembles of deep convolutional neural networks,
PR(109), 2021, pp. 107582.
Elsevier DOI
2009
Ensemble learning, Ensemble selection, Ensemble fusion,
Deep convolutional neural network
BibRef
Gürhanli, A.[Ahmet],
Accelerating convolutional neural network training using ProMoD
backpropagation algorithm,
IET-IPR(14), No. 13, November 2020, pp. 2957-2964.
DOI Link
2012
BibRef
Xi, J.B.[Jiang-Bo],
Ersoy, O.K.[Okan K.],
Fang, J.W.[Jian-Wu],
Cong, M.[Ming],
Wu, T.J.[Tian-Jun],
Zhao, C.Y.[Chao-Ying],
Li, Z.H.[Zhen-Hong],
Wide Sliding Window and Subsampling Network for Hyperspectral Image
Classification,
RS(13), No. 7, 2021, pp. xx-yy.
DOI Link
2104
BibRef
Xi, J.B.[Jiang-Bo],
Cong, M.[Ming],
Ersoy, O.K.[Okan K.],
Zou, W.B.[Wei-Bao],
Zhao, C.Y.[Chao-Ying],
Li, Z.H.[Zhen-Hong],
Gu, J.K.[Jun-Kai],
Wu, T.J.[Tian-Jun],
Dynamic Wide and Deep Neural Network for Hyperspectral Image
Classification,
RS(13), No. 13, 2021, pp. xx-yy.
DOI Link
2107
BibRef
Xi, J.B.[Jiang-Bo],
Ersoy, O.K.[Okan K.],
Cong, M.[Ming],
Zhao, C.Y.[Chao-Ying],
Qu, W.[Wei],
Wu, T.J.[Tian-Jun],
Wide and Deep Fourier Neural Network for Hyperspectral Remote Sensing
Image Classification,
RS(14), No. 12, 2022, pp. xx-yy.
DOI Link
2206
BibRef
Avola, D.[Danilo],
Cinque, L.[Luigi],
Diko, A.[Anxhelo],
Fagioli, A.[Alessio],
Foresti, G.L.[Gian Luca],
Mecca, A.[Alessio],
Pannone, D.[Daniele],
Piciarelli, C.[Claudio],
MS-Faster R-CNN: Multi-Stream Backbone for Improved Faster R-CNN
Object Detection and Aerial Tracking from UAV Images,
RS(13), No. 9, 2021, pp. xx-yy.
DOI Link
2105
BibRef
Cancela, B.[Brais],
Bolón-Canedo, V.[Verónica],
Alonso-Betanzos, A.[Amparo],
E2E-FS: An End-to-End Feature Selection Method for Neural Networks,
PAMI(45), No. 7, July 2023, pp. 8311-8323.
IEEE DOI
2306
Feature extraction, Training, Convergence, Memory management, Force,
Computational modeling, Computational efficiency,
non-convex problem
BibRef
Cancela, B.[Brais],
Bolón-Canedo, V.[Verónica],
Alonso-Betanzos, A.[Amparo],
Can data placement be effective for Neural Networks classification
tasks? Introducing the Orthogonal Loss,
ICPR21(392-399)
IEEE DOI
2105
Training, Neural networks, Tools,
Classification algorithms, Proposals, Task analysis
BibRef
Jie, Z.Q.[Ze-Qun],
Sun, P.[Peng],
Li, X.[Xin],
Feng, J.S.[Jia-Shi],
Liu, W.[Wei],
Anytime Recognition with Routing Convolutional Networks,
PAMI(43), No. 6, June 2021, pp. 1875-1886.
IEEE DOI
2106
Routing, Neural networks, Task analysis, Benchmark testing,
Computational modeling, Reinforcement learning,
semantic segmentation
BibRef
Koçanaogullari, A.[Aziz],
Smedemark-Margulies, N.[Niklas],
Akcakaya, M.[Murat],
Erdogmus, D.[Deniz],
Geometric Analysis of Uncertainty Sampling for Dense Neural Network
Layer,
SPLetters(28), 2021, pp. 867-871.
IEEE DOI
2106
Uncertainty, Sampling methods, Adaptation models, Neural networks,
Training, Geometry, Task analysis, Active learning,
uncertainty sampling
BibRef
Paoletti, M.E.[Mercedes E.],
Haut, J.M.[Juan M.],
Tao, X.[Xuanwen],
Plaza, J.[Javier],
Plaza, A.[Antonio],
FLOP-Reduction Through Memory Allocations Within CNN for
Hyperspectral Image Classification,
GeoRS(59), No. 7, July 2021, pp. 5938-5952.
IEEE DOI
2106
Computational modeling, Convolution, Data models,
Feature extraction, Kernel, Hyperspectral imaging, Classification,
shift operation
BibRef
Liu, X.Y.[Xin-Yu],
Di, X.G.[Xiao-Guang],
TanhExp: A smooth activation function with high convergence speed for
lightweight neural networks,
IET-CV(15), No. 2, 2021, pp. 136-150.
DOI Link
2106
BibRef
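For reference, the TanhExp activation cited above has the closed form f(x) = x·tanh(e^x). A minimal stdlib-Python sketch; the overflow guard for large x is a convenience of this sketch, not part of the paper's definition:

```python
import math

def tanh_exp(x):
    """TanhExp activation: f(x) = x * tanh(exp(x))."""
    # For large positive x, tanh(exp(x)) saturates at 1, so return x
    # directly instead of overflowing in math.exp.
    if x > 20.0:
        return x
    return x * math.tanh(math.exp(x))
```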
Gao, H.Y.[Hong-Yang],
Wang, Z.Y.[Zheng-Yang],
Cai, L.[Lei],
Ji, S.W.[Shui-Wang],
ChannelNets: Compact and Efficient Convolutional Neural Networks via
Channel-Wise Convolutions,
PAMI(43), No. 8, August 2021, pp. 2570-2581.
IEEE DOI
2107
Convolutional codes, Image coding, Computational modeling, Kernel,
Computational efficiency, Mobile handsets,
model compression
BibRef
Hou, Y.[Yun],
Fan, H.[Hong],
Li, L.[Li],
Li, B.L.[Bai-Lin],
Adaptive learning cost-sensitive convolutional neural network,
IET-CV(15), No. 5, 2021, pp. 346-355.
DOI Link
2107
BibRef
Qiu, J.X.[Jia-Xiong],
Chen, C.[Cai],
Liu, S.C.[Shuai-Cheng],
Zhang, H.Y.[Heng-Yu],
Zeng, B.[Bing],
SlimConv: Reducing Channel Redundancy in Convolutional Neural
Networks by Features Recombining,
IP(30), 2021, pp. 6434-6445.
IEEE DOI
2107
Convolution, Computational modeling, Redundancy, Task analysis,
Kernel, Image reconstruction, Transforms, Slim convolution,
model compression
BibRef
Li, Z.Z.[Zheng-Ze],
Yang, X.Y.[Xiao-Yuan],
Shen, K.Q.[Kang-Qing],
Jiang, F.Z.[Fa-Zhen],
Jiang, J.[Jin],
Ren, H.W.[Hu-Wei],
Li, Y.X.[Yi-Xiao],
PSGU: Parametric self-circulation gating unit for deep neural
networks,
JVCIR(80), 2021, pp. 103294.
Elsevier DOI
2110
Deep learning, Neural network, Activation function, PSGU, Initialization
BibRef
Han, Y.Z.[Yi-Zeng],
Huang, G.[Gao],
Song, S.J.[Shi-Ji],
Yang, L.[Le],
Zhang, Y.T.[Yi-Tian],
Jiang, H.J.[Hao-Jun],
Spatially Adaptive Feature Refinement for Efficient Inference,
IP(30), 2021, pp. 9345-9358.
IEEE DOI
2112
Convolution, Spatial resolution, Redundancy, Adaptation models,
Computational modeling, Adaptive systems, Task analysis,
convolutional neural networks
BibRef
Gong, S.J.[Shen-Jian],
Zhang, S.S.[Shan-Shan],
Yang, J.[Jian],
Yuen, P.C.[Pong Chi],
Self-Fusion Convolutional Neural Networks,
PRL(152), 2021, pp. 50-55.
Elsevier DOI
2112
Lightweight neural networks, Efficient feature fusion, Image classification
BibRef
Mehta, S.[Sachin],
Hajishirzi, H.[Hannaneh],
Rastegari, M.[Mohammad],
DiCENet: Dimension-Wise Convolutions for Efficient Networks,
PAMI(44), No. 5, May 2022, pp. 2416-2425.
IEEE DOI
2204
Tensors, Standards, Kernel,
Convolutional codes, Task analysis, Object detection,
efficient networks
BibRef
Mehta, S.[Sachin],
Rastegari, M.[Mohammad],
Shapiro, L.[Linda],
Hajishirzi, H.[Hannaneh],
ESPNetv2: A Light-Weight, Power Efficient, and General Purpose
Convolutional Neural Network,
CVPR19(9182-9192).
IEEE DOI
2002
BibRef
Han, K.[Kai],
Wang, Y.H.[Yun-He],
Xu, C.[Chang],
Guo, J.Y.[Jian-Yuan],
Xu, C.J.[Chun-Jing],
Wu, E.[Enhua],
Tian, Q.[Qi],
GhostNets on Heterogeneous Devices via Cheap Operations,
IJCV(130), No. 1, January 2022, pp. 1050-1069.
Springer DOI
2204
Code, Neural Networks.
WWW Link.
WWW Link.
BibRef
Schonsheck, S.C.[Stefan C.],
Dong, B.[Bin],
Lai, R.J.[Rong-Jie],
Parallel Transport Convolution:
Deformable Convolutional Networks on Manifold-Structured Data,
SIIMS(15), No. 1, 2022, pp. 367-386.
DOI Link
2204
Generalizes convolutions to three-dimensional surfaces.
Aids in implementing wavelets and CNNs on surfaces.
BibRef
Han, K.[Kai],
Wang, Y.H.[Yun-He],
Xu, C.[Chang],
Xu, C.J.[Chun-Jing],
Wu, E.[Enhua],
Tao, D.C.[Da-Cheng],
Learning Versatile Convolution Filters for Efficient Visual
Recognition,
PAMI(44), No. 11, November 2022, pp. 7731-7746.
IEEE DOI
2210
Convolution, Neural networks, Quantization (signal),
Computational modeling, Convolutional neural networks, versatile filters
BibRef
Liu, C.[Chang],
Zhang, X.S.[Xi-Shan],
Zhang, R.[Rui],
Li, L.[Ling],
Zhou, S.Y.[Shi-Yi],
Huang, D.[Di],
Li, Z.[Zhen],
Du, Z.D.[Zi-Dong],
Liu, S.L.[Shao-Li],
Chen, T.S.[Tian-Shi],
Rethinking the Importance of Quantization Bias, Toward Full Low-Bit
Training,
IP(31), 2022, pp. 7006-7019.
IEEE DOI
2212
Quantize all levels of the network to reduce computational cost. Works better
than expected.
Quantization (signal), Training, Computer crashes,
Convolutional neural networks, Machine translation, quantization
BibRef
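The annotation above concerns quantizing every stage of training to low bit-width. As background, a generic uniform symmetric per-tensor quantizer, a hedged stdlib-Python sketch of the kind of primitive such work builds on, not the paper's scheme, looks like:

```python
def quantize_symmetric(values, bits=8):
    """Quantize a list of floats to signed `bits`-bit integers with a
    shared per-tensor scale, then de-quantize.
    Returns (dequantized values, scale)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    peak = max(abs(v) for v in values)
    scale = peak / qmax if peak > 0 else 1.0
    dequant = []
    for v in values:
        q = max(-qmax, min(qmax, round(v / scale)))  # clip to int range
        dequant.append(q * scale)
    return dequant, scale
```

The round-trip error is bounded by half the scale, which is what makes low bit-widths workable when a tensor's dynamic range is small.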
Yang, T.J.N.[Tao-Jian-Nan],
Zhu, S.J.[Si-Jie],
Mendieta, M.[Matias],
Wang, P.[Pu],
Balakrishnan, R.[Ravikumar],
Lee, M.W.[Min-Woo],
Han, T.[Tao],
Shah, M.[Mubarak],
Chen, C.[Chen],
MutualNet: Adaptive ConvNet via Mutual Learning From Different Model
Configurations,
PAMI(45), No. 1, January 2023, pp. 811-827.
IEEE DOI
2212
Training, Adaptation models, Task analysis, Adaptive systems,
Computational modeling, Complexity theory, Neural networks, deep learning
BibRef
Yang, T.J.N.[Tao-Jian-Nan],
Zhu, S.J.[Si-Jie],
Chen, C.[Chen],
Yan, S.[Shen],
Zhang, M.[Mi],
Willis, A.[Andrew],
MutualNet: Adaptive Convnet via Mutual Learning from Network Width and
Resolution,
ECCV20(I:299-315).
Springer DOI
2011
Code, ConvNet.
WWW Link.
Executable with dynamic resources.
BibRef
Jahani-Nezhad, T.[Tayyebeh],
Maddah-Ali, M.A.[Mohammad Ali],
Berrut Approximated Coded Computing:
Straggler Resistance Beyond Polynomial Computing,
PAMI(45), No. 1, January 2023, pp. 111-122.
IEEE DOI
2212
Training with parallel systems.
Interpolation, Servers, Codes, Computational modeling, Task analysis,
Numerical models, Encoding, Distributed learning, coded computing
BibRef
Wu, Y.M.[Yi-Ming],
Li, R.X.[Rui-Xiang],
Yu, Y.L.[Yun-Long],
Li, X.[Xi],
Reparameterized attention for convolutional neural networks,
PRL(164), 2022, pp. 89-95.
Elsevier DOI
2212
Attention mechanism, Bayesian variational inference,
Reparameterization, Uncertainty, Batch shaping
BibRef
Xi, Y.[Yue],
Jia, W.J.[Wen-Jing],
Miao, Q.G.[Qi-Guang],
Liu, X.Z.[Xiang-Zeng],
Fan, X.C.[Xiao-Chen],
Lou, J.[Jian],
DyCC-NEt: Dynamic Context Collection Network for Input-Aware
Drone-View Object Detection,
RS(14), No. 24, 2022, pp. xx-yy.
DOI Link
2212
For deployment on lightweight vehicles such as UAVs.
BibRef
Li, C.L.[Chang-Lin],
Wang, G.R.[Guang-Run],
Wang, B.[Bing],
Liang, X.D.[Xiao-Dan],
Li, Z.H.[Zhi-Hui],
Chang, X.J.[Xiao-Jun],
DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and
Vision Transformers,
PAMI(45), No. 4, April 2023, pp. 4430-4446.
IEEE DOI
2303
Training, Logic gates, Routing, Transformers, Neural networks,
Optimization, Adaptive inference, dynamic networks, vision transformer
BibRef
Wang, L.G.[Long-Guang],
Guo, Y.L.[Yu-Lan],
Dong, X.Y.[Xiao-Yu],
Wang, Y.Q.[Ying-Qian],
Ying, X.Y.[Xin-Yi],
Lin, Z.P.[Zai-Ping],
An, W.[Wei],
Exploring Fine-Grained Sparsity in Convolutional Neural Networks for
Efficient Inference,
PAMI(45), No. 4, April 2023, pp. 4474-4493.
IEEE DOI
2303
Point cloud compression, Semantics, Biological neural networks,
Task analysis, Neurons, Image segmentation, Costs, Neural network,
stereo matching
BibRef
Wang, S.[Siyu],
Li, W.P.[Wei-Peng],
Lu, R.T.[Rui-Tao],
Yang, X.G.[Xiao-Gang],
Xi, J.X.[Jian-Xiang],
Gao, J.[Jiuan],
Neural network acceleration methods via selective activation,
IET-CV(17), No. 3, 2023, pp. 295-308.
DOI Link
2305
convolutional neural nets, image processing, neural net architecture
BibRef
Xiao, P.H.[Peng-Hao],
Xu, T.[Teng],
Xiao, X.Y.[Xia-Yang],
Li, W.S.[Wei-Song],
Wang, H.P.[Hai-Peng],
Distillation Sparsity Training Algorithm for Accelerating
Convolutional Neural Networks in Embedded Systems,
RS(15), No. 10, 2023, pp. xx-yy.
DOI Link
2306
BibRef
Haider, U.[Usman],
Hanif, M.[Muhammad],
Rashid, A.[Ahmar],
Hussain, S.F.[Syed Fawad],
Dictionary-enabled efficient training of ConvNets for image
classification,
IVC(135), 2023, pp. 104718.
Elsevier DOI
2306
Sparse representation, Convolution neural networks,
Deep learning, Dictionary learning, Image classification
BibRef
Huang, L.[Lei],
Qin, J.[Jie],
Zhou, Y.[Yi],
Zhu, F.[Fan],
Liu, L.[Li],
Shao, L.[Ling],
Normalization Techniques in Training DNNs: Methodology, Analysis and
Application,
PAMI(45), No. 8, August 2023, pp. 10173-10196.
IEEE DOI
2307
Training, Optimization, Covariance matrices, Task analysis, Tensors,
Decorrelation, Biological neural networks, Batch normalization,
weight normalization
BibRef
Zhang, H.[Hu],
Zu, K.[Keke],
Lu, J.[Jian],
Zou, Y.[Yuru],
Meng, D.Y.[De-Yu],
Epsanet: An Efficient Pyramid Squeeze Attention Block on Convolutional
Neural Network,
ACCV22(III:541-557).
Springer DOI
2307
BibRef
Dong, M.J.[Min-Jing],
Chen, X.H.[Xing-Hao],
Wang, Y.H.[Yun-He],
Xu, C.[Chang],
Improving Lightweight AdderNet via Distillation from L2 to L1-norm,
IP(32), 2023, pp. 5524-5536.
IEEE DOI
2310
Adder Neural Networks (ANNs) are proposed to replace expensive
multiplication operations in Convolutional Neural Networks (CNNs) with
cheap additions.
BibRef
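The substitution the annotation above describes, output as the negative L1 distance between filter and input patch so the forward pass needs only additions, can be sketched in plain Python. Illustrative toy code only (single channel, stride 1, no padding), not the AdderNet reference implementation:

```python
def adder_conv2d_single(x, w):
    """'Adder' convolution on a 2-D list x with kernel w: each output
    element is the negative L1 distance between w and the input patch,
    replacing the multiply-accumulate of ordinary cross-correlation."""
    kh, kw = len(w), len(w[0])
    out = []
    for i in range(len(x) - kh + 1):
        row = []
        for j in range(len(x[0]) - kw + 1):
            dist = sum(abs(x[i + a][j + b] - w[a][b])
                       for a in range(kh) for b in range(kw))
            row.append(-dist)  # negative distance: larger = better match
        out.append(row)
    return out
```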
Zhang, H.[Hao],
Lai, S.Q.[Shen-Qi],
Wang, Y.X.[Ya-Xiong],
Da, Z.Y.[Zong-Yang],
Dun, Y.J.[Yu-Jie],
Qian, X.M.[Xue-Ming],
SCGNet: Shifting and Cascaded Group Network,
CirSysVideo(33), No. 9, September 2023, pp. 4997-5008.
IEEE DOI
2310
BibRef
Sepehri, Y.M.[Ya-Min],
Pad, P.[Pedram],
Kündig, C.[Clément],
Frossard, P.[Pascal],
Dunbar, L.A.[L. Andrea],
Privacy-Preserving Image Acquisition for Neural Vision Systems,
MultMed(25), 2023, pp. 6232-6244.
IEEE DOI
2311
BibRef
Pad, P.[Pedram],
Narduzzi, S.[Simon],
Kündig, C.[Clément],
Türetken, E.[Engin],
Bigdeli, S.A.[Siavash A.],
Dunbar, L.A.[L. Andrea],
Efficient Neural Vision Systems Based on Convolutional Image
Acquisition,
CVPR20(12282-12291)
IEEE DOI
2008
Optical imaging, Convolution, Optical sensors, Kernel,
Optical computing, Optical network units, Optical filters
BibRef
Liu, M.[Min],
Zhou, C.C.[Chang-Chun],
Qiu, S.Y.[Si-Yuan],
He, Y.F.[Yi-Fan],
Jiao, H.L.[Hai-Long],
CNN Accelerator at the Edge With Adaptive Zero Skipping and
Sparsity-Driven Data Flow,
CirSysVideo(33), No. 12, December 2023, pp. 7084-7095.
IEEE DOI
2312
BibRef
Lin, S.H.[Shao-Hui],
Ji, B.[Bo],
Ji, R.R.[Rong-Rong],
Yao, A.[Angela],
A closer look at branch classifiers of multi-exit architectures,
CVIU(239), 2024, pp. 103900.
Elsevier DOI
2402
Multi-exit architectures, Knowledge consistency,
Branch classifiers, Model compression and acceleration
BibRef
Metta, C.[Carlo],
Fantozzi, M.[Marco],
Papini, A.[Andrea],
Amato, G.[Gianluca],
Bergamaschi, M.[Matteo],
Galfrè, S.G.[Silvia Giulia],
Marchetti, A.[Alessandro],
Vegliò, M.[Michelangelo],
Parton, M.[Maurizio],
Morandin, F.[Francesco],
Increasing biases can be more efficient than increasing weights,
WACV24(2798-2807)
IEEE DOI
2404
Weight measurement, Analytical models, Source coding,
Computational modeling, Neural networks, Focusing, Algorithms
BibRef
Pu, Y.F.[Yi-Fan],
Han, Y.Z.[Yi-Zeng],
Wang, Y.L.[Yu-Lin],
Feng, J.L.[Jun-Lan],
Deng, C.[Chao],
Huang, G.[Gao],
Fine-Grained Recognition With Learnable Semantic Data Augmentation,
IP(33), 2024, pp. 3130-3144.
IEEE DOI Code:
WWW Link.
2405
Data augmentation, Semantics, Training, Visualization, Metalearning,
Covariance matrices, Task analysis, Fine-grained recognition,
deep learning
BibRef
Han, Y.Z.[Yi-Zeng],
Han, D.C.[Dong-Chen],
Liu, Z.Y.[Ze-Yu],
Wang, Y.L.[Yu-Lin],
Pan, X.R.[Xu-Ran],
Pu, Y.F.[Yi-Fan],
Deng, C.[Chao],
Feng, J.L.[Jun-Lan],
Song, S.J.[Shi-Ji],
Huang, G.[Gao],
Dynamic Perceiver for Efficient Visual Recognition,
ICCV23(5969-5979)
IEEE DOI Code:
WWW Link.
2401
BibRef
Liu, Y.[Ying],
Xue, J.H.[Jia-Hao],
Li, D.X.[Da-Xiang],
Zhang, W.D.[Wei-Dong],
Chiew, T.K.[Tuan Kiang],
Xu, Z.J.[Zhi-Jie],
Image recognition based on lightweight convolutional neural network:
Recent advances,
IVC(146), 2024, pp. 105037.
Elsevier DOI
2405
Image recognition, Lightweight network, Model compression,
Optimization of lightweight network, Transformer
BibRef
Zhang, H.[Hao],
Dun, Y.J.[Yu-Jie],
Pei, Y.X.[Yi-Xuan],
Lai, S.Q.[Shen-Qi],
Liu, C.X.[Cheng-Xu],
Zhang, K.P.[Kai-Peng],
Qian, X.M.[Xue-Ming],
HF-HRNet: A Simple Hardware Friendly High-Resolution Network,
CirSysVideo(34), No. 8, August 2024, pp. 7699-7711.
IEEE DOI Code:
WWW Link.
2408
Pose estimation, Solid modeling, Mobile handsets, Kernel, Delays,
Convolution, Network architecture, Human pose estimation, networks
BibRef
Wang, S.[Shiye],
Feng, K.[Kaituo],
Li, C.S.[Chang-Sheng],
Yuan, Y.[Ye],
Wang, G.R.[Guo-Ren],
Learning to Generate Parameters of ConvNets for Unseen Image Data,
IP(33), 2024, pp. 5577-5592.
IEEE DOI
2410
Training, Task analysis, Correlation, Metalearning,
Graphics processing units, Vectors, Adaptive systems,
adaptive hyper-recurrent units
BibRef
Zhu, Z.[Zeqi],
Pourtaherian, A.[Arash],
Waeijen, L.[Luc],
Akkaya, I.B.[Ibrahim Batuhan],
Bondarev, E.[Egor],
Moreira, O.[Orlando],
CATS: Combined Activation and Temporal Suppression for Efficient
Network Inference,
WACV24(8151-8160)
IEEE DOI
2404
Training, Energy consumption, Program processors,
Computational modeling, Redundancy, Energy conservation,
Smartphones / end user devices
BibRef
Jeon, J.Y.[Ji-Ye],
Nguyen, X.T.[Xuan Truong],
Ryu, S.[Soojung],
Lee, H.J.[Hyuk-Jae],
USDN: A Unified Sample-wise Dynamic Network with Mixed-Precision and
Early-Exit,
WACV24(635-643)
IEEE DOI
2404
Degradation, Quantization (signal), Costs, Computational modeling,
Artificial neural networks, Complexity theory, Algorithms,
Embedded sensing / real-time techniques
BibRef
Quélennec, A.[Aël],
Tartaglione, E.[Enzo],
Mozharovskyi, P.[Pavlo],
Nguyen, V.T.[Van-Tam],
Towards On-Device Learning on the Edge: Ways to Select Neurons to
Update Under a Budget Constraint,
IoTDesign24(685-694)
IEEE DOI
2404
Backpropagation, Costs, Computational modeling, Neurons, Memory management
BibRef
Mori, P.[Pierpaolo],
Frickenstein, L.[Lukas],
Sampath, S.B.[Shambhavi Balamuthu],
Thoma, M.[Moritz],
Fasfous, N.[Nael],
Vemparala, M.R.[Manoj Rohit],
Frickenstein, A.[Alexander],
Unger, C.[Christian],
Stechele, W.[Walter],
Mueller-Gritschneder, D.[Daniel],
Passerone, C.[Claudio],
Wino Vidi Vici: Conquering Numerical Instability of 8-bit Winograd
Convolution for Accurate Inference Acceleration on Edge,
WACV24(53-62)
IEEE DOI
2404
Training, Degradation, Quantization (signal), Convolution,
Transforms, Inference algorithms, Numerical models, Algorithms
BibRef
Haque, M.[Mirazul],
Chen, S.[Simin],
Haque, W.[Wasif],
Liu, C.[Cong],
Yang, W.[Wei],
AntiNODE: Evaluating Efficiency Robustness of Neural ODEs,
REDLCV23(1499-1509)
IEEE DOI
2401
BibRef
Huang, Z.P.[Zhi-Peng],
Zhang, Z.Z.[Zhi-Zheng],
Lan, C.L.[Cui-Ling],
Zha, Z.J.[Zheng-Jun],
Lu, Y.[Yan],
Guo, B.[Baining],
Adaptive Frequency Filters As Efficient Global Token Mixers,
ICCV23(6026-6036)
IEEE DOI Code:
WWW Link.
2401
BibRef
Park, S.[Song],
Chun, S.[Sanghyuk],
Heo, B.[Byeongho],
Kim, W.[Wonjae],
Yun, S.[Sangdoo],
SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel
Storage,
ICCV23(17202-17213)
IEEE DOI Code:
WWW Link.
2401
BibRef
Ancilotto, A.[Alberto],
Paissan, F.[Francesco],
Farella, E.[Elisabetta],
XiNet: Efficient Neural Networks for tinyML,
ICCV23(16922-16931)
IEEE DOI
2401
BibRef
Zhang, J.N.[Jiang-Ning],
Li, X.T.[Xiang-Tai],
Li, J.[Jian],
Liu, L.[Liang],
Xue, Z.[Zhucun],
Zhang, B.[Boshen],
Jiang, Z.K.[Zheng-Kai],
Huang, T.X.[Tian-Xin],
Wang, Y.[Yabiao],
Wang, C.J.[Cheng-Jie],
Rethinking Mobile Block for Efficient Attention-based Models,
ICCV23(1389-1400)
IEEE DOI
2401
BibRef
Lazzaro, D.[Dario],
Cinà, A.E.[Antonio Emanuele],
Pintor, M.[Maura],
Demontis, A.[Ambra],
Biggio, B.[Battista],
Roli, F.[Fabio],
Pelillo, M.[Marcello],
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware
Training,
CIAP23(II:515-526).
Springer DOI
2312
BibRef
Karpikova, P.[Polina],
Radionova, E.[Ekaterina],
Yaschenko, A.[Anastasia],
Spiridonov, A.[Andrei],
Kostyushko, L.[Leonid],
Fabbricatore, R.[Riccardo],
Ivakhnenko, A.[Aleksei],
FIANCEE: Faster Inference of Adversarial Networks via Conditional
Early Exits,
CVPR23(12032-12043)
IEEE DOI
2309
BibRef
Li, J.F.[Jia-Feng],
Wen, Y.[Ying],
He, L.H.[Liang-Hua],
SCConv: Spatial and Channel Reconstruction Convolution for Feature
Redundancy,
CVPR23(6153-6162)
IEEE DOI
2309
BibRef
Endo, T.[Takeshi],
Kaji, S.[Seigo],
Matono, H.[Haruki],
Takemura, M.[Masayuki],
Shima, T.[Takeshi],
Re-Parameterization Making GC-Net-Style 3dconvnets More Efficient,
ACCV22(I:311-325).
Springer DOI
2307
BibRef
Bertrand, T.[Théo],
Makaroff, N.[Nicolas],
Cohen, L.D.[Laurent D.],
Fast Marching Energy CNN,
SSVM23(276-287).
Springer DOI
2307
BibRef
Cannella, C.[Chris],
Tarokh, V.[Vahid],
Semi-Empirical Objective Functions for MCMC Proposal Optimization,
ICPR22(4758-4764)
IEEE DOI
2212
Weight measurement, Training, Linear programming,
Behavioral sciences, Proposals, Optimization
BibRef
Dai, L.J.[Ling-Jun],
Zhang, Q.T.[Qing-Tian],
Wu, H.Q.[Hua-Qiang],
Improving the accuracy of neural networks in analog
computing-in-memory systems by analog weight,
ICPR22(2971-2978)
IEEE DOI
2212
Degradation, Weight measurement, Performance evaluation,
Quantization (signal), Neural networks, Programming, Energy efficiency
BibRef
Zhou, H.[Han],
Ashrafi, A.[Aida],
Blaschko, M.B.[Matthew B.],
Combinatorial optimization for low bit-width neural networks,
ICPR22(2246-2252)
IEEE DOI
2212
Training, Neural networks, Nonhomogeneous media, Hardware,
Data models, Risk management
BibRef
Trusov, A.[Anton],
Limonova, E.[Elena],
Nikolaev, D.[Dmitry],
Arlazarov, V.V.[Vladimir V.],
Fast matrix multiplication for binary and ternary CNNs on ARM CPU,
ICPR22(3176-3182)
IEEE DOI
2212
Neon, Neural networks, Memory management, Inference algorithms,
Mobile handsets, Libraries, Computational efficiency
BibRef
Ignatov, A.[Andrey],
Malivenko, G.[Grigory],
Timofte, R.[Radu],
Tseng, Y.[Yu],
Xu, Y.S.[Yu-Syuan],
Yu, P.H.[Po-Hsiang],
Chiang, C.M.[Cheng-Ming],
Kuo, H.K.[Hsien-Kai],
Chen, M.H.[Min-Hung],
Cheng, C.M.[Chia-Ming],
Van Gool, L.J.[Luc J.],
PyNet-V2 Mobile:
Efficient On-Device Photo Processing With Neural Networks,
ICPR22(677-684)
IEEE DOI
2212
Performance evaluation, Visualization, Pipelines,
Cameras, Mobile handsets, Software
BibRef
Liang, Z.W.[Zhi-Wei],
Zhou, Y.[Yuezhi],
Dispense Mode for Inference to Accelerate Branchynet,
ICIP22(1246-1250)
IEEE DOI
2211
Improves inference time by dropping samples, at a quality trade-off.
Deep learning, Computational modeling, Neural networks, Throughput,
Internet of Things, deep neural networks, model optimization
BibRef
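The dispense-mode entry above builds on BranchyNet-style early exit: a sample leaves at the first intermediate classifier that is confident enough, and only hard samples reach the final, most expensive exit. A hedged sketch of that baseline mechanism in plain Python; the fixed confidence threshold is a common choice, not the paper's exact policy:

```python
def early_exit_predict(x, branches, threshold=0.9):
    """Run exit branches in order; each branch maps the input to a list
    of class probabilities. Return (predicted class, exit index): exit
    at the first branch whose top probability reaches the threshold,
    falling back to the last branch otherwise."""
    for k, branch in enumerate(branches):
        probs = branch(x)
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold or k == len(branches) - 1:
            return best, k
```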
Duggal, R.[Rahul],
Zhou, H.[Hao],
Yang, S.[Shuo],
Fang, J.[Jun],
Xiong, Y.J.[Yuan-Jun],
Xia, W.[Wei],
Towards Regression-Free Neural Networks for Diverse Compute Platforms,
ECCV22(XXXVII:598-614).
Springer DOI
2211
BibRef
Yong, H.W.[Hong-Wei],
Zhang, L.[Lei],
An Embedded Feature Whitening Approach to Deep Neural Network
Optimization,
ECCV22(XXIII:334-351).
Springer DOI
2211
BibRef
Kwan, H.M.[Ho Man],
Song, S.[Shenghui],
SSBNet: Improving Visual Recognition Efficiency by Adaptive Sampling,
ECCV22(XXI:229-244).
Springer DOI
2211
BibRef
Upadhyay, U.[Uddeshya],
Karthik, S.[Shyamgopal],
Chen, Y.B.[Yan-Bei],
Mancini, M.[Massimiliano],
Akata, Z.[Zeynep],
BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen
Neural Networks,
ECCV22(XII:299-317).
Springer DOI
2211
BibRef
Lu, Y.[Yao],
Yang, W.[Wen],
Zhang, Y.Z.[Yun-Zhe],
Chen, Z.H.[Zuo-Hui],
Chen, J.Y.[Jin-Yin],
Xuan, Q.[Qi],
Wang, Z.[Zhen],
Yang, X.[Xiaoniu],
Understanding the Dynamics of DNNs Using Graph Modularity,
ECCV22(XII:225-242).
Springer DOI
2211
BibRef
Molchanov, P.[Pavlo],
Hall, J.[Jimmy],
Yin, H.X.[Hong-Xu],
Kautz, J.[Jan],
Fusi, N.[Nicolo],
Vahdat, A.[Arash],
LANA: Latency Aware Network Acceleration,
ECCV22(XII:137-156).
Springer DOI
2211
BibRef
Han, Y.Z.[Yi-Zeng],
Pu, Y.F.[Yi-Fan],
Lai, Z.H.[Zi-Hang],
Wang, C.F.[Chao-Fei],
Song, S.J.[Shi-Ji],
Cao, J.F.[Jun-Feng],
Huang, W.H.[Wen-Hui],
Deng, C.[Chao],
Huang, G.[Gao],
Learning to Weight Samples for Dynamic Early-Exiting Networks,
ECCV22(XI:362-378).
Springer DOI
2211
BibRef
Hu, Q.H.[Qing-Hao],
Li, G.[Gang],
Wu, Q.[Qiman],
Cheng, J.[Jian],
PalQuant:
Accelerating High-Precision Networks on Low-Precision Accelerators,
ECCV22(XI:312-327).
Springer DOI
2211
BibRef
Yang, X.C.[Xue-Can],
Chaudhuri, S.[Sumanta],
Likforman, L.[Laurence],
Naviner, L.[Lirida],
MinConvNets: A New Class of Multiplication-Less Neural Networks,
ICIP22(881-885)
IEEE DOI
2211
Training, Correlation, Quantization (signal), Transfer learning,
Neural networks, Hardware
BibRef
Shipard, J.[Jordan],
Wiliem, A.[Arnold],
Fookes, C.[Clinton],
Does Interference Exist When Training a Once-For-All Network?,
EVW22(3618-3627)
IEEE DOI
2210
WWW Link.
Training, Codes, Sociology, Neural networks, Interference
BibRef
Zhang, H.[Hang],
Wu, C.[Chongruo],
Zhang, Z.Y.[Zhong-Yue],
Zhu, Y.[Yi],
Lin, H.B.[Hai-Bin],
Zhang, Z.[Zhi],
Sun, Y.[Yue],
He, T.[Tong],
Mueller, J.[Jonas],
Manmatha, R.,
Li, M.[Mu],
Smola, A.[Alexander],
ResNeSt: Split-Attention Networks,
ECV22(2735-2745)
IEEE DOI
2210
Deep learning, Computational modeling,
Convolutional neural networks
BibRef
Chen, J.[Jierun],
He, T.L.[Tian-Lang],
Zhuo, W.P.[Wei-Peng],
Ma, L.[Li],
Ha, S.[Sangtae],
Chan, S.H.G.[S.H. Gary],
TVConv: Efficient Translation Variant Convolution for Layout-aware
Visual Processing,
CVPR22(12538-12548)
IEEE DOI
2210
WWW Link.
Training, Visualization, Convolution, Face recognition, Microscopy,
Network architecture, Vision applications and systems
BibRef
Hu, M.[Mu],
Feng, J.[Junyi],
Hua, J.S.[Jia-Shen],
Lai, B.[Baisheng],
Huang, J.Q.[Jian-Qiang],
Gong, X.J.[Xiao-Jin],
Hua, X.S.[Xian-Sheng],
Online Convolutional Reparameterization,
CVPR22(558-567)
IEEE DOI
2210
Training, Convolutional codes, Costs, Convolution,
Biological system modeling, Computational modeling,
Efficient learning and inferences
BibRef
Ding, X.H.[Xiao-Han],
Chen, H.H.[Hong-Hao],
Zhang, X.Y.[Xiang-Yu],
Han, J.G.[Jun-Gong],
Ding, G.[Guiguang],
RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality,
CVPR22(568-577)
IEEE DOI
2210
Convolutional codes, Image recognition, Computational modeling,
Semantics, Merging, Feature extraction,
Deep learning architectures and techniques
BibRef
Eisenberger, M.[Marvin],
Toker, A.[Aysim],
Leal-Taixé, L.[Laura],
Bernard, F.[Florian],
Cremers, D.[Daniel],
A Unified Framework for Implicit Sinkhorn Differentiation,
CVPR22(499-508)
IEEE DOI
2210
Training, Costs, Neural networks, Graphics processing units,
Organizations, Approximation algorithms, Optimization methods, Machine learning
BibRef
Wang, L.G.[Long-Guang],
Dong, X.Y.[Xiao-Yu],
Wang, Y.Q.[Ying-Qian],
Liu, L.[Li],
An, W.[Wei],
Guo, Y.L.[Yu-Lan],
Learnable Lookup Table for Neural Network Quantization,
CVPR22(12413-12423)
IEEE DOI
2210
Training, Point cloud compression, Quantization (signal),
Neural networks, Superresolution, Computational efficiency,
Low-level vision
BibRef
Shen, L.[Lulan],
Ziaeefard, M.[Maryam],
Meyer, B.[Brett],
Gross, W.[Warren],
Clark, J.J.[James J.],
Conjugate Adder Net (CAddNet) - a Space-Efficient Approximate CNN,
ECV22(2792-2796)
IEEE DOI
2210
Training, Deep learning, Neural networks, Logic gates,
Complexity theory
BibRef
Cho, Y.S.[Yoo-Shin],
Cho, H.[Hanbyel],
Kim, Y.S.[Young-Soo],
Kim, J.[Junmo],
Improving Generalization of Batch Whitening by Convolutional Unit
Optimization,
ICCV21(5301-5309)
IEEE DOI
2203
Transforms input features to have zero mean and unit variance.
Convolutional codes, Training, Correlation, Dogs, Transforms,
Stability analysis, Decorrelation, Recognition and classification
BibRef
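The zero-mean, unit-variance transform noted above is the standard standardization step of batch whitening; a minimal NumPy sketch (per-feature standardization only — full batch whitening also decorrelates features, which is omitted here):

```python
import numpy as np

def standardize(x, eps=1e-5):
    """Per-feature zero-mean, unit-variance transform over the batch axis."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 8))  # biased, scaled batch
xw = standardize(x)  # each column now has mean ~0 and std ~1
```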
Khani, M.[Mehrdad],
Hamadanian, P.[Pouya],
Nasr-Esfahany, A.[Arash],
Alizadeh, M.[Mohammad],
Real-Time Video Inference on Edge Devices via Adaptive Model
Streaming,
ICCV21(4552-4562)
IEEE DOI
2203
Performance evaluation, Training, Adaptation models,
Computational modeling, Semantics, Bandwidth, Streaming media,
grouping and shape
BibRef
Liu, J.[Jie],
Li, C.[Chuming],
Liang, F.[Feng],
Lin, C.[Chen],
Sun, M.[Ming],
Yan, J.J.[Jun-Jie],
Ouyang, W.L.[Wan-Li],
Xu, D.[Dong],
Inception Convolution with Efficient Dilation Search,
CVPR21(11481-11490)
IEEE DOI
2111
Image segmentation, Image recognition, Convolution,
Pose estimation, Object detection, Performance gain
BibRef
Feng, J.W.[Jian-Wei],
Huang, D.[Dong],
Optimal Gradient Checkpoint Search for Arbitrary Computation Graphs,
CVPR21(11428-11437)
IEEE DOI
2111
Training, Costs, Tensors, Image resolution, Memory management,
Graphics processing units, Manuals
BibRef
Malinowski, M.[Mateusz],
Vytiniotis, D.[Dimitrios],
Swirszcz, G.[Grzegorz],
Patraucean, V.[Viorica],
Carreira, J.[João],
Gradient Forward-Propagation for Large-Scale Temporal Video Modelling,
CVPR21(9245-9255)
IEEE DOI
2111
Training, Couplings,
Computational modeling, Parallel processing, Streaming media, Feature extraction
BibRef
Ghodrati, A.[Amir],
Bejnordi, B.E.[Babak Ehteshami],
Habibian, A.[Amirhossein],
FrameExit: Conditional Early Exiting for Efficient Video Recognition,
CVPR21(15603-15613)
IEEE DOI
2111
Costs, Computational modeling, Logic gates,
Benchmark testing, Network architecture
BibRef
Li, H.D.[Heng-Duo],
Wu, Z.X.[Zu-Xuan],
Shrivastava, A.[Abhinav],
Davis, L.S.[Larry S.],
2D or not 2D? Adaptive 3D Convolution Selection for Efficient Video
Recognition,
CVPR21(6151-6160)
IEEE DOI
2111
Solid modeling, Gradient methods, Computational modeling, Predictive models
BibRef
Zhou, X.[Xiao],
Zhang, W.Z.[Wei-Zhong],
Xu, H.[Hang],
Zhang, T.[Tong],
Effective Sparsification of Neural Networks with Global Sparsity
Constraint,
CVPR21(3598-3607)
IEEE DOI
2111
Weight measurement, Training, Neural networks, Redundancy, Manuals,
Tools, Probabilistic logic
BibRef
Zhang, M.[Mingda],
Chu, C.T.[Chun-Te],
Zhmoginov, A.[Andrey],
Howard, A.[Andrew],
Jou, B.[Brendan],
Zhu, Y.K.[Yu-Kun],
Zhang, L.[Li],
Hwa, R.[Rebecca],
Kovashka, A.[Adriana],
BasisNet: Two-stage Model Synthesis for Efficient Inference,
ECV21(3075-3084)
IEEE DOI
2109
efficient neural network architectures, conditional computation, and
early termination.
Training, Computational modeling,
Neural networks, Predictive models
BibRef
Zhang, C.[Chen],
Xu, Y.H.[Ying-Hao],
Shen, Y.J.[Yu-Jun],
CompConv: A Compact Convolution Module for Efficient Feature Learning,
ECV21(3006-3015)
IEEE DOI
2109
Convolution, Computational modeling,
Benchmark testing, Computational efficiency
BibRef
Chin, T.W.[Ting-Wu],
Marculescu, D.[Diana],
Morcos, A.S.[Ari S.],
Width transfer: on the (in)variance of width optimization,
ECV21(2984-2993)
IEEE DOI
2109
Training, Design methodology,
Training data, Optimization methods, Network architecture
BibRef
Yang, H.J.[Hao-Jin],
Shen, Z.[Zhen],
Zhao, Y.C.[Yu-Cheng],
AsymmNet: Towards ultralight convolution neural networks using
asymmetrical bottlenecks,
MAI21(2339-2348)
IEEE DOI
2109
Convolutional codes,
Computational modeling, Neural networks, Computer architecture,
Pattern recognition
BibRef
Elhoushi, M.[Mostafa],
Chen, Z.H.[Zi-Hao],
Shafiq, F.[Farhan],
Tian, Y.H.[Ye Henry],
Li, J.Y.W.[Joey Yi-Wei],
DeepShift: Towards Multiplication-Less Neural Networks,
MAI21(2359-2368)
IEEE DOI
2109
Training, Convolutional codes, Computational modeling,
Neural networks, Graphics processing units, Mobile handsets,
Pattern recognition
BibRef
Hong, M.F.[Min-Fong],
Chen, H.Y.[Hao-Yun],
Chen, M.H.[Min-Hung],
Xu, Y.S.[Yu-Syuan],
Kuo, H.K.[Hsien-Kai],
Tsai, Y.M.[Yi-Min],
Chen, H.J.[Hung-Jen],
Jou, K.[Kevin],
Network Space Search for Pareto-Efficient Spaces,
ECV21(3047-3056)
IEEE DOI
2109
Error analysis, Focusing, Manuals,
Search problems
BibRef
Lou, W.[Wei],
Xun, L.[Lei],
Sabet, A.[Amin],
Bi, J.[Jia],
Hare, J.[Jonathon],
Merrett, G.V.[Geoff V.],
Dynamic-OFA: Runtime DNN Architecture Switching for Performance
Scaling on Heterogeneous Embedded Platforms,
ECV21(3104-3112)
IEEE DOI
2109
Training, Computational modeling,
Pipelines, Graphics processing units, Switches
BibRef
Avalos-López, J.I.[Jorge Ivan],
Rojas-Domínguez, A.[Alfonso],
Ornelas-Rodríguez, M.[Manuel],
Carpio, M.[Martín],
Valdez, S.I.[S. Ivvan],
Efficient Training of Deep Learning Models Through Improved Adaptive
Sampling,
MCPR21(141-152).
Springer DOI
2108
BibRef
Cai, S.F.[Shao-Feng],
Shu, Y.[Yao],
Wang, W.[Wei],
Dynamic Routing Networks,
WACV21(3587-3596)
IEEE DOI
2106
Training, Visualization, Computational modeling,
Neural networks, Computer architecture
BibRef
Ikami, D.[Daiki],
Irie, G.[Go],
Shibata, T.[Takashi],
Constrained Weight Optimization for Learning without Activation
Normalization,
WACV21(2605-2613)
IEEE DOI
2106
Deep learning, Perturbation methods,
MIMICs, Benchmark testing, Explosions
BibRef
Zhang, Y.[Yu],
Wu, X.Y.[Xiao-Yu],
Zhu, R.L.[Ruo-Lin],
Adaptive Word Embedding Module for Semantic Reasoning in Large-scale
Detection,
ICPR21(2103-2109)
IEEE DOI
2105
External semantic information for CNNs.
Adaptive systems, Annotations, Image edge detection, Semantics,
Object detection, Cognition, object detection,
knowledge transfer
BibRef
Barlaud, M.[Michel],
Guyard, F.[Frédéric],
Learning sparse deep neural networks using efficient structured
projections on convex constraints for green AI,
ICPR21(1566-1573)
IEEE DOI
2105
Training, Gradient methods, Neural networks,
Computational efficiency, Projection algorithms, Artificial intelligence
BibRef
Chitsaz, K.[Kamran],
Hajabdollahi, M.[Mohsen],
Khadivi, P.[Pejman],
Samavi, S.[Shadrokh],
Karimi, N.[Nader],
Shirani, S.[Shahram],
Use of Frequency Domain for Complexity Reduction of Convolutional
Neural Networks,
MLCSA20(64-74).
Springer DOI
2103
BibRef
Berthelier, A.[Anthony],
Yan, Y.Z.[Yong-Zhe],
Chateau, T.[Thierry],
Blanc, C.[Christophe],
Duffner, S.[Stefan],
Garcia, C.[Christophe],
Learning Sparse Filters in Deep Convolutional Neural Networks with a
l1/l2 Pseudo-Norm,
CADL20(662-676).
Springer DOI
2103
BibRef
Zhang, L.,
Two recent advances on normalization methods for deep neural network
optimization,
VCIP20(1-1)
IEEE DOI
2102
Training, Optimization, Neural networks, Standardization,
Pattern analysis, Imaging
BibRef
Du, K.Y.[Kun-Yuan],
Zhang, Y.[Ya],
Guan, H.B.[Hai-Bing],
Tian, Q.[Qi],
Wang, Y.F.[Yan-Feng],
Cheng, S.G.[Sheng-Gan],
Lin, J.[James],
FTL: A Universal Framework for Training Low-bit DNNs via Feature
Transfer,
ECCV20(XXV:700-716).
Springer DOI
2011
BibRef
Jiang, Z.X.[Zi-Xuan],
Zhu, K.[Keren],
Liu, M.J.[Ming-Jie],
Gu, J.Q.[Jia-Qi],
Pan, D.Z.[David Z.],
An Efficient Training Framework for Reversible Neural Architectures,
ECCV20(XXVII:275-289).
Springer DOI
2011
Trade memory requirements for computation.
BibRef
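The memory-for-computation trade noted above rests on reversible blocks: inputs are recomputed from outputs in the backward pass instead of being stored. A minimal additive-coupling sketch (RevNet-style; illustrative, not the paper's exact framework):

```python
def rev_forward(x1, x2, f, g):
    """Additive coupling: the outputs determine the inputs exactly,
    so activations need not be stored for the backward pass."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2, f, g):
    """Recompute the inputs from the outputs (the memory saving)."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# f and g can be arbitrary (even non-invertible) functions.
f = lambda t: 3.0 * t + 1.0
g = lambda t: t * t
y1, y2 = rev_forward(2.0, 5.0, f, g)
x1, x2 = rev_inverse(y1, y2, f, g)  # recovers (2.0, 5.0)
```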
Herrmann, C.[Charles],
Bowen, R.S.[Richard Strong],
Zabih, R.[Ramin],
Channel Selection Using Gumbel Softmax,
ECCV20(XXVII:241-257).
Springer DOI
2011
Executing some layers, pruning, etc.
BibRef
Isikdogan, L.F.[Leo F.],
Nayak, B.V.[Bhavin V.],
Wu, C.T.[Chyuan-Tyng],
Moreira, J.P.[Joao Peralta],
Rao, S.[Sushma],
Michael, G.[Gilad],
SemifreddoNets: Partially Frozen Neural Networks for Efficient Computer
Vision Systems,
ECCV20(XXVII:193-208).
Springer DOI
2011
Partially frozen weights; only some weights change during learning.
BibRef
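Partially frozen training as in the annotation above can be sketched by masking the gradient update so that frozen entries never move; the mask and learning rate here are illustrative, not the SemifreddoNets layout.

```python
import numpy as np

def sgd_step(params, grads, frozen_mask, lr=0.1):
    """Update only trainable entries; frozen ones (mask=True) stay fixed."""
    return params - lr * grads * (~frozen_mask)

params = np.array([1.0, 2.0, 3.0, 4.0])
grads = np.ones(4)
frozen = np.array([True, True, False, False])  # first half never updated
new = sgd_step(params, grads, frozen)          # only last two entries move
```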
Xie, X.,
Zhou, Y.,
Kung, S.Y.,
Exploring Highly Efficient Compact Neural Networks For Image
Classification,
ICIP20(2930-2934)
IEEE DOI
2011
Convolution, Standards, Neural networks, Computational efficiency,
Task analysis, Fuses, Lightweight network,
inter-group information exchange
BibRef
Ahn, S.,
Chang, J.W.,
Kang, S.J.,
An Efficient Accelerator Design Methodology For Deformable
Convolutional Networks,
ICIP20(3075-3079)
IEEE DOI
2011
IP networks, Erbium, Zirconium, Indexes, Hardware accelerator,
deformable convolution, system architecture, FPGA, deep learning
BibRef
Kehrenberg, T.[Thomas],
Bartlett, M.[Myles],
Thomas, O.[Oliver],
Quadrianto, N.[Novi],
Null-sampling for Interpretable and Fair Representations,
ECCV20(XXVI:565-580).
Springer DOI
2011
Code, CNN.
WWW Link.
BibRef
Malkin, N.[Nikolay],
Ortiz, A.[Anthony],
Jojic, N.[Nebojsa],
Mining Self-similarity: Label Super-resolution with Epitomic
Representations,
ECCV20(XXVI:531-547).
Springer DOI
2011
Learn from very large data-sets.
BibRef
Liu, Z.G.[Zhi-Gang],
Mattina, M.[Matthew],
Efficient Residue Number System Based Winograd Convolution,
ECCV20(XIX:53-68).
Springer DOI
2011
BibRef
Park, E.[Eunhyeok],
Yoo, S.J.[Sung-Joo],
Profit: A Novel Training Method for sub-4-bit Mobilenet Models,
ECCV20(VI:430-446).
Springer DOI
2011
BibRef
Shomron, G.[Gil],
Banner, R.[Ron],
Shkolnik, M.[Moran],
Weiser, U.[Uri],
Thanks for Nothing: Predicting Zero-valued Activations with Lightweight
Convolutional Neural Networks,
ECCV20(X:234-250).
Springer DOI
2011
BibRef
Su, Z.[Zhuo],
Fang, L.P.[Lin-Pu],
Kang, W.X.[Wen-Xiong],
Hu, D.[Dewen],
Pietikäinen, M.[Matti],
Liu, L.[Li],
Dynamic Group Convolution for Accelerating Convolutional Neural
Networks,
ECCV20(VI:138-155).
Springer DOI
2011
BibRef
Xie, Z.D.[Zhen-Da],
Zhang, Z.[Zheng],
Zhu, X.[Xizhou],
Huang, G.[Gao],
Lin, S.[Stephen],
Spatially Adaptive Inference with Stochastic Feature Sampling and
Interpolation,
ECCV20(I:531-548).
Springer DOI
2011
Reduce superfluous computation in feature maps of CNNs.
BibRef
Phan, A.H.[Anh-Huy],
Sobolev, K.[Konstantin],
Sozykin, K.[Konstantin],
Ermilov, D.[Dmitry],
Gusak, J.[Julia],
Tichavský, P.[Petr],
Glukhov, V.[Valeriy],
Oseledets, I.[Ivan],
Cichocki, A.[Andrzej],
Stable Low-rank Tensor Decomposition for Compression of Convolutional
Neural Network,
ECCV20(XXIX: 522-539).
Springer DOI
2010
BibRef
Yong, H.W.[Hong-Wei],
Huang, J.Q.[Jian-Qiang],
Hua, X.S.[Xian-Sheng],
Zhang, L.[Lei],
Gradient Centralization: A New Optimization Technique for Deep Neural
Networks,
ECCV20(I:635-652).
Springer DOI
2011
BibRef
Yuan, Z.N.[Zhuo-Ning],
Guo, Z.S.[Zhi-Shuai],
Yu, X.T.[Xiao-Tian],
Wang, X.Y.[Xiao-Yu],
Yang, T.B.[Tian-Bao],
Accelerating Deep Learning with Millions of Classes,
ECCV20(XXIII:711-726).
Springer DOI
2011
BibRef
Vu, T.[Thanh],
Eder, M.[Marc],
Price, T.[True],
Frahm, J.M.[Jan-Michael],
Any-Width Networks,
EDLCV20(3018-3026)
IEEE DOI
2008
Adjust width as needed.
Training, Switches, Standards, Convolution,
Inference algorithms
BibRef
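Runtime width adjustment as in the annotation above can be sketched by slicing a layer's weight matrix to its first k output channels (slimmable-network style; this shows only the inference-time slicing, not the Any-Width training recipe).

```python
import numpy as np

def sliced_linear(x, w, b, width_ratio=1.0):
    """Use only the first k output channels of a dense layer,
    where k scales with `width_ratio` (runtime width switching)."""
    k = max(1, int(round(w.shape[0] * width_ratio)))
    return x @ w[:k].T + b[:k]

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4))   # 8 output channels, 4 inputs
b = np.zeros(8)
x = rng.normal(size=(2, 4))
y_full = sliced_linear(x, w, b, 1.0)   # full width: (2, 8)
y_half = sliced_linear(x, w, b, 0.5)   # half width: (2, 4)
```

The half-width output coincides with the first four columns of the full-width output, so one set of weights serves every width.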
Elsen, E.,
Dukhan, M.,
Gale, T.,
Simonyan, K.,
Fast Sparse ConvNets,
CVPR20(14617-14626)
IEEE DOI
2008
Kernel, Sparse matrices, Neural networks,
Standards, Computational modeling, Acceleration
BibRef
Song, G.L.[Guang-Lu],
Liu, Y.[Yu],
Wang, X.G.[Xiao-Gang],
Revisiting the Sibling Head in Object Detector,
CVPR20(11560-11569)
IEEE DOI
2008
in R-CNN.
Task analysis, Proposals, Detectors, Feature extraction, Training,
Google, Sensitivity
BibRef
Wang, Q.L.[Qi-Long],
Wu, B.G.[Bang-Gu],
Zhu, P.F.[Peng-Fei],
Li, P.H.[Pei-Hua],
Zuo, W.M.[Wang-Meng],
Hu, Q.H.[Qing-Hua],
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural
Networks,
CVPR20(11531-11539)
IEEE DOI
2008
Convolution, Complexity theory, Dimensionality reduction, Kernel,
Adaptation models, Computational modeling, Convolutional neural networks
BibRef
Chen, Y.P.[Yin-Peng],
Dai, X.Y.[Xi-Yang],
Liu, M.C.[Meng-Chen],
Chen, D.D.[Dong-Dong],
Yuan, L.[Lu],
Liu, Z.C.[Zi-Cheng],
Dynamic Convolution: Attention Over Convolution Kernels,
CVPR20(11027-11036)
IEEE DOI
2008
Expand as needed.
Convolution, Kernel, Neural networks,
Computational efficiency, Computational modeling, Training
BibRef
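"Expand as needed" above refers to aggregating several candidate kernels with input-dependent attention. A sketch of the aggregation step (the kernel shapes and fixed logits are illustrative; in the paper the attention weights are computed from the input):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_kernel(kernels, attention_logits):
    """Attention over K candidate kernels: the effective kernel is a
    convex combination, assembled per input before the convolution."""
    pi = softmax(attention_logits)            # (K,) mixing weights
    return np.tensordot(pi, kernels, axes=1)  # weighted sum over K

K = 4
kernels = np.stack([np.full((3, 3), float(k)) for k in range(K)])
logits = np.array([0.0, 0.0, 0.0, 10.0])  # input-dependent in practice
w_eff = dynamic_kernel(kernels, logits)   # ~= kernels[3]
```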
Xie, Q.Z.[Qi-Zhe],
Luong, M.T.[Minh-Thang],
Hovy, E.[Eduard],
Le, Q.V.[Quoc V.],
Self-Training With Noisy Student Improves ImageNet Classification,
CVPR20(10684-10695)
IEEE DOI
2008
Noise measurement, Training, Stochastic processes, Robustness,
Entropy, Data models, Image resolution
BibRef
Verelst, T.[Thomas],
Tuytelaars, T.[Tinne],
BlockCopy: High-Resolution Video Processing with Block-Sparse Feature
Propagation and Online Policies,
ICCV21(5138-5147)
IEEE DOI
2203
Training, Image segmentation, Computational modeling, Semantics,
Pipelines, Reinforcement learning, Video analysis and understanding
BibRef
Verelst, T.[Thomas],
Tuytelaars, T.[Tinne],
Dynamic Convolutions: Exploiting Spatial Sparsity for Faster
Inference,
CVPR20(2317-2326)
IEEE DOI
2008
Graphics processing units, Task analysis, Neural networks,
Tensile stress, Complexity theory, Image coding
BibRef
Goli, N.,
Aamodt, T.M.,
ReSprop: Reuse Sparsified Backpropagation,
CVPR20(1545-1555)
IEEE DOI
2008
Training, Convolution, Acceleration, Convolutional neural networks,
Hardware, Convergence, Correlation
BibRef
Idelbayev, Y.,
Carreira-Perpiñán, M.Á.,
Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer,
CVPR20(8046-8056)
IEEE DOI
2008
Neural networks, Training, Cost function, Tensile stress,
Image coding, Matrix decomposition
BibRef
Haroush, M.,
Hubara, I.,
Hoffer, E.,
Soudry, D.,
The Knowledge Within: Methods for Data-Free Model Compression,
CVPR20(8491-8499)
IEEE DOI
2008
Training, Data models, Optimization, Computational modeling,
Calibration, Training data, Degradation
BibRef
Rajagopal, A.[Aditya],
Bouganis, C.S.[Christos-Savvas],
perf4sight: A toolflow to model CNN training performance on Edge GPUs,
ERCVAD21(963-971)
IEEE DOI
2112
BibRef
Earlier:
Now that I can see, I can improve: Enabling data-driven finetuning of
CNNs on the edge,
EDLCV20(3058-3067)
IEEE DOI
2008
Training, Performance evaluation, Adaptation models, Power demand,
Network topology, Memory management, Predictive models,
Data models, Computational modeling, Topology
BibRef
Chatzikonstantinou, C.,
Papadopoulos, G.T.,
Dimitropoulos, K.,
Daras, P.,
Neural Network Compression Using Higher-Order Statistics and
Auxiliary Reconstruction Losses,
EDLCV20(3077-3086)
IEEE DOI
2008
Gaussian distribution, Training, Higher order statistics,
Measurement, Neural networks, Machine learning, Computational complexity
BibRef
Saini, R.,
Jha, N.K.,
Das, B.,
Mittal, S.,
Mohan, C.K.,
ULSAM: Ultra-Lightweight Subspace Attention Module for Compact
Convolutional Neural Networks,
WACV20(1616-1625)
IEEE DOI
2006
Convolution, Computational modeling, Task analysis,
Computational efficiency, Feature extraction, Redundancy, Head
BibRef
Suau, X.,
Zappella, L.[Luca],
Apostoloff, N.,
Filter Distillation for Network Compression,
WACV20(3129-3138)
IEEE DOI
2006
Correlation, Training, Tensile stress,
Eigenvalues and eigenfunctions, Image coding, Decorrelation,
Principal component analysis
BibRef
Wang, M.,
Cai, H.,
Huang, X.,
Gong, M.,
ADNet: Adaptively Dense Convolutional Neural Networks,
WACV20(990-999)
IEEE DOI
2006
Adaptation models, Training, Convolution, Task analysis,
Convolutional neural networks, Computational efficiency
BibRef
Hsu, L.,
Chiu, C.,
Lin, K.,
An Energy-Aware Bit-Serial Streaming Deep Convolutional Neural
Network Accelerator,
ICIP19(4609-4613)
IEEE DOI
1910
CNNs, Hardware Accelerator, EnergyAware, Precision, Bit-Serial PE,
Streaming Dataflow
BibRef
Lu, J.[Jing],
Xu, C.F.[Chao-Fan],
Zhang, W.[Wei],
Duan, L.Y.[Ling-Yu],
Mei, T.[Tao],
Sampling Wisely: Deep Image Embedding by Top-K Precision Optimization,
ICCV19(7960-7969)
IEEE DOI
2004
convolutional neural nets, gradient methods, image processing,
learning (artificial intelligence),
Toy manufacturing industry
BibRef
Nascimento, M.G.D.,
Prisacariu, V.,
Fawcett, R.,
DSConv: Efficient Convolution Operator,
ICCV19(5147-5156)
IEEE DOI
2004
convolutional neural nets, neural net architecture,
statistical distributions, DSConv,
Training data
BibRef
Chao, P.,
Kao, C.,
Ruan, Y.,
Huang, C.,
Lin, Y.,
HarDNet: A Low Memory Traffic Network,
ICCV19(3551-3560)
IEEE DOI
2004
feature extraction, image segmentation, neural nets,
object detection, neural network architectures, MACs,
Power demand
BibRef
Chen, Y.,
Fan, H.,
Xu, B.,
Yan, Z.,
Kalantidis, Y.,
Rohrbach, M.,
Yan, S.C.[Shui-Cheng],
Feng, J.,
Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural
Networks With Octave Convolution,
ICCV19(3434-3443)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image classification, image resolution, neural net architecture, Kernel
BibRef
Phuong, M.[Mary],
Lampert, C.H.[Christoph H.],
Distillation-Based Training for Multi-Exit Architectures,
ICCV19(1355-1364)
IEEE DOI
2004
To terminate processing early.
convolutional neural nets, image classification, probability,
supervised learning, training procedure, multiexit architectures,
BibRef
Chen, Y.,
Liu, S.,
Shen, X.,
Jia, J.,
Fast Point R-CNN,
ICCV19(9774-9783)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image representation, object detection, solid modelling,
Detectors
BibRef
Gkioxari, G.,
Johnson, J.,
Malik, J.,
Mesh R-CNN,
ICCV19(9784-9794)
IEEE DOI
2004
computational geometry,
convolutional neural nets, feature extraction, graph theory,
Benchmark testing
BibRef
Vooturi, D.T.[Dharma Teja],
Varma, G.[Girish],
Kothapalli, K.[Kishore],
Dynamic Block Sparse Reparameterization of Convolutional Neural
Networks,
CEFRL19(3046-3053)
IEEE DOI
2004
Code, Convolutional Networks.
WWW Link.
convolutional neural nets, image classification,
learning (artificial intelligence), dense neural networks, neural networks
BibRef
Gusak, J.,
Kholiavchenko, M.,
Ponomarev, E.,
Markeeva, L.,
Blagoveschensky, P.,
Cichocki, A.,
Oseledets, I.,
Automated Multi-Stage Compression of Neural Networks,
LPCV19(2501-2508)
IEEE DOI
2004
approximation theory, iterative methods, matrix decomposition,
neural nets, tensors, noniterative ones,
automated
BibRef
Hascoet, T.,
Febvre, Q.,
Zhuang, W.,
Ariki, Y.,
Takiguchi, T.,
Layer-Wise Invertibility for Extreme Memory Cost Reduction of CNN
Training,
NeruArch19(2049-2052)
IEEE DOI
2004
backpropagation, convolutional neural nets,
graphics processing units, minimal training memory consumption,
invertible transformations
BibRef
Ghosh, R.,
Gupta, A.K.,
Motani, M.,
Investigating Convolutional Neural Networks using Spatial Orderness,
NeruArch19(2053-2056)
IEEE DOI
2004
convolutional neural nets, image classification,
statistical analysis, convolutional neural networks, CNN,
Opening the black box of CNNs
BibRef
Cruz Vargas, J.A.[Jesus Adan],
Zamora Esquivel, J.[Julio],
Tickoo, O.[Omesh],
Introducing Region Pooling Learning,
CADL20(714-724).
Springer DOI
2103
BibRef
Esquivel, J.Z.[Julio Zamora],
Cruz Vargas, J.A.[Jesus Adan],
Tickoo, O.[Omesh],
Second Order Bifurcating Methodology for Neural Network Training and
Topology Optimization,
CADL20(725-738).
Springer DOI
2103
BibRef
Zamora Esquivel, J.,
Cruz Vargas, A.,
Lopez Meyer, P.,
Tickoo, O.,
Adaptive Convolutional Kernels,
NeruArch19(1998-2005)
IEEE DOI
2004
computational complexity,
convolutional neural nets, edge detection, feature extraction,
machine learning
BibRef
Köpüklü, O.,
Kose, N.,
Gunduz, A.,
Rigoll, G.,
Resource Efficient 3D Convolutional Neural Networks,
NeruArch19(1910-1919)
IEEE DOI
2004
convolutional neural nets, graphics processing units,
learning (artificial intelligence), UCF-101 dataset,
Action/Activity Recognition
BibRef
Yoo, K.M.,
Jo, H.S.,
Lee, H.,
Han, J.,
Lee, S.,
Stochastic Relational Network,
SDL-CV19(788-792)
IEEE DOI
2004
computational complexity, data visualisation,
inference mechanisms, learning (artificial intelligence),
gradient estimator
BibRef
Rannen-Triki, A.,
Berman, M.,
Kolmogorov, V.,
Blaschko, M.B.,
Function Norms for Neural Networks,
SDL-CV19(748-752)
IEEE DOI
2004
computational complexity, function approximation,
learning (artificial intelligence), neural nets,
Regularization
BibRef
Han, D.,
Yoo, H.,
Direct Feedback Alignment Based Convolutional Neural Network Training
for Low-Power Online Learning Processor,
LPCV19(2445-2452)
IEEE DOI
2004
backpropagation, convolutional neural nets,
learning (artificial intelligence), DFA algorithm, CNN training,
Back propagation
BibRef
Yan, X.P.[Xiao-Peng],
Chen, Z.L.[Zi-Liang],
Xu, A.[Anni],
Wang, X.X.[Xiao-Xi],
Liang, X.D.[Xiao-Dan],
Lin, L.[Liang],
Meta R-CNN: Towards General Solver for Instance-Level Low-Shot
Learning,
ICCV19(9576-9585)
IEEE DOI
2004
Code, Learning.
HTML Version.
convolutional neural nets, image representation,
image sampling, image segmentation, Object recognition
BibRef
Dai, X.L.[Xiao-Liang],
Zhang, P.Z.[Pei-Zhao],
Wu, B.[Bichen],
Yin, H.X.[Hong-Xu],
Sun, F.[Fei],
Wang, Y.[Yanghan],
Dukhan, M.[Marat],
Hu, Y.Q.[Yun-Qing],
Wu, Y.M.[Yi-Ming],
Jia, Y.Q.[Yang-Qing],
Vajda, P.[Peter],
Uyttendaele, M.T.[Matt T.],
Jha, N.K.[Niraj K.],
ChamNet: Towards Efficient Network Design Through Platform-Aware Model
Adaptation,
CVPR19(11390-11399).
IEEE DOI
2002
BibRef
Zhang, Y.F.[Yan-Fu],
Gao, S.Q.[Shang-Qian],
Huang, H.[Heng],
Exploration and Estimation for Model Compression,
ICCV21(477-486)
IEEE DOI
2203
Training, Visualization, Heuristic algorithms,
Computational modeling, Estimation, Stochastic processes,
Machine learning architectures and formulations
BibRef
Gao, S.Q.[Shang-Qian],
Deng, C.[Cheng],
Huang, H.[Heng],
Cross Domain Model Compression by Structurally Weight Sharing,
CVPR19(8965-8974).
IEEE DOI
2002
BibRef
Liu, Y.J.[Ya-Jing],
Tian, X.M.[Xin-Mei],
Li, Y.[Ya],
Xiong, Z.W.[Zhi-Wei],
Wu, F.[Feng],
Compact Feature Learning for Multi-Domain Image Classification,
CVPR19(7186-7194).
IEEE DOI
2002
BibRef
Li, J.S.[Jia-Shi],
Qi, Q.[Qi],
Wang, J.Y.[Jing-Yu],
Ge, C.[Ce],
Li, Y.J.[Yu-Jian],
Yue, Z.Z.[Zhang-Zhang],
Sun, H.F.[Hai-Feng],
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural
Networks,
CVPR19(7039-7048).
IEEE DOI
2002
BibRef
Kim, H.[Hyeji],
Khan, M.U.K.[Muhammad Umar Karim],
Kyung, C.M.[Chong-Min],
Efficient Neural Network Compression,
CVPR19(12561-12569).
IEEE DOI
2002
BibRef
Minnehan, B.[Breton],
Savakis, A.[Andreas],
Cascaded Projection: End-To-End Network Compression and Acceleration,
CVPR19(10707-10716).
IEEE DOI
2002
BibRef
Lin, Y.H.[Yu-Hsun],
Chou, C.N.[Chun-Nan],
Chang, E.Y.[Edward Y.],
MBS: Macroblock Scaling for CNN Model Reduction,
CVPR19(9109-9117).
IEEE DOI
2002
BibRef
Gao, Y.[Yuan],
Ma, J.Y.[Jia-Yi],
Zhao, M.B.[Ming-Bo],
Liu, W.[Wei],
Yuille, A.L.[Alan L.],
NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural
Discriminative Dimensionality Reduction,
CVPR19(3200-3209).
IEEE DOI
2002
BibRef
Wang, H.Y.[Hui-Yu],
Kembhavi, A.[Aniruddha],
Farhadi, A.[Ali],
Yuille, A.L.[Alan L.],
Rastegari, M.[Mohammad],
ELASTIC: Improving CNNs With Dynamic Scaling Policies,
CVPR19(2253-2262).
IEEE DOI
2002
BibRef
Yang, H.C.[Hai-Chuan],
Zhu, Y.H.[Yu-Hao],
Liu, J.[Ji],
ECC: Platform-Independent Energy-Constrained Deep Neural Network
Compression via a Bilinear Regression Model,
CVPR19(11198-11207).
IEEE DOI
2002
BibRef
Gong, L.Y.[Li-Yu],
Cheng, Q.A.[Qi-Ang],
Exploiting Edge Features for Graph Neural Networks,
CVPR19(9203-9211).
IEEE DOI
2002
BibRef
Kossaifi, J.[Jean],
Bulat, A.[Adrian],
Tzimiropoulos, G.[Georgios],
Pantic, M.[Maja],
T-Net: Parametrizing Fully Convolutional Nets With a Single High-Order
Tensor,
CVPR19(7814-7823).
IEEE DOI
2002
BibRef
Chen, W.J.[Wei-Jie],
Xie, D.[Di],
Zhang, Y.[Yuan],
Pu, S.L.[Shi-Liang],
All You Need Is a Few Shifts: Designing Efficient Convolutional Neural
Networks for Image Classification,
CVPR19(7234-7243).
IEEE DOI
2002
BibRef
Georgiadis, G.[Georgios],
Accelerating Convolutional Neural Networks via Activation Map
Compression,
CVPR19(7078-7088).
IEEE DOI
2002
BibRef
Zhu, S.L.[Shi-Lin],
Dong, X.[Xin],
Su, H.[Hao],
Binary Ensemble Neural Network: More Bits per Network or More Networks
per Bit?,
CVPR19(4918-4927).
IEEE DOI
2002
BibRef
Li, T.H.[Tuan-Hui],
Wu, B.Y.[Bao-Yuan],
Yang, Y.J.[Yu-Jiu],
Fan, Y.B.[Yan-Bo],
Zhang, Y.[Yong],
Liu, W.[Wei],
Compressing Convolutional Neural Networks via Factorized Convolutional
Filters,
CVPR19(3972-3981).
IEEE DOI
2002
BibRef
Kim, E.[Eunwoo],
Ahn, C.[Chanho],
Torr, P.H.S.[Philip H.S.],
Oh, S.H.[Song-Hwai],
Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks,
CVPR19(2705-2714).
IEEE DOI
2002
BibRef
He, T.[Tong],
Zhang, Z.[Zhi],
Zhang, H.[Hang],
Zhang, Z.Y.[Zhong-Yue],
Xie, J.Y.[Jun-Yuan],
Li, M.[Mu],
Bag of Tricks for Image Classification with Convolutional Neural
Networks,
CVPR19(558-567).
IEEE DOI
2002
BibRef
Wang, X.J.[Xi-Jun],
Kan, M.[Meina],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
Fully Learnable Group Convolution for Acceleration of Deep Neural
Networks,
CVPR19(9041-9050).
IEEE DOI
2002
BibRef
Li, Y.C.[Yu-Chao],
Lin, S.H.[Shao-Hui],
Zhang, B.C.[Bao-Chang],
Liu, J.Z.[Jian-Zhuang],
Doermann, D.[David],
Wu, Y.J.[Yong-Jian],
Huang, F.Y.[Fei-Yue],
Ji, R.R.[Rong-Rong],
Exploiting Kernel Sparsity and Entropy for Interpretable CNN
Compression,
CVPR19(2795-2804).
IEEE DOI
2002
BibRef
Zhao, R.[Ritchie],
Hu, Y.W.[Yu-Wei],
Dotzel, J.[Jordan],
de Sa, C.[Christopher],
Zhang, Z.[Zhiru],
Building Efficient Deep Neural Networks With Unitary Group Convolutions,
CVPR19(11295-11304).
IEEE DOI
2002
BibRef
Qiao, S.Y.[Si-Yuan],
Lin, Z.[Zhe],
Zhang, J.M.[Jian-Ming],
Yuille, A.L.[Alan L.],
Neural Rejuvenation: Improving Deep Network Training by Enhancing
Computational Resource Utilization,
CVPR19(61-71).
IEEE DOI
2002
BibRef
Tagaris, T.[Thanos],
Sdraka, M.[Maria],
Stafylopatis, A.[Andreas],
High-Resolution Class Activation Mapping,
ICIP19(4514-4518)
IEEE DOI
1910
Discriminative localization, Class Activation Map, Deep Learning,
Convolutional Neural Networks
BibRef
Lubana, E.S.,
Dick, R.P.,
Aggarwal, V.,
Pradhan, P.M.,
Minimalistic Image Signal Processing for Deep Learning Applications,
ICIP19(4165-4169)
IEEE DOI
1910
Deep learning accelerators, Image signal processor, RAW images, Covariate shift
BibRef
Saha, A.[Avinab],
Ram, K.S.[K. Sai],
Mukhopadhyay, J.[Jayanta],
Das, P.P.[Partha Pratim],
Patra, A.[Amit],
Fitness Based Layer Rank Selection Algorithm for Accelerating CNNs by
Candecomp/Parafac (CP) Decompositions,
ICIP19(3402-3406)
IEEE DOI
1910
CP Decompositions, FLRS, Accelerating CNNs, Rank Selection, Compression
BibRef
Xu, D.,
Lee, M.L.,
Hsu, W.,
Patch-Level Regularizer for Convolutional Neural Network,
ICIP19(3232-3236)
IEEE DOI
1910
BibRef
Kim, M.,
Park, C.,
Kim, S.,
Hong, T.,
Ro, W.W.,
Efficient Dilated-Winograd Convolutional Neural Networks,
ICIP19(2711-2715)
IEEE DOI
1910
Image processing and dilated convolution,
Winograd convolution, neural network, graphics processing unit
BibRef
Saporta, A.,
Chen, Y.,
Blot, M.,
Cord, M.,
Reve: Regularizing Deep Learning with Variational Entropy Bound,
ICIP19(1610-1614)
IEEE DOI
1910
Deep learning, regularization, invariance, information theory,
image understanding
BibRef
Choi, Y.,
Choi, J.,
Moon, H.,
Lee, J.,
Chang, J.,
Accelerating Framework for Simultaneous Optimization of Model
Architectures and Training Hyperparameters,
ICIP19(3831-3835)
IEEE DOI
1910
Deep Learning, Model Hyperparameters
BibRef
Zhe, W.,
Lin, J.,
Chandrasekhar, V.,
Girod, B.,
Optimizing the Bit Allocation for Compression of Weights and
Activations of Deep Neural Networks,
ICIP19(3826-3830)
IEEE DOI
1910
Deep Learning, Coding, Compression
BibRef
Lei, X.,
Liu, L.,
Zhou, Z.,
Sun, H.,
Zheng, N.,
Exploring Hardware Friendly Bottleneck Architecture in CNN for
Embedded Computing Systems,
ICIP19(4180-4184)
IEEE DOI
1910
Lightweight/Mobile CNN model, Model optimization,
Embedded System, Hardware Accelerating.
BibRef
Geng, X.[Xue],
Lin, J.[Jie],
Zhao, B.[Bin],
Kong, A.[Anmin],
Aly, M.M.S.[Mohamed M. Sabry],
Chandrasekhar, V.[Vijay],
Hardware-Aware Softmax Approximation for Deep Neural Networks,
ACCV18(IV:107-122).
Springer DOI
1906
BibRef
Groh, F.[Fabian],
Wieschollek, P.[Patrick],
Lensch, H.P.A.[Hendrik P. A.],
Flex-Convolution,
ACCV18(I:105-122).
Springer DOI
1906
BibRef
Yang, L.[Lu],
Song, Q.[Qing],
Li, Z.X.[Zuo-Xin],
Wu, Y.Q.[Ying-Qi],
Li, X.J.[Xiao-Jie],
Hu, M.J.[Meng-Jie],
Cross Connected Network for Efficient Image Recognition,
ACCV18(I:56-71).
Springer DOI
1906
BibRef
Ignatov, A.[Andrey],
Timofte, R.[Radu],
Chou, W.[William],
Wang, K.[Ke],
Wu, M.[Max],
Hartley, T.[Tim],
Van Gool, L.J.[Luc J.],
AI Benchmark: Running Deep Neural Networks on Android Smartphones,
PerceptualRest18(V:288-314).
Springer DOI
1905
BibRef
Li, X.,
Zhang, S.,
Jiang, B.,
Qi, Y.,
Chuah, M.C.,
Bi, N.,
DAC: Data-Free Automatic Acceleration of Convolutional Networks,
WACV19(1598-1606)
IEEE DOI
1904
convolutional neural nets, image classification,
Internet of Things, learning (artificial intelligence),
Deep learning
BibRef
He, Y.,
Liu, X.,
Zhong, H.,
Ma, Y.,
AddressNet: Shift-Based Primitives for Efficient Convolutional Neural
Networks,
WACV19(1213-1222)
IEEE DOI
1904
convolutional neural nets, coprocessors,
learning (artificial intelligence), parallel algorithms,
Fuses
BibRef
He, Z.Z.[Zhe-Zhi],
Gong, B.Q.[Bo-Qing],
Fan, D.L.[De-Liang],
Optimize Deep Convolutional Neural Network with Ternarized Weights
and High Accuracy,
WACV19(913-921)
IEEE DOI
1904
reduce to -1, 0, +1.
convolutional neural nets, embedded systems,
image classification, image coding, image representation,
Hardware
BibRef
Bicici, U.C.[Ufuk Can],
Keskin, C.[Cem],
Akarun, L.[Lale],
Conditional Information Gain Networks,
ICPR18(1390-1395)
IEEE DOI
1812
Decision trees, Neural networks, Computational modeling, Training,
Routing, Vegetation, Probability distribution
BibRef
Aldana, R.[Rodrigo],
Campos-Macías, L.[Leobardo],
Zamora, J.[Julio],
Gomez-Gutierrez, D.[David],
Cruz, A.[Adan],
Dynamic Learning Rate for Neural Networks:
A Fixed-Time Stability Approach,
ICPR18(1378-1383)
IEEE DOI
1812
Training, Artificial neural networks, Approximation algorithms,
Optimization, Heuristic algorithms, Lyapunov methods
BibRef
Kung, H.T.,
McDanel, B.,
Zhang, S.Q.,
Adaptive Tiling: Applying Fixed-size Systolic Arrays To Sparse
Convolutional Neural Networks,
ICPR18(1006-1011)
IEEE DOI
1812
Sparse matrices, Arrays, Convolution, Adaptive arrays,
Microprocessors, Adaptation models
BibRef
Grelsson, B.,
Felsberg, M.,
Improved Learning in Convolutional Neural Networks with Shifted
Exponential Linear Units (ShELUs),
ICPR18(517-522)
IEEE DOI
1812
convolution, feedforward neural nets, learning (artificial intelligence).
BibRef
Zheng, W.,
Zhang, Z.,
Accelerating the Classification of Very Deep Convolutional Network by
A Cascading Approach,
ICPR18(355-360)
IEEE DOI
1812
computational complexity, convolution, entropy,
feedforward neural nets, image classification,
Measurement uncertainty
BibRef
Zhong, G.,
Yao, H.,
Zhou, H.,
Merging Neurons for Structure Compression of Deep Networks,
ICPR18(1462-1467)
IEEE DOI
1812
Neurons, Neural networks, Merging,
Matrix decomposition, Mathematical model, Prototypes
BibRef
Bhowmik, P.[Pankaj],
Pantho, M.J.H.[M. Jubaer Hossain],
Asadinia, M.[Marjan],
Bobda, C.[Christophe],
Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic
Architecture for Smart Image Sensor,
ECVW18(786-7868)
IEEE DOI
1812
Image sensors, Visualization,
Program processors, Clocks, Image processing
BibRef
Aggarwal, V.[Vaneet],
Wang, W.L.[Wen-Lin],
Eriksson, B.[Brian],
Sun, Y.F.[Yi-Fan],
Wan, W.Q.[Wen-Qi],
Wide Compression: Tensor Ring Nets,
CVPR18(9329-9338)
IEEE DOI
1812
Neural networks, Image coding, Shape, Merging, Computer architecture
BibRef
Ren, M.Y.[Meng-Ye],
Pokrovsky, A.[Andrei],
Yang, B.[Bin],
Urtasun, R.[Raquel],
SBNet: Sparse Blocks Network for Fast Inference,
CVPR18(8711-8720)
IEEE DOI
1812
Convolution, Kernel, Shape, Object detection, Task analysis
BibRef
Xie, G.T.[Guo-Tian],
Wang, J.D.[Jing-Dong],
Zhang, T.[Ting],
Lai, J.H.[Jian-Huang],
Hong, R.C.[Ri-Chang],
Qi, G.J.[Guo-Jun],
Interleaved Structured Sparse Convolutional Neural Networks,
CVPR18(8847-8856)
IEEE DOI
1812
Convolution, Kernel, Sparse matrices, Redundancy,
Computational modeling, Computational complexity
BibRef
Kim, E.[Eunwoo],
Ahn, C.[Chanho],
Oh, S.H.[Song-Hwai],
NestedNet: Learning Nested Sparse Structures in Deep Neural Networks,
CVPR18(8669-8678)
IEEE DOI
1812
Task analysis, Knowledge engineering, Neural networks,
Optimization, Redundancy
BibRef
Bulò, S.R.[Samuel Rota],
Porzi, L.[Lorenzo],
Kontschieder, P.[Peter],
In-place Activated BatchNorm for Memory-Optimized Training of DNNs,
CVPR18(5639-5647)
IEEE DOI
1812
Reduce memory needs.
Training, Buffer storage, Checkpointing, Memory management,
Standards, Semantics
BibRef
Zhang, D.,
clcNet: Improving the Efficiency of Convolutional Neural Network
Using Channel Local Convolutions,
CVPR18(7912-7919)
IEEE DOI
1812
Kernel, Computational modeling, Computational efficiency,
Convolutional neural networks, Stacking
BibRef
Kuen, J.,
Kong, X.,
Lin, Z.,
Wang, G.,
Yin, J.,
See, S.,
Tan, Y.,
Stochastic Downsampling for Cost-Adjustable Inference and Improved
Regularization in Convolutional Networks,
CVPR18(7929-7938)
IEEE DOI
1812
Training, Computational modeling, Computational efficiency,
Stochastic processes, Visualization, Network architecture
BibRef
Shazeer, N.,
Fatahalian, K.,
Mark, W.R.,
Mullapudi, R.T.,
HydraNets: Specialized Dynamic Architectures for Efficient Inference,
CVPR18(8080-8089)
IEEE DOI
1812
Training, Computational modeling,
Task analysis, Computational efficiency, Optimization, Routing
BibRef
Rebuffi, S.,
Vedaldi, A.,
Bilen, H.,
Efficient Parametrization of Multi-domain Deep Neural Networks,
CVPR18(8119-8127)
IEEE DOI
1812
Task analysis, Neural networks, Adaptation models,
Feature extraction, Visualization, Computational modeling, Standards
BibRef
Cao, S.[Sen],
Liu, Y.Z.[Ya-Zhou],
Zhou, C.X.[Chang-Xin],
Sun, Q.S.[Quan-Sen],
Pongsak, L.S.[La-Sang],
Shen, S.M.[Sheng Mei],
ThinNet: An Efficient Convolutional Neural Network for Object
Detection,
ICPR18(836-841)
IEEE DOI
1812
Convolution, Computational modeling, Object detection,
Neural networks, Training,
ThinNet
BibRef
Kobayashi, T.[Takumi],
t-vMF Similarity For Regularizing Intra-Class Feature Distribution,
CVPR21(6612-6621)
IEEE DOI
2111
WWW Link.
Code, Training. Training, Computational modeling, Focusing,
Noise measurement, Convolutional neural networks
BibRef
Kobayashi, T.[Takumi],
Analyzing Filters Toward Efficient ConvNet,
CVPR18(5619-5628)
IEEE DOI
1812
Convolution, Feature extraction, Neurons, Image reconstruction,
Visualization, Shape
BibRef
Chou, Y.,
Chan, Y.,
Lee, J.,
Chiu, C.,
Chen, C.,
Merging Deep Neural Networks for Mobile Devices,
EfficientDeep18(1767-17678)
IEEE DOI
1812
Task analysis, Convolution, Merging, Computational modeling,
Neural networks, Kernel, Computer architecture
BibRef
Ma, N.N.[Ning-Ning],
Zhang, X.Y.[Xiang-Yu],
Zheng, H.T.[Hai-Tao],
Sun, J.[Jian],
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture
Design,
ECCV18(XIV: 122-138).
Springer DOI
1810
BibRef
Zhang, X.Y.[Xiang-Yu],
Zhou, X.,
Lin, M.,
Sun, J.,
ShuffleNet: An Extremely Efficient Convolutional Neural Network for
Mobile Devices,
CVPR18(6848-6856)
IEEE DOI
1812
Convolution, Complexity theory,
Mobile handsets, Computational modeling, Task analysis, Neural networks
BibRef
Prabhu, A.[Ameya],
Varma, G.[Girish],
Namboodiri, A.[Anoop],
Deep Expander Networks: Efficient Deep Networks from Graph Theory,
ECCV18(XIII: 20-36).
Springer DOI
1810
BibRef
Freeman, I.,
Roese-Koerner, L.,
Kummert, A.,
Effnet: An Efficient Structure for Convolutional Neural Networks,
ICIP18(6-10)
IEEE DOI
1809
Convolution, Computational modeling, Optimization, Hardware, Kernel,
Data compression, Convolutional neural networks,
real-time inference
BibRef
Elordi, U.[Unai],
Unzueta, L.[Luis],
Arganda-Carreras, I.[Ignacio],
Otaegui, O.[Oihana],
How Can Deep Neural Networks Be Generated Efficiently for Devices with
Limited Resources?,
AMDO18(24-33).
Springer DOI
1807
BibRef
Lee, T.K.[Tae Kwan],
Baddar, W.J.[Wissam J.],
Kim, S.T.[Seong Tae],
Ro, Y.M.[Yong Man],
Convolution with Logarithmic Filter Groups for Efficient Shallow CNN,
MMMod18(I:117-129).
Springer DOI
1802
filter grouping in convolution layers.
BibRef
Véniat, T.[Tom],
Denoyer, L.[Ludovic],
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super
Networks,
CVPR18(3492-3500)
IEEE DOI
1812
Computational modeling,
Stochastic processes, Neural networks, Fabrics, Predictive models
BibRef
Huang, G.[Gao],
Liu, Z.[Zhuang],
van der Maaten, L.[Laurens],
Weinberger, K.Q.[Kilian Q.],
Densely Connected Convolutional Networks,
CVPR17(2261-2269)
IEEE DOI
1711
Award, CVPR. Convolution, Convolutional codes, Network architecture,
Neural networks, Road transportation, Training
BibRef
Huang, G.[Gao],
Sun, Y.[Yu],
Liu, Z.[Zhuang],
Sedra, D.[Daniel],
Weinberger, K.Q.[Kilian Q.],
Deep Networks with Stochastic Depth,
ECCV16(IV: 646-661).
Springer DOI
1611
BibRef
Huang, G.[Gao],
Liu, S.C.[Shi-Chen],
van der Maaten, L.[Laurens],
Weinberger, K.Q.[Kilian Q.],
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions,
CVPR18(2752-2761)
IEEE DOI
1812
CNN on a phone.
Training, Computational modeling, Standards,
Mobile handsets, Network architecture, Indexes
BibRef
Zhao, G.,
Zhang, Z.,
Guan, H.,
Tang, P.,
Wang, J.,
Rethinking ReLU to Train Better CNNs,
ICPR18(603-608)
IEEE DOI
1812
Convolution, Tensile stress, Network architecture,
Computational efficiency, Computational modeling
BibRef
Chan, M.,
Scarafoni, D.,
Duarte, R.,
Thornton, J.,
Skelly, L.,
Learning Network Architectures of Deep CNNs Under Resource
Constraints,
EfficientDeep18(1784-17847)
IEEE DOI
1812
Computational modeling, Optimization,
Adaptation models, Network architecture, Linear programming, Training
BibRef
Bhagoji, A.N.[Arjun Nitin],
He, W.[Warren],
Li, B.[Bo],
Song, D.[Dawn],
Practical Black-Box Attacks on Deep Neural Networks Using Efficient
Query Mechanisms,
ECCV18(XII: 158-174).
Springer DOI
1810
BibRef
Kuen, J.[Jason],
Kong, X.F.[Xiang-Fei],
Wang, G.[Gang],
Tan, Y.P.[Yap-Peng],
DelugeNets: Deep Networks with Efficient and Flexible Cross-Layer
Information Inflows,
CEFR-LCV17(958-966)
IEEE DOI
1802
Complexity theory, Computational modeling,
Convolution, Correlation, Neural networks
BibRef
Singh, A.,
Kingsbury, N.G.,
Efficient Convolutional Network Learning Using Parametric Log Based
Dual-Tree Wavelet ScatterNet,
CEFR-LCV17(1140-1147)
IEEE DOI
1802
Feature extraction,
Personal area networks, Standards, Training
BibRef
Liu, Z.,
Li, J.,
Shen, Z.,
Huang, G.,
Yan, S.,
Zhang, C.,
Learning Efficient Convolutional Networks through Network Slimming,
ICCV17(2755-2763)
IEEE DOI
1802
convolution, image classification,
learning (artificial intelligence), neural nets, CNNs,
Training
BibRef
Ioannou, Y.,
Robertson, D.,
Cipolla, R.,
Criminisi, A.,
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups,
CVPR17(5977-5986)
IEEE DOI
1711
Computational complexity, Computational modeling,
Convolution, Graphics processing units,
Neural networks, Training
BibRef
Lin, J.H.,
Xing, T.,
Zhao, R.,
Zhang, Z.,
Srivastava, M.,
Tu, Z.,
Gupta, R.K.,
Binarized Convolutional Neural Networks with Separable Filters for
Efficient Hardware Acceleration,
ECVW17(344-352)
IEEE DOI
1709
Backpropagation, Convolution, Field programmable gate arrays,
Filtering theory, Hardware, Kernel, Training
BibRef
Zhang, X.,
Li, Z.,
Loy, C.C.,
Lin, D.,
PolyNet: A Pursuit of Structural Diversity in Very Deep Networks,
CVPR17(3900-3908)
IEEE DOI
1711
Agriculture, Benchmark testing, Computational efficiency,
Diversity reception, Network architecture, Systematics, Training
BibRef
Yan, S.,
Keynotes: Deep learning for visual understanding:
Effectiveness vs. efficiency,
VCIP16(1-1)
IEEE DOI
1701
BibRef
Karmakar, P.,
Teng, S.W.,
Zhang, D.,
Liu, Y.,
Lu, G.,
Improved Tamura Features for Image Classification Using Kernel Based
Descriptors,
DICTA17(1-7)
IEEE DOI
1804
BibRef
And:
Improved Kernel Descriptors for Effective and Efficient Image
Classification,
DICTA17(1-8)
IEEE DOI
1804
BibRef
Earlier:
Combining Pyramid Match Kernel and Spatial Pyramid for Image
Classification,
DICTA16(1-8)
IEEE DOI
1701
Gabor filters, image colour analysis, image segmentation,
feature extraction, image classification,
image representation, effective image classification
BibRef
Karmakar, P.,
Teng, S.W.,
Lu, G.,
Zhang, D.,
Rotation Invariant Spatial Pyramid Matching for Image Classification,
DICTA15(1-8)
IEEE DOI
1603
image classification
BibRef
Opitz, M.[Michael],
Possegger, H.[Horst],
Bischof, H.[Horst],
Efficient Model Averaging for Deep Neural Networks,
ACCV16(II: 205-220).
Springer DOI
1704
BibRef
Zhang, Z.M.[Zi-Ming],
Chen, Y.T.[Yu-Ting],
Saligrama, V.[Venkatesh],
Efficient Training of Very Deep Neural Networks for Supervised
Hashing,
CVPR16(1487-1495)
IEEE DOI
1612
BibRef
Smith, L.N.,
Cyclical Learning Rates for Training Neural Networks,
WACV17(464-472)
IEEE DOI
1609
Computational efficiency, Neural networks,
Schedules, Training, Tuning
BibRef
Cardona-Escobar, A.F.[Andrés F.],
Giraldo-Forero, A.F.[Andrés F.],
Castro-Ospina, A.E.[Andrés E.],
Jaramillo-Garzón, J.A.[Jorge A.],
Efficient Hyperparameter Optimization in Convolutional Neural Networks
by Learning Curves Prediction,
CIARP17(143-151).
Springer DOI
1802
BibRef
Smith, L.N.,
Hand, E.M.,
Doster, T.,
Gradual DropIn of Layers to Train Very Deep Neural Networks,
CVPR16(4763-4771)
IEEE DOI
1612
BibRef
Pasquet, J.,
Chaumont, M.,
Subsol, G.,
Derras, M.,
Speeding-up a convolutional neural network by connecting an SVM
network,
ICIP16(2286-2290)
IEEE DOI
1610
Computational efficiency
BibRef
Park, W.S.,
Kim, M.,
CNN-based in-loop filtering for coding efficiency improvement,
IVMSP16(1-5)
IEEE DOI
1608
Convolution
BibRef
Moons, B.[Bert],
de Brabandere, B.[Bert],
Van Gool, L.J.[Luc J.],
Verhelst, M.[Marian],
Energy-efficient ConvNets through approximate computing,
WACV16(1-8)
IEEE DOI
1606
Approximation algorithms
BibRef
Li, N.,
Takaki, S.,
Tomiokay, Y.,
Kitazawa, H.,
A multistage dataflow implementation of a Deep Convolutional Neural
Network based on FPGA for high-speed object recognition,
Southwest16(165-168)
IEEE DOI
1605
Acceleration
BibRef
Hsu, F.C.,
Gubbi, J.,
Palaniswami, M.,
Learning Efficiently- The Deep CNNs-Tree Network,
DICTA15(1-7)
IEEE DOI
1603
learning (artificial intelligence)
BibRef
Highlander, T.[Tyler],
Rodriguez, A.[Andres],
Very Efficient Training of Convolutional Neural Networks using Fast
Fourier Transform and Overlap-and-Add,
BMVC15(xx-yy).
DOI Link
1601
BibRef
Zou, X.Y.[Xiao-Yi],
Xu, X.M.[Xiang-Min],
Qing, C.M.[Chun-Mei],
Xing, X.F.[Xiao-Fen],
High speed deep networks based on Discrete Cosine Transformation,
ICIP14(5921-5925)
IEEE DOI
1502
Accuracy
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Pruning .