14.5.10.8.10 Neural Net Quantization

Chapter Contents
CNN. Efficient Implementation. Quantization.
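
As a point of reference for the entries that follow, the common primitive behind most of these papers is mapping floating-point weights or activations onto a small integer grid and back. The sketch below is a minimal, generic illustration of 8-bit affine quantization and dequantization of a weight tensor; it is not taken from any cited work, and the function names, per-tensor granularity, and bit-width are illustrative assumptions.

```python
# Minimal sketch (assumption: generic per-tensor affine quantization, not any
# specific cited method). Maps float weights to uint8 and back.
import numpy as np

def quantize_affine(w, num_bits=8):
    """Quantize a float tensor to unsigned integers with a per-tensor scale and zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min = min(float(w.min()), 0.0)   # keep zero exactly representable
    w_max = max(float(w.max()), 0.0)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0   # guard against all-zero tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Map the integers back to an approximation of the original floats."""
    return scale * (q.astype(np.float32) - zero_point)

if __name__ == "__main__":
    w = np.random.randn(64, 64).astype(np.float32)
    q, s, z = quantize_affine(w)
    w_hat = dequantize_affine(q, s, z)
    print("max abs quantization error:", float(np.abs(w - w_hat).max()))
```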

Dong, Y.P.[Yin-Peng], Ni, R.K.[Ren-Kun], Li, J.G.[Jian-Guo], Chen, Y.R.[Yu-Rong], Su, H.[Hang], Zhu, J.[Jun],
Stochastic Quantization for Learning Accurate Low-Bit Deep Neural Networks,
IJCV(127), No. 11-12, December 2019, pp. 1629-1642.
Springer DOI 1911
BibRef

Zhou, A.[Aojun], Yao, A.B.[An-Bang], Wang, K.[Kuan], Chen, Y.R.[Yu-Rong],
Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks,
CVPR18(9426-9435)
IEEE DOI 1812
Pattern recognition BibRef

Zhou, Z.G.[Zheng-Guang], Zhou, W.G.[Wen-Gang], Lv, X.T.[Xu-Tao], Huang, X.[Xuan], Wang, X.Y.[Xiao-Yu], Li, H.Q.[Hou-Qiang],
Progressive Learning of Low-Precision Networks for Image Classification,
MultMed(23), 2021, pp. 871-882.
IEEE DOI 2103
Quantization (signal), Training, Neural networks, Convolution, Acceleration, Task analysis, Complexity theory, image classification BibRef

Chu, T.S.[Tian-Shu], Luo, Q.[Qin], Yang, J.[Jie], Huang, X.L.[Xiao-Lin],
Mixed-precision quantized neural networks with progressively decreasing bitwidth,
PR(111), 2021, pp. 107647.
Elsevier DOI 2012
Model compression, Quantized neural networks, Mixed-precision BibRef

Zhuang, B.[Bohan], Tan, M.K.[Ming-Kui], Liu, J.[Jing], Liu, L.Q.[Ling-Qiao], Reid, I.D.[Ian D.], Shen, C.H.[Chun-Hua],
Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations,
PAMI(44), No. 10, October 2022, pp. 6140-6152.
IEEE DOI 2209
BibRef
Earlier: A1, A6, A2, A3, A5, Only:
Towards Effective Low-Bitwidth Convolutional Neural Networks,
CVPR18(7920-7928)
IEEE DOI 1812
Training, Quantization (signal), Neural networks, Stochastic processes, Numerical models, Knowledge engineering, image classification. Optimization, Hardware, Convolution BibRef

Liu, J.[Jing], Zhuang, B.[Bohan], Chen, P.[Peng], Shen, C.H.[Chun-Hua], Cai, J.F.[Jian-Fei], Tan, M.K.[Ming-Kui],
Single-Path Bit Sharing for Automatic Loss-Aware Model Compression,
PAMI(45), No. 10, October 2023, pp. 12459-12473.
IEEE DOI 2310
BibRef

Wu, R.[Ran], Liu, H.Y.[Huan-Yu], Li, J.B.[Jun-Bao],
Adaptive gradients and weight projection based on quantized neural networks for efficient image classification,
CVIU(223), 2022, pp. 103516.
Elsevier DOI 2210
Quantization, Deep projection, Adaptive gradients, High dimensional training space BibRef

Wang, P.S.[Pei-Song], Chen, W.H.[Wei-Han], He, X.Y.[Xiang-Yu], Chen, Q.[Qiang], Liu, Q.S.[Qing-Shan], Cheng, J.[Jian],
Optimization-Based Post-Training Quantization With Bit-Split and Stitching,
PAMI(45), No. 2, February 2023, pp. 2119-2135.
IEEE DOI 2301
Quantization (signal), Training, Tensors, Optimization, Network architecture, Degradation, Task analysis, post-training quantization BibRef

Li, Z.[Zefan], Ni, B.B.[Bing-Bing], Yang, X.K.[Xiao-Kang], Zhang, W.J.[Wen-Jun], Gao, W.[Wen],
Residual Quantization for Low Bit-Width Neural Networks,
MultMed(25), 2023, pp. 214-227.
IEEE DOI 2301
Quantization (signal), Training, Computational modeling, Neurons, Degradation, Task analysis, Optimization, Deep learning, network acceleration BibRef

Sharma, P.K.[Prasen Kumar], Abraham, A.[Arun], Rajendiran, V.N.[Vikram Nelvoy],
A Generalized Zero-Shot Quantization of Deep Convolutional Neural Networks Via Learned Weights Statistics,
MultMed(25), 2023, pp. 953-965.
IEEE DOI 2303
Quantization (signal), Training, Data models, Tensors, Calibration, Computational modeling, Convolutional neural networks, post-training quantization BibRef

Xu, S.K.[Shou-Kai], Zhang, S.H.[Shu-Hai], Liu, J.[Jing], Zhuang, B.[Bohan], Wang, Y.W.[Yao-Wei], Tan, M.K.[Ming-Kui],
Generative Data Free Model Quantization With Knowledge Matching for Classification,
CirSysVideo(33), No. 12, December 2023, pp. 7296-7309.
IEEE DOI Code:
WWW Link. 2312
BibRef

Kazemi, E.[Ehsan], Taherkhani, F.[Fariborz], Wang, L.Q.[Li-Qiang],
On complementing unsupervised learning with uncertainty quantification,
PRL(176), 2023, pp. 69-75.
Elsevier DOI 2312
Uncertainty quantification, Semi-supervised learning, Approximate Bayesian models, Confirmation bias BibRef

Sun, S.Z.[Shu-Zhou], Xu, H.[Huali], Li, Y.[Yan], Li, P.[Ping], Sheng, B.[Bin], Lin, X.[Xiao],
FastAL: Fast Evaluation Module for Efficient Dynamic Deep Active Learning Using Broad Learning System,
CirSysVideo(34), No. 2, February 2024, pp. 815-827.
IEEE DOI 2402
Data models, Training, Uncertainty, Learning systems, Frequency-domain analysis, Costs, Predictive models, deep learning BibRef


van den Dool, W.[Winfried], Blankevoort, T.[Tijmen], Welling, M.[Max], Asano, Y.M.[Yuki M.],
Efficient Neural PDE-Solvers using Quantization Aware Training,
REDLCV23(1415-1424)
IEEE DOI 2401
BibRef

Abati, D.[Davide], Ben Yahia, H.[Haitam], Nagel, M.[Markus], Habibian, A.[Amirhossein],
ResQ: Residual Quantization for Video Perception,
ICCV23(17073-17083)
IEEE DOI 2401
BibRef

Chauhan, A.[Arun], Tiwari, U.[Utsav], Vikram, N.R.,
Post Training Mixed Precision Quantization of Neural Networks using First-Order Information,
REDLCV23(1335-1344)
IEEE DOI 2401
BibRef

Pandey, N.P.[Nilesh Prasad], Fournarakis, M.[Marios], Patel, C.[Chirag], Nagel, M.[Markus],
Softmax Bias Correction for Quantized Generative Models,
REDLCV23(1445-1450)
IEEE DOI 2401
BibRef

Shang, Y.Z.[Yu-Zhang], Xu, B.X.[Bing-Xin], Liu, G.[Gaowen], Kompella, R.R.[Ramana Rao], Yan, Y.[Yan],
Causal-DFQ: Causality Guided Data-free Network Quantization,
ICCV23(17391-17400)
IEEE DOI 2401
BibRef

Xu, K.[Ke], Han, L.[Lei], Tian, Y.[Ye], Yang, S.S.[Shang-Shang], Zhang, X.Y.[Xing-Yi],
EQ-Net: Elastic Quantization Neural Networks,
ICCV23(1505-1514)
IEEE DOI Code:
WWW Link. 2401
BibRef

Wu, H.M.[Hui-Min], Lei, C.Y.[Chen-Yang], Sun, X.[Xiao], Wang, P.S.[Peng-Shuai], Chen, Q.F.[Qi-Feng], Cheng, K.T.[Kwang-Ting], Lin, S.[Stephen], Wu, Z.R.[Zhi-Rong],
Randomized Quantization: A Generic Augmentation for Data Agnostic Self-supervised Learning,
ICCV23(16259-16270)
IEEE DOI Code:
WWW Link. 2401
BibRef

Li, T.X.[Tian-Xiang], Chen, B.[Bin], Wang, Q.W.[Qian-Wei], Huang, Y.J.[Yu-Jun], Xia, S.T.[Shu-Tao],
LKBQ: Pushing the Limit of Post-Training Quantization to Extreme 1 bit,
ICIP23(1775-1779)
IEEE DOI 2312
BibRef

Yvinec, E.[Edouard], Dapogny, A.[Arnaud], Bailly, K.[Kevin],
Designing Strong Baselines for Ternary Neural Network Quantization through Support and Mass Equalization,
ICIP23(540-544)
IEEE DOI 2312
BibRef

Jeon, Y.[Yongkweon], Lee, C.[Chungman], Kim, H.Y.[Ho-Young],
Genie: Show Me the Data for Quantization,
CVPR23(12064-12073)
IEEE DOI 2309
BibRef

Liu, J.W.[Jia-Wei], Niu, L.[Lin], Yuan, Z.H.[Zhi-Hang], Yang, D.W.[Da-Wei], Wang, X.G.[Xing-Gang], Liu, W.Y.[Wen-Yu],
PD-Quant: Post-Training Quantization Based on Prediction Difference Metric,
CVPR23(24427-24437)
IEEE DOI 2309
BibRef

Noh, H.C.[Hae-Chan], Hyun, S.[Sangeek], Jeong, W.[Woojin], Lim, H.S.[Han-Shin], Heo, J.P.[Jae-Pil],
Disentangled Representation Learning for Unsupervised Neural Quantization,
CVPR23(12001-12010)
IEEE DOI 2309
BibRef

Eliezer, N.S.[Nurit Spingarn], Banner, R.[Ron], Ben-Yaakov, H.[Hilla], Hoffer, E.[Elad], Michaeli, T.[Tomer],
Power Awareness in Low Precision Neural Networks,
CADK22(67-83).
Springer DOI 2304
BibRef

Finkelstein, A.[Alex], Fuchs, E.[Ella], Tal, I.[Idan], Grobman, M.[Mark], Vosco, N.[Niv], Meller, E.[Eldad],
QFT: Post-training Quantization via Fast Joint Finetuning of All Degrees of Freedom,
CADK22(115-129).
Springer DOI 2304
BibRef

Ben-Moshe, L.[Lior], Benaim, S.[Sagie], Wolf, L.B.[Lior B.],
FewGAN: Generating from the Joint Distribution of a Few Images,
ICIP22(751-755)
IEEE DOI 2211
Training, Quantization (signal), Image coding, Semantics, Task analysis, GANs, Few-Shot learning, Quantization BibRef

Tonin, M.[Marcos], de Queiroz, R.L.[Ricardo L.],
On Quantization of Image Classification Neural Networks for Compression Without Retraining,
ICIP22(916-920)
IEEE DOI 2211
Quantization (signal), Image coding, Laplace equations, Transform coding, Artificial neural networks, Entropy, Standards, ONNX file compression BibRef

Cao, Y.H.[Yun-Hao], Sun, P.Q.[Pei-Qin], Huang, Y.C.[Ye-Chang], Wu, J.X.[Jian-Xin], Zhou, S.C.[Shu-Chang],
Synergistic Self-supervised and Quantization Learning,
ECCV22(XXX:587-604).
Springer DOI 2211
BibRef

Zhu, Y.[Ye], Olszewski, K.[Kyle], Wu, Y.[Yu], Achlioptas, P.[Panos], Chai, M.L.[Meng-Lei], Yan, Y.[Yan], Tulyakov, S.[Sergey],
Quantized GAN for Complex Music Generation from Dance Videos,
ECCV22(XXXVII:182-199).
Springer DOI 2211
BibRef

Oh, S.[Sangyun], Sim, H.[Hyeonuk], Kim, J.[Jounghyun], Lee, J.[Jongeun],
Non-uniform Step Size Quantization for Accurate Post-training Quantization,
ECCV22(XI:658-673).
Springer DOI 2211
BibRef

Chikin, V.[Vladimir], Solodskikh, K.[Kirill], Zhelavskaya, I.[Irina],
Explicit Model Size Control and Relaxation via Smooth Regularization for Mixed-Precision Quantization,
ECCV22(XII:1-16).
Springer DOI 2211
BibRef

Kim, H.B.[Han-Byul], Park, E.[Eunhyeok], Yoo, S.[Sungjoo],
BASQ: Branch-wise Activation-clipping Search Quantization for Sub-4-bit Neural Networks,
ECCV22(XII:17-33).
Springer DOI 2211
BibRef

Youn, J.[Jiseok], Song, J.H.[Jae-Hun], Kim, H.S.[Hyung-Sin], Bahk, S.[Saewoong],
Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach,
ECCV22(XII:208-224).
Springer DOI 2211
BibRef

Tang, C.[Chen], Ouyang, K.[Kai], Wang, Z.[Zhi], Zhu, Y.F.[Yi-Fei], Ji, W.[Wen], Wang, Y.W.[Yao-Wei], Zhu, W.W.[Wen-Wu],
Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance,
ECCV22(XI:259-275).
Springer DOI 2211
BibRef

Solodskikh, K.[Kirill], Chikin, V.[Vladimir], Aydarkhanov, R.[Ruslan], Song, D.H.[De-Hua], Zhelavskaya, I.[Irina], Wei, J.S.[Jian-Sheng],
Towards Accurate Network Quantization with Equivalent Smooth Regularizer,
ECCV22(XI:727-742).
Springer DOI 2211
BibRef

Jin, G.J.[Gao-Jie], Yi, X.P.[Xin-Ping], Huang, W.[Wei], Schewe, S.[Sven], Huang, X.W.[Xiao-Wei],
Enhancing Adversarial Training with Second-Order Statistics of Weights,
CVPR22(15252-15262)
IEEE DOI 2210
Training, Deep learning, Correlation, Perturbation methods, Neural networks, Estimation, Robustness, Optimization methods BibRef

Zhu, X.S.[Xiao-Su], Song, J.K.[Jing-Kuan], Gao, L.L.[Lian-Li], Zheng, F.[Feng], Shen, H.T.[Heng Tao],
Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression,
CVPR22(17591-17600)
IEEE DOI 2210
Visualization, Image coding, Codes, Vector quantization, Redundancy, Rate-distortion, Rate distortion theory, Low-level vision, Representation learning BibRef

Zhong, Y.S.[Yun-Shan], Lin, M.B.[Ming-Bao], Nan, G.R.[Gong-Rui], Liu, J.Z.[Jian-Zhuang], Zhang, B.C.[Bao-Chang], Tian, Y.H.[Yong-Hong], Ji, R.R.[Rong-Rong],
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization,
CVPR22(12329-12338)
IEEE DOI 2210
Technological innovation, Quantization (signal), Codes, Computational modeling, Neural networks, Pattern recognition, Efficient learning and inferences BibRef

Liu, Z.H.[Zhen-Hua], Wang, Y.H.[Yun-He], Han, K.[Kai], Ma, S.W.[Si-Wei], Gao, W.[Wen],
Instance-Aware Dynamic Neural Network Quantization,
CVPR22(12424-12433)
IEEE DOI 2210
Deep learning, Quantization (signal), Image recognition, Costs, Neural networks, Termination of employment, Network architecture, Deep learning architectures and techniques BibRef

Chikin, V.[Vladimir], Kryzhanovskiy, V.[Vladimir],
Channel Balancing for Accurate Quantization of Winograd Convolutions,
CVPR22(12497-12506)
IEEE DOI 2210
Training, Deep learning, Quantization (signal), Tensors, Convolution, Optimization methods, Filtering algorithms BibRef

Pandey, D.S.[Deep Shankar], Yu, Q.[Qi],
Multidimensional Belief Quantification for Label-Efficient Meta-Learning,
CVPR22(14371-14380)
IEEE DOI 2210
Training, Uncertainty, Computational modeling, Measurement uncertainty, Predictive models, Pattern recognition, Self- semi- meta- unsupervised learning BibRef

Liu, H.Y.[Hong-Yang], Elkerdawy, S.[Sara], Ray, N.[Nilanjan], Elhoushi, M.[Mostafa],
Layer Importance Estimation with Imprinting for Neural Network Quantization,
MAI21(2408-2417)
IEEE DOI 2109
Training, Quantization (signal), Search methods, Neural networks, Estimation, Reinforcement learning, Pattern recognition BibRef

Yun, S.[Stone], Wong, A.[Alexander],
Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics,
MAI21(2447-2456)
IEEE DOI 2109
Degradation, Training, Quantization (signal), Systematics, Fluctuations, Dynamic range, Robustness BibRef

Fournarakis, M.[Marios], Nagel, M.[Markus],
In-Hindsight Quantization Range Estimation for Quantized Training,
ECV21(3057-3064)
IEEE DOI 2109
Training, Quantization (signal), Tensors, Neural networks, Estimation, Dynamic range, Benchmark testing BibRef

Yu, H.C.[Hai-Chao], Yang, L.J.[Lin-Jie], Shi, H.[Humphrey],
Is In-Domain Data Really Needed? A Pilot Study on Cross-Domain Calibration for Network Quantization,
ECV21(3037-3046)
IEEE DOI 2109
Knowledge engineering, Training, Quantization (signal), Ultrasonic imaging, Sensitivity, Satellites, Calibration BibRef

Langroudi, H.F.[Hamed F.], Karia, V.[Vedant], Carmichael, Z.[Zachariah], Zyarah, A.[Abdullah], Pandit, T.[Tej], Gustafson, J.L.[John L.], Kudithipudi, D.[Dhireesha],
Alps: Adaptive Quantization of Deep Neural Networks with GeneraLized PositS,
ECV21(3094-3103)
IEEE DOI 2109
Deep learning, Quantization (signal), Adaptive systems, Upper bound, Numerical analysis, Heuristic algorithms, Pattern recognition BibRef

Abdolrashidi, A.[AmirAli], Wang, L.[Lisa], Agrawal, S.[Shivani], Malmaud, J.[Jonathan], Rybakov, O.[Oleg], Leichner, C.[Chas], Lew, L.[Lukasz],
Pareto-Optimal Quantized ResNet Is Mostly 4-bit,
ECV21(3085-3093)
IEEE DOI 2109
Training, Analytical models, Quantization (signal), Computational modeling, Neural networks, Libraries, Hardware BibRef

Trusov, A.[Anton], Limonova, E.[Elena], Slugin, D.[Dmitry], Nikolaev, D.[Dmitry], Arlazarov, V.V.[Vladimir V.],
Fast Implementation of 4-bit Convolutional Neural Networks for Mobile Devices,
ICPR21(9897-9903)
IEEE DOI 2105
Performance evaluation, Quantization (signal), Neural networks, Time measurement, Real-time systems, convolutional neural networks BibRef

Hacene, G.B.[Ghouthi Boukli], Lassance, C.[Carlos], Gripon, V.[Vincent], Courbariaux, M.[Matthieu], Bengio, Y.[Yoshua],
Attention Based Pruning for Shift Networks,
ICPR21(4054-4061)
IEEE DOI 2105
Deep learning, Training, Quantization (signal), Convolution, Transforms, Complexity theory BibRef

Hou, Z.[Zejiang], Kung, S.Y.[Sun-Yuan],
A Discriminant Information Approach to Deep Neural Network Pruning,
ICPR21(9553-9560)
IEEE DOI 2105
Quantization (signal), Power measurement, Image coding, Neural networks, Tools, Benchmark testing, Pattern recognition BibRef

Marinó, G.C.[Giosuè Cataldo], Ghidoli, G.[Gregorio], Frasca, M.[Marco], Malchiodi, D.[Dario],
Compression strategies and space-conscious representations for deep neural networks,
ICPR21(9835-9842)
IEEE DOI 2105
Quantization (signal), Source coding, Computational modeling, Neural networks, Random access memory, Probabilistic logic, drug-target prediction BibRef

Yuan, Y.[Yong], Chen, C.[Chen], Hu, X.[Xiyuan], Peng, S.[Silong],
Towards Low-Bit Quantization of Deep Neural Networks with Limited Data,
ICPR21(4377-4384)
IEEE DOI 2105
Training, Quantization (signal), Sensitivity, Neural networks, Object detection, Data models, Complexity theory BibRef

Dbouk, H.[Hassan], Sanghvi, H.[Hetul], Mehendale, M.[Mahesh], Shanbhag, N.[Naresh],
DBQ: A Differentiable Branch Quantizer for Lightweight Deep Neural Networks,
ECCV20(XXVII:90-106).
Springer DOI 2011
BibRef

do Nascimento, M.G.[Marcelo Gennari], Costain, T.W.[Theo W.], Prisacariu, V.A.[Victor Adrian],
Finding Non-uniform Quantization Schemes Using Multi-task Gaussian Processes,
ECCV20(XVII:383-398).
Springer DOI 2011
BibRef

Neumann, D., Sattler, F., Kirchhoffer, H., Wiedemann, S., Müller, K., Schwarz, H., Wiegand, T., Marpe, D., Samek, W.,
DeepCABAC: Plug & Play Compression of Neural Network Weights and Weight Updates,
ICIP20(21-25)
IEEE DOI 2011
Artificial neural networks, Quantization (signal), Image coding, Training, Servers, Compression algorithms, Neural Networks, Distributed Training BibRef

Haase, P., Schwarz, H., Kirchhoffer, H., Wiedemann, S., Marinc, T., Marban, A., Müller, K., Samek, W., Marpe, D., Wiegand, T.,
Dependent Scalar Quantization For Neural Network Compression,
ICIP20(36-40)
IEEE DOI 2011
Quantization (signal), Indexes, Neural networks, Context modeling, Entropy coding, Image reconstruction, neural network compression BibRef

Kwon, S.J., Lee, D., Kim, B., Kapoor, P., Park, B., Wei, G.,
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization,
CVPR20(1906-1915)
IEEE DOI 2008
Sparse matrices, Decoding, Quantization (signal), Viterbi algorithm, Bandwidth, Encryption BibRef

Jung, J., Kim, J., Kim, Y., Kim, C.,
Reinforcement Learning-Based Layer-Wise Quantization For Lightweight Deep Neural Networks,
ICIP20(3070-3074)
IEEE DOI 2011
Quantization (signal), Neural networks, Learning (artificial intelligence), Computational modeling, Embedded system BibRef

Geng, X., Lin, J., Li, S.,
Cascaded Mixed-Precision Networks,
ICIP20(241-245)
IEEE DOI 2011
Neural networks, Quantization (signal), Training, Network architecture, Optimization, Image coding, Schedules, Pruning BibRef

Fang, J.[Jun], Shafiee, A.[Ali], Abdel-Aziz, H.[Hamzah], Thorsley, D.[David], Georgiadis, G.[Georgios], Hassoun, J.H.[Joseph H.],
Post-training Piecewise Linear Quantization for Deep Neural Networks,
ECCV20(II:69-86).
Springer DOI 2011
BibRef

Xie, Z.[Zheng], Wen, Z.Q.[Zhi-Quan], Liu, J.[Jing], Liu, Z.Q.[Zhi-Qiang], Wu, X.X.[Xi-Xian], Tan, M.K.[Ming-Kui],
Deep Transferring Quantization,
ECCV20(VIII:625-642).
Springer DOI 2011
BibRef

Wang, Y.[Ying], Lu, Y.D.[Ya-Dong], Blankevoort, T.[Tijmen],
Differentiable Joint Pruning and Quantization for Hardware Efficiency,
ECCV20(XXIX:259-277).
Springer DOI 2010
BibRef

Cai, Y.H.[Yao-Hui], Yao, Z.W.[Zhe-Wei], Dong, Z.[Zhen], Gholami, A.[Amir], Mahoney, M.W.[Michael W.], Keutzer, K.[Kurt],
ZeroQ: A Novel Zero Shot Quantization Framework,
CVPR20(13166-13175)
IEEE DOI 2008
Quantization (signal), Training, Computational modeling, Sensitivity, Artificial neural networks, Task analysis, Training data BibRef

Qu, Z., Zhou, Z., Cheng, Y., Thiele, L.,
Adaptive Loss-Aware Quantization for Multi-Bit Networks,
CVPR20(7985-7994)
IEEE DOI 2008
Quantization (signal), Optimization, Neural networks, Adaptive systems, Microprocessors, Training, Tensile stress BibRef

Jin, Q., Yang, L., Liao, Z.,
AdaBits: Neural Network Quantization With Adaptive Bit-Widths,
CVPR20(2143-2153)
IEEE DOI 2008
Adaptation models, Quantization (signal), Training, Neural networks, Biological system modeling, Adaptive systems BibRef

Zhu, F.[Feng], Gong, R.H.[Rui-Hao], Yu, F.W.[Feng-Wei], Liu, X.L.[Xiang-Long], Wang, Y.F.[Yan-Fei], Li, Z.L.[Zhe-Long], Yang, X.Q.[Xiu-Qi], Yan, J.J.[Jun-Jie],
Towards Unified INT8 Training for Convolutional Neural Network,
CVPR20(1966-1976)
IEEE DOI 2008
Training, Quantization (signal), Convergence, Acceleration, Computer crashes, Optimization, Task analysis BibRef

Zhuang, B., Liu, L., Tan, M., Shen, C., Reid, I.D.,
Training Quantized Neural Networks With a Full-Precision Auxiliary Module,
CVPR20(1485-1494)
IEEE DOI 2008
Training, Quantization (signal), Object detection, Detectors, Computational modeling, Task analysis, Neural networks BibRef

Yu, H., Wen, T., Cheng, G., Sun, J., Han, Q., Shi, J.,
Low-bit Quantization Needs Good Distribution,
EDLCV20(2909-2918)
IEEE DOI 2008
Quantization (signal), Training, Task analysis, Pipelines, Adaptation models, Computational modeling, Neural networks BibRef

Bhalgat, Y., Lee, J., Nagel, M., Blankevoort, T., Kwak, N.,
LSQ+: Improving low-bit quantization through learnable offsets and better initialization,
EDLCV20(2978-2985)
IEEE DOI 2008
Quantization (signal), Training, Clamps, Neural networks, Artificial intelligence, Minimization BibRef
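
Several of the surrounding entries, including the one above, concern low-bit quantization-aware training with learnable quantizer parameters. As a generic illustration only (not the cited authors' code; the class name, bit-width, and initial values are assumptions), the sketch below shows a fake-quantization module with a learnable step size and offset, trained through a straight-through estimator for the rounding operation.

```python
# Minimal sketch (assumption: generic learnable-step fake quantization with a
# straight-through estimator; not the implementation of any cited paper).
import torch
import torch.nn as nn

class LearnableFakeQuant(nn.Module):
    def __init__(self, num_bits=4, init_step=0.05, init_offset=0.0):
        super().__init__()
        self.qmin, self.qmax = 0, 2 ** num_bits - 1
        self.step = nn.Parameter(torch.tensor(float(init_step)))      # learnable scale
        self.offset = nn.Parameter(torch.tensor(float(init_offset)))  # learnable offset

    def forward(self, x):
        # Map to the integer grid, round, then map back to floats ("fake" quantization).
        q = torch.clamp((x - self.offset) / self.step, self.qmin, self.qmax)
        q_rounded = torch.round(q)
        # Straight-through estimator: the forward pass uses the rounded value,
        # the backward pass treats rounding as the identity.
        q_ste = q + (q_rounded - q).detach()
        return q_ste * self.step + self.offset

if __name__ == "__main__":
    fq = LearnableFakeQuant(num_bits=4)
    x = torch.randn(8, 16, requires_grad=True)
    fq(x).sum().backward()
    print(x.grad.shape, fq.step.grad is not None, fq.offset.grad is not None)
```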

Pouransari, H., Tu, Z., Tuzel, O.,
Least squares binary quantization of neural networks,
EDLCV20(2986-2996)
IEEE DOI 2008
Quantization (signal), Computational modeling, Optimization, Tensile stress, Neural networks, Computational efficiency, Approximation algorithms BibRef

Gope, D., Beu, J., Thakker, U., Mattina, M.,
Ternary MobileNets via Per-Layer Hybrid Filter Banks,
EDLCV20(3036-3046)
IEEE DOI 2008
Convolution, Quantization (signal), Neural networks, Throughput, Hardware, Computational modeling BibRef

Wang, T., Wang, K., Cai, H., Lin, J., Liu, Z., Wang, H., Lin, Y., Han, S.,
APQ: Joint Search for Network Architecture, Pruning and Quantization Policy,
CVPR20(2075-2084)
IEEE DOI 2008
Quantization (signal), Optimization, Training, Hardware, Pipelines, Biological system modeling, Computer architecture BibRef

Yu, H.B.[Hai-Bao], Han, Q.[Qi], Li, J.B.[Jian-Bo], Shi, J.P.[Jian-Ping], Cheng, G.L.[Guang-Liang], Fan, B.[Bin],
Search What You Want: Barrier Penalty NAS for Mixed Precision Quantization,
ECCV20(IX:1-16).
Springer DOI 2011
BibRef

Marban, A.[Arturo], Becking, D.[Daniel], Wiedemann, S.[Simon], Samek, W.[Wojciech],
Learning Sparse Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T),
EDLCV20(3105-3113)
IEEE DOI 2008
Neural networks, Quantization (signal), Mathematical model, Computational modeling, Compounds, Entropy, Histograms BibRef

Langroudi, H.F.[Hamed F.], Karia, V.[Vedant], Gustafson, J.L.[John L.], Kudithipudi, D.[Dhireesha],
Adaptive Posit: Parameter aware numerical format for deep learning inference on the edge,
EDLCV20(3123-3131)
IEEE DOI 2008
Dynamic range, Neural networks, Quantization (signal), Computational modeling, Machine learning, Adaptation models, Numerical models BibRef

Mordido, G., van Keirsbilck, M., Keller, A.,
Monte Carlo Gradient Quantization,
EDLCV20(3087-3095)
IEEE DOI 2008
Training, Quantization (signal), Monte Carlo methods, Convergence, Neural networks, Heuristic algorithms, Image coding BibRef

Wiedemann, S., Mehari, T., Kepp, K., Samek, W.,
Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training,
EDLCV20(3096-3104)
IEEE DOI 2008
Quantization (signal), Training, Mathematical model, Standards, Neural networks, Convergence, Computational efficiency BibRef

Jiang, W., Wang, W., Liu, S.,
Structured Weight Unification and Encoding for Neural Network Compression and Acceleration,
EDLCV20(3068-3076)
IEEE DOI 2008
Quantization (signal), Computational modeling, Encoding, Image coding, Training, Acceleration, Predictive models BibRef

Yang, H., Gui, S., Zhu, Y., Liu, J.,
Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach,
CVPR20(2175-2185)
IEEE DOI 2008
Quantization (signal), Optimization, Computational modeling, Tensile stress, Search problems, Neural networks, Image coding BibRef

Dong, Z., Yao, Z., Gholami, A., Mahoney, M., Keutzer, K.,
HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision,
ICCV19(293-302)
IEEE DOI 2004
image resolution, neural nets, quantisation (signal), neural networks, mixed-precision quantization, deep networks BibRef

Yang, J.[Jiwei], Shen, X.[Xu], Xing, J.[Jun], Tian, X.M.[Xin-Mei], Li, H.Q.[Hou-Qiang], Deng, B.[Bing], Huang, J.Q.[Jian-Qiang], Hua, X.S.[Xian-Sheng],
Quantization Networks,
CVPR19(7300-7308).
IEEE DOI 2002
BibRef

Cao, S.J.[Shi-Jie], Ma, L.X.[Ling-Xiao], Xiao, W.C.[Wen-Cong], Zhang, C.[Chen], Liu, Y.X.[Yun-Xin], Zhang, L.T.[Lin-Tao], Nie, L.S.[Lan-Shun], Yang, Z.[Zhi],
SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization,
CVPR19(11208-11217).
IEEE DOI 2002
BibRef

Jung, S.[Sangil], Son, C.Y.[Chang-Yong], Lee, S.[Seohyung], Son, J.[Jinwoo], Han, J.J.[Jae-Joon], Kwak, Y.[Youngjun], Hwang, S.J.[Sung Ju], Choi, C.K.[Chang-Kyu],
Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss,
CVPR19(4345-4354).
IEEE DOI 2002
BibRef

Mitschke, N., Heizmann, M., Noffz, K., Wittmann, R.,
A Fixed-Point Quantization Technique for Convolutional Neural Networks Based on Weight Scaling,
ICIP19(3836-3840)
IEEE DOI 1910
CNNs, Fixed Point Quantization, Image Processing, Machine Vision, Deep Learning BibRef

Ajanthan, T., Dokania, P., Hartley, R., Torr, P.H.S.,
Proximal Mean-Field for Neural Network Quantization,
ICCV19(4870-4879)
IEEE DOI 2004
computational complexity, gradient methods, neural nets, optimisation, stochastic processes, proximal mean-field, Labeling BibRef

Gong, R., Liu, X., Jiang, S., Li, T., Hu, P., Lin, J., Yu, F., Yan, J.,
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks,
ICCV19(4851-4860)
IEEE DOI 2004
backpropagation, convolutional neural nets, data compression, image coding, learning (artificial intelligence), Backpropagation BibRef

Choukroun, Y., Kravchik, E., Yang, F., Kisilev, P.,
Low-bit Quantization of Neural Networks for Efficient Inference,
CEFRL19(3009-3018)
IEEE DOI 2004
inference mechanisms, learning (artificial intelligence), mean square error methods, neural nets, quantisation (signal), MMSE BibRef

Hu, Y., Li, J., Long, X., Hu, S., Zhu, J., Wang, X., Gu, Q.,
Cluster Regularized Quantization for Deep Networks Compression,
ICIP19(914-918)
IEEE DOI 1910
deep neural networks, object classification, model compression, quantization BibRef

Manessi, F., Rozza, A., Bianco, S., Napoletano, P., Schettini, R.,
Automated Pruning for Deep Neural Network Compression,
ICPR18(657-664)
IEEE DOI 1812
Training, Neural networks, Quantization (signal), Task analysis, Feature extraction, Pipelines, Image coding BibRef

Faraone, J., Fraser, N., Blott, M., Leong, P.H.W.,
SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks,
CVPR18(4300-4309)
IEEE DOI 1812
Quantization (signal), Hardware, Symmetric matrices, Training, Complexity theory, Neural networks, Field programmable gate arrays BibRef

Frickenstein, A., Unger, C., Stechele, W.,
Resource-Aware Optimization of DNNs for Embedded Applications,
CRV19(17-24)
IEEE DOI 1908
Optimization, Hardware, Computational modeling, Quantization (signal), Training, Sensitivity, Autonomous vehicles, CNN BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Interpretation, Explanation, Understanding of Convolutional Neural Networks.


Last update: Mar 16, 2024 at 20:36:19