13.6.3.1 Knowledge Distillation

Chapter Contents
Knowledge Distillation. Knowledge. Distillation. Knowledge-Based Vision.
See also Transfer Learning from Other Classes.

Chen, G.Z.[Guan-Zhou], Zhang, X.D.[Xiao-Dong], Tan, X.L.[Xiao-Liang], Cheng, Y.F.[Yu-Feng], Dai, F.[Fan], Zhu, K.[Kun], Gong, Y.F.[Yuan-Fu], Wang, Q.[Qing],
Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation,
RS(10), No. 5, 2018, pp. xx-yy.
DOI Link 1806
BibRef

Wu, X.[Xiang], He, R.[Ran], Hu, Y.[Yibo], Sun, Z.N.[Zhe-Nan],
Learning an Evolutionary Embedding via Massive Knowledge Distillation,
IJCV(128), No. 8-9, September 2020, pp. 2089-2106.
Springer DOI 2008
Transferring knowledge from a large, powerful teacher network to a small, compact student network. BibRef
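As background for the teacher-student entries collected in this section, the following is a minimal sketch of the standard soft-target distillation loss in the style of Hinton et al.; it is a generic illustration only, not the method of this or any other specific entry, and the function and parameter names (distillation_loss, temperature, alpha) are assumptions made for the example.

    # Minimal sketch of soft-target knowledge distillation
    # (generic illustration; not the method of any specific entry listed here).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.5):
        """Weighted sum of a soft-target KL term and the usual cross-entropy."""
        # Soften both output distributions with the temperature T.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
        # KL divergence between the softened predictions, scaled by T^2 so its
        # gradient magnitude matches the hard-label term.
        kd = F.kl_div(log_soft_student, soft_teacher,
                      reduction='batchmean') * temperature ** 2
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Usage: the teacher is frozen, and only the student is optimized.
    # with torch.no_grad():
    #     teacher_logits = teacher(images)
    # loss = distillation_loss(student(images), teacher_logits, labels)

In most of the works below the teacher is kept fixed and the student is trained on such a combined objective; feature-level and relation-level variants replace the KL term with losses on intermediate representations.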

Zaras, A.[Adamantios], Passalis, N.[Nikolaos], Tefas, A.[Anastasios],
Improving knowledge distillation using unified ensembles of specialized teachers,
PRL(146), 2021, pp. 215-221.
Elsevier DOI 2105
68T99, Knowledge distillation, Knowledge transfer, Specialized teachers, Unified ensemble, Unified specialized teachers ensemble BibRef

Bae, J.H.[Ji-Hoon], Yeo, D.[Doyeob], Yim, J.[Junho], Kim, N.S.[Nae-Soo], Pyo, C.S.[Cheol-Sig], Kim, J.[Junmo],
Densely Distilled Flow-Based Knowledge Transfer in Teacher-Student Framework for Image Classification,
IP(29), 2020, pp. 5698-5710.
IEEE DOI 2005
BibRef
Earlier: A2, A1, A5, A3, A4, A6:
Sequential Knowledge Transfer in Teacher-Student Framework Using Densely Distilled Flow-Based Information,
ICIP18(674-678)
IEEE DOI 1809
Knowledge transfer, Training, Computational modeling, Data mining, Optimization, Image classification, Computer architecture, residual network, Reliability. BibRef

Mazumder, P.[Pratik], Singh, P.[Pravendra], Namboodiri, V.P.[Vinay P.],
GIFSL: Grafting based improved few-shot learning,
IVC(104), 2020, pp. 104006.
Elsevier DOI 2012
Few-shot learning, Grafting, Self-supervision, Distillation, Deep learning, Object recognition BibRef

Li, X.W.[Xue-Wei], Li, S.Y.[Song-Yuan], Omar, B.[Bourahla], Wu, F.[Fei], Li, X.[Xi],
ResKD: Residual-Guided Knowledge Distillation,
IP(30), 2021, pp. 4735-4746.
IEEE DOI 2105
BibRef

Nguyen-Meidine, L.T.[Le Thanh], Belal, A.[Atif], Kiran, M.[Madhu], Dolz, J.[Jose], Blais-Morin, L.A.[Louis-Antoine], Granger, E.[Eric],
Knowledge distillation methods for efficient unsupervised adaptation across multiple domains,
IVC(108), 2021, pp. 104096.
Elsevier DOI 2104
BibRef
And:
Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation,
WACV21(1338-1346)
IEEE DOI 2106
Deep learning, Convolutional NNs, Knowledge distillation, Unsupervised domain adaptation, CNN acceleration and compression. Adaptation models, Computational modeling, Benchmark testing, Real-time systems BibRef

Zhang, H.R.[Hao-Ran], Hu, Z.Z.[Zhen-Zhen], Qin, W.[Wei], Xu, M.L.[Ming-Liang], Wang, M.[Meng],
Adversarial co-distillation learning for image recognition,
PR(111), 2021, pp. 107659.
Elsevier DOI 2012
Knowledge distillation, Data augmentation, Generative adversarial nets, Divergent examples, Image classification BibRef

Gou, J.P.[Jian-Ping], Yu, B.S.[Bao-Sheng], Maybank, S.J.[Stephen J.], Tao, D.C.[Da-Cheng],
Knowledge Distillation: A Survey,
IJCV(129), No. 6, June 2021, pp. 1789-1819.
Springer DOI 2106
Survey, Knowledge Distillation. BibRef

Deng, Y.J.[Yong-Jian], Chen, H.[Hao], Chen, H.Y.[Hui-Ying], Li, Y.F.[You-Fu],
Learning From Images: A Distillation Learning Framework for Event Cameras,
IP(30), 2021, pp. 4919-4931.
IEEE DOI 2106
Task analysis, Feature extraction, Cameras, Data models, Streaming media, Trajectory, Power demand, Event-based vision, optical flow prediction BibRef

Liu, Y.[Yang], Wang, K.[Keze], Li, G.B.[Guan-Bin], Lin, L.[Liang],
Semantics-Aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition,
IP(30), 2021, pp. 5573-5588.
IEEE DOI 2106
Videos, Knowledge engineering, Wearable sensors, Adaptation models, Sensors, Semantics, Image synthesis, Action recognition, transfer learning BibRef

Feng, Z.X.[Zhan-Xiang], Lai, J.H.[Jian-Huang], Xie, X.H.[Xiao-Hua],
Resolution-Aware Knowledge Distillation for Efficient Inference,
IP(30), 2021, pp. 6985-6996.
IEEE DOI 2108
Knowledge engineering, Feature extraction, Image resolution, Computational modeling, Computational complexity, Image coding, adversarial learning BibRef

Liu, Y.[Yuyang], Cong, Y.[Yang], Sun, G.[Gan], Zhang, T.[Tao], Dong, J.[Jiahua], Liu, H.[Hongsen],
L3DOC: Lifelong 3D Object Classification,
IP(30), 2021, pp. 7486-7498.
IEEE DOI 2109
Task analysis, Solid modeling, Data models, Knowledge engineering, Shape, Robots, task-relevant knowledge distillation BibRef

Bhardwaj, A.[Ayush], Pimpale, S.[Sakshee], Kumar, S.[Saurabh], Banerjee, B.[Biplab],
Empowering Knowledge Distillation via Open Set Recognition for Robust 3D Point Cloud Classification,
PRL(151), 2021, pp. 172-179.
Elsevier DOI 2110
Knowledge Distillation, Open Set Recognition, 3D Object Recognition, Point Cloud Classification BibRef

Shao, B.[Baitan], Chen, Y.[Ying],
Multi-granularity for knowledge distillation,
IVC(115), 2021, pp. 104286.
Elsevier DOI 2110
Knowledge distillation, Model compression, Multi-granularity distillation mechanism, Stable excitation scheme BibRef


Haselhoff, A.[Anselm], Kronenberger, J.[Jan], Küppers, F.[Fabian], Schneider, J.[Jonas],
Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation,
SAIAD21(21-28)
IEEE DOI 2109
Visualization, Shape, Semantics, Training data, Object detection, Predictive models, Linear programming BibRef

Yang, L.[Lehan], Xu, K.[Kele],
Cross Modality Knowledge Distillation for Multi-modal Aerial View Object Classification,
NTIRE21(382-387)
IEEE DOI 2109
Training, Speckle, Feature extraction, Radar polarimetry, Data models, Robustness, Pattern recognition BibRef

Bhat, P.[Prashant], Arani, E.[Elahe], Zonooz, B.[Bahram],
Distill on the Go: Online knowledge distillation in self-supervised learning,
LLID21(2672-2681)
IEEE DOI 2109
Annotations, Computer architecture, Performance gain, Benchmark testing, Pattern recognition BibRef

Okuno, T.[Tomoyuki], Nakata, Y.[Yohei], Ishii, Y.[Yasunori], Tsukizawa, S.[Sotaro],
Lossless AI: Toward Guaranteeing Consistency between Inferences Before and After Quantization via Knowledge Distillation,
MVA21(1-5)
DOI Link 2109
Training, Quality assurance, Quantization (signal), Object detection, Network architecture, Real-time systems BibRef

Nayak, G.K.[Gaurav Kumar], Mopuri, K.R.[Konda Reddy], Chakraborty, A.[Anirban],
Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation,
WACV21(1429-1437)
IEEE DOI 2106
Training, Visualization, Sensitivity, Computational modeling, Semantics, Neural networks, Training data BibRef

Lee, J.[Jongmin], Jeong, Y.[Yoonwoo], Kim, S.[Seungwook], Min, J.[Juhong], Cho, M.[Minsu],
Learning to Distill Convolutional Features into Compact Local Descriptors,
WACV21(897-907)
IEEE DOI 2106
Location awareness, Visualization, Image matching, Semantics, Benchmark testing, Feature extraction, Robustness BibRef

Arani, E.[Elahe], Sarfraz, F.[Fahad], Zonooz, B.[Bahram],
Noise as a Resource for Learning in Knowledge Distillation,
WACV21(3128-3137)
IEEE DOI 2106
Training, Uncertainty, Neuroscience, Collaboration, Collaborative work, Brain modeling, Probabilistic logic BibRef

Chawla, A.[Akshay], Yin, H.X.[Hong-Xu], Molchanov, P.[Pavlo], Alvarez, J.[Jose],
Data-free Knowledge Distillation for Object Detection,
WACV21(3288-3297)
IEEE DOI 2106
Knowledge engineering, Training, Image synthesis, Neural networks, Object detection BibRef

Kothandaraman, D.[Divya], Nambiar, A.[Athira], Mittal, A.[Anurag],
Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation,
WACVW21(134-143) Autonomous Vehicle Vision
IEEE DOI 2105
Knowledge engineering, Adaptation models, Image segmentation, Semantics, Memory management BibRef

Kushawaha, R.K.[Ravi Kumar], Kumar, S.[Saurabh], Banerjee, B.[Biplab], Velmurugan, R.[Rajbabu],
Distilling Spikes: Knowledge Distillation in Spiking Neural Networks,
ICPR21(4536-4543)
IEEE DOI 2105
Knowledge engineering, Training, Image coding, Computational modeling, Artificial neural networks, Hardware BibRef

Sarfraz, F.[Fahad], Arani, E.[Elahe], Zonooz, B.[Bahram],
Knowledge Distillation Beyond Model Compression,
ICPR21(6136-6143)
IEEE DOI 2105
Training, Knowledge engineering, Neural networks, Network architecture, Collaborative work, Robustness BibRef

Ahmed, W.[Waqar], Zunino, A.[Andrea], Morerio, P.[Pietro], Murino, V.[Vittorio],
Compact CNN Structure Learning by Knowledge Distillation,
ICPR21(6554-6561)
IEEE DOI 2105
Training, Learning systems, Knowledge engineering, Network architecture, Predictive models BibRef

Ma, J.X.[Jia-Xin], Yonetani, R.[Ryo], Iqbal, Z.[Zahid],
Adaptive Distillation for Decentralized Learning from Heterogeneous Clients,
ICPR21(7486-7492)
IEEE DOI 2105
Learning systems, Adaptation models, Visualization, Biomedical equipment, Medical services, Collaborative work, Data models BibRef

Xu, Y.[Yi], Pu, J.[Jian], Zhao, H.[Hui],
Knowledge Distillation with a Precise Teacher and Prediction with Abstention,
ICPR21(9000-9006)
IEEE DOI 2105
Knowledge engineering, Supervised learning, Benchmark testing, Predictive models BibRef

Tsunashima, H.[Hideki], Kataoka, H.[Hirokatsu], Yamato, J.[Junji], Chen, Q.[Qiu], Morishima, S.[Shigeo],
Adversarial Knowledge Distillation for a Compact Generator,
ICPR21(10636-10643)
IEEE DOI 2105
Training, Image resolution, MIMICs, Generators BibRef

Zhang, Z.X.[Zhe-Xi], Zhu, W.[Wei], Yan, J.C.[Jun-Chi], Gao, P.[Peng], Xie, G.T.[Guo-Tong],
Automatic Student Network Search for Knowledge Distillation,
ICPR21(2446-2453)
IEEE DOI 2105
Knowledge engineering, Performance evaluation, Computational modeling, Bit error rate, Neural networks, Natural language processing BibRef

Kim, J.H.[Jang-Ho], Hyun, M.S.[Min-Sung], Chung, I.[Inseop], Kwak, N.[Nojun],
Feature Fusion for Online Mutual Knowledge Distillation,
ICPR21(4619-4625)
IEEE DOI 2105
Neural networks, Education, Performance gain, Pattern recognition BibRef

Mitsuno, K.[Kakeru], Nomura, Y.[Yuichiro], Kurita, T.[Takio],
Channel Planting for Deep Neural Networks using Knowledge Distillation,
ICPR21(7573-7579)
IEEE DOI 2105
Training, Knowledge engineering, Heuristic algorithms, Neural networks, Computer architecture, Network architecture BibRef

Finogeev, E., Gorbatsevich, V., Moiseenko, A., Vizilter, Y., Vygolov, O.,
Knowledge Distillation Using GANs for Fast Object Detection,
ISPRS20(B2:583-588).
DOI Link 2012
BibRef

Sadhukhan, R., Saha, A., Mukhopadhyay, J., Patra, A.,
Knowledge Distillation Inspired Fine-Tuning Of Tucker Decomposed CNNS and Adversarial Robustness Analysis,
ICIP20(1876-1880)
IEEE DOI 2011
Robustness, Knowledge engineering, Convolution, Tensile stress, Neural networks, Perturbation methods, Acceleration, Adversarial Robustness BibRef

Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J.,
Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation,
ICIP20(648-652)
IEEE DOI 2011
Perturbation methods, Task analysis, Training, Computational modeling, Approximation algorithms, black-box models BibRef

Xu, K.R.[Kun-Ran], Rui, L.[Lai], Li, Y.S.[Yi-Shi], Gu, L.[Lin],
Feature Normalized Knowledge Distillation for Image Classification,
ECCV20(XXV:664-680).
Springer DOI 2011
BibRef

Yang, Y., Qiu, J., Song, M., Tao, D., Wang, X.,
Distilling Knowledge From Graph Convolutional Networks,
CVPR20(7072-7081)
IEEE DOI 2008
Knowledge engineering, Task analysis, Computational modeling, Computer science, Training, Neural networks BibRef

Yun, J.S.[Ju-Seung], Kim, B.[Byungjoo], Kim, J.[Junmo],
Weight Decay Scheduling and Knowledge Distillation for Active Learning,
ECCV20(XXVI:431-447).
Springer DOI 2011
BibRef

Li, C., Peng, J., Yuan, L., Wang, G., Liang, X., Lin, L., Chang, X.,
Block-Wisely Supervised Neural Architecture Search With Knowledge Distillation,
CVPR20(1986-1995)
IEEE DOI 2008
Computer architecture, Network architecture, Knowledge engineering, Training, DNA, Convergence, Feature extraction BibRef

Wei, L.H.[Long-Hui], Xiao, A.[An], Xie, L.X.[Ling-Xi], Zhang, X.P.[Xiao-Peng], Chen, X.[Xin], Tian, Q.[Qi],
Circumventing Outliers of Autoaugment with Knowledge Distillation,
ECCV20(III:608-625).
Springer DOI 2012
BibRef

Walawalkar, D.[Devesh], Shen, Z.Q.[Zhi-Qiang], Savvides, M.[Marios],
Online Ensemble Model Compression Using Knowledge Distillation,
ECCV20(XIX:18-35).
Springer DOI 2011
BibRef

Xiang, L.Y.[Liu-Yu], Ding, G.G.[Gui-Guang], Han, J.G.[Jun-Gong],
Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification,
ECCV20(V:247-263).
Springer DOI 2011
BibRef

Zhou, B.[Brady], Kalra, N.[Nimit], Krähenbühl, P.[Philipp],
Domain Adaptation Through Task Distillation,
ECCV20(XXVI:664-680).
Springer DOI 2011
BibRef

Li, Z.[Zheng], Huang, Y.[Ying], Chen, D.F.[De-Fang], Luo, T.[Tianren], Cai, N.[Ning], Pan, Z.G.[Zhi-Geng],
Online Knowledge Distillation via Multi-branch Diversity Enhancement,
ACCV20(IV:318-333).
Springer DOI 2103
BibRef

Ye, H.J.[Han-Jia], Lu, S.[Su], Zhan, D.C.[De-Chuan],
Distilling Cross-Task Knowledge via Relationship Matching,
CVPR20(12393-12402)
IEEE DOI 2008
Task analysis, Neural networks, Training, Knowledge engineering, Predictive models, Stochastic processes, Temperature measurement BibRef

Yao, A.B.[An-Bang], Sun, D.[Dawei],
Knowledge Transfer via Dense Cross-layer Mutual-distillation,
ECCV20(XV:294-311).
Springer DOI 2011
BibRef

Yue, K.Y.[Kai-Yu], Deng, J.F.[Jiang-Fan], Zhou, F.[Feng],
Matching Guided Distillation,
ECCV20(XV:312-328).
Springer DOI 2011
BibRef

Zhang, Y.C.[You-Cai], Lan, Z.H.[Zhong-Hao], Dai, Y.C.[Yu-Chen], Zeng, F.G.[Fan-Gao], Bai, Y.[Yan], Chang, J.[Jie], Wei, Y.C.[Yi-Chen],
Prime-aware Adaptive Distillation,
ECCV20(XIX:658-674).
Springer DOI 2011
Student-Teacher learning. BibRef

Xu, G.D.[Guo-Dong], Liu, Z.W.[Zi-Wei], Li, X.X.[Xiao-Xiao], Loy, C.C.[Chen Change],
Knowledge Distillation Meets Self-Supervision,
ECCV20(IX:588-604).
Springer DOI 2011
Extracting the dark knowledge from a teacher network to guide the learning of a student network, for transfer learning. BibRef

Li, X.J.[Xiao-Jie], Wu, J.L.[Jian-Long], Fang, H.Y.[Hong-Yu], Liao, Y.[Yue], Wang, F.[Fei], Qian, C.[Chen],
Local Correlation Consistency for Knowledge Distillation,
ECCV20(XII: 18-33).
Springer DOI 2010
Knowledge extraction from the teacher network plays a critical role in the knowledge distillation task to improve the performance of the student network. BibRef

Passalis, N.[Nikolaos], Tzelepi, M.[Maria], Tefas, A.[Anastasios],
Heterogeneous Knowledge Distillation Using Information Flow Modeling,
CVPR20(2336-2345)
IEEE DOI 2008
From a complex teacher to a smaller student. Training, Neural networks, Knowledge engineering, Data models, Convergence, Data mining, Transforms BibRef

Chen, Z.L.[Zai-Liang], Zheng, X.X.[Xian-Xian], Shen, H.L.[Hai-Lan], Zeng, Z.Y.[Zi-Yang], Zhou, Y.K.[Yu-Kun], Zhao, R.C.[Rong-Chang],
Improving Knowledge Distillation via Category Structure,
ECCV20(XXVIII:205-219).
Springer DOI 2011
Training the student to mimic the teacher alone does not capture the category structure. BibRef

Wang, D.Y.[De-Yu], Wen, D.[Dongchao], Liu, J.J.[Jun-Jie], Tao, W.[Wei], Chen, T.W.[Tse-Wei], Osa, K.[Kinya], Kato, M.[Masami],
Fully Supervised and Guided Distillation for One-stage Detectors,
ACCV20(III:171-188).
Springer DOI 2103
BibRef

Itsumi, H., Beye, F., Shinohara, Y., Iwai, T.,
Training With Cache: Specializing Object Detectors From Live Streams Without Overfitting,
ICIP20(1976-1980)
IEEE DOI 2011
Training, Data models, Solid modeling, Adaptation models, Training data, Streaming media, Legged locomotion, Online training, Knowledge distillation BibRef

Liu, B.L.[Ben-Lin], Rao, Y.M.[Yong-Ming], Lu, J.W.[Ji-Wen], Zhou, J.[Jie], Hsieh, C.J.[Cho-Jui],
Metadistiller: Network Self-boosting via Meta-learned Top-down Distillation,
ECCV20(XIV:694-709).
Springer DOI 2011
BibRef

Choi, Y., Choi, J., El-Khamy, M., Lee, J.,
Data-Free Network Quantization With Adversarial Knowledge Distillation,
EDLCV20(3047-3057)
IEEE DOI 2008
Generators, Quantization (signal), Training, Computational modeling, Data models, Machine learning, Data privacy BibRef

de Vieilleville, F., Lagrange, A., Ruiloba, R., May, S.,
Towards Distillation of Deep Neural Networks for Satellite On-board Image Segmentation,
ISPRS20(B2:1553-1559).
DOI Link 2012
BibRef

Wang, X.B.[Xiao-Bo], Fu, T.Y.[Tian-Yu], Liao, S.C.[Sheng-Cai], Wang, S.[Shuo], Lei, Z.[Zhen], Mei, T.[Tao],
Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition,
ECCV20(XXIV:325-342).
Springer DOI 2012
BibRef

Guan, Y.S.[Yu-Shuo], Zhao, P.Y.[Peng-Yu], Wang, B.X.[Bing-Xuan], Zhang, Y.X.[Yuan-Xing], Yao, C.[Cong], Bian, K.G.[Kai-Gui], Tang, J.[Jian],
Differentiable Feature Aggregation Search for Knowledge Distillation,
ECCV20(XVII:469-484).
Springer DOI 2011
BibRef

Gu, J.D.[Jin-Dong], Wu, Z.L.[Zhi-Liang], Tresp, V.[Volker],
Introspective Learning by Distilling Knowledge from Online Self-explanation,
ACCV20(IV:36-52).
Springer DOI 2103
BibRef

Guo, Q.S.[Qiu-Shan], Wang, X.J.[Xin-Jiang], Wu, Y.C.[Yi-Chao], Yu, Z.P.[Zhi-Peng], Liang, D.[Ding], Hu, X.L.[Xiao-Lin], Luo, P.[Ping],
Online Knowledge Distillation via Collaborative Learning,
CVPR20(11017-11026)
IEEE DOI 2008
Knowledge engineering, Training, Collaborative work, Perturbation methods, Collaboration, Neural networks, Logic gates BibRef

Li, T., Li, J., Liu, Z., Zhang, C.,
Few Sample Knowledge Distillation for Efficient Network Compression,
CVPR20(14627-14635)
IEEE DOI 2008
Training, Tensile stress, Knowledge engineering, Convolution, Neural networks, Computational modeling, Standards BibRef

Wang, D., Li, Y., Wang, L., Gong, B.,
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation From a Blackbox Model,
CVPR20(1495-1504)
IEEE DOI 2008
Neural networks, Computational modeling, Data models, Training, Knowledge engineering, Visualization, Manifolds BibRef

Farhadi, M.[Mohammad], Yang, Y.Z.[Ye-Zhou],
TKD: Temporal Knowledge Distillation for Active Perception,
WACV20(942-951)
IEEE DOI 2006
Code, Object Detection.
WWW Link. Distills temporal knowledge from a neural network applied over multiple frames. Adaptation models, Object detection, Visualization, Computational modeling, Task analysis, Training, Feature extraction BibRef

Seddik, M.E.A., Essafi, H., Benzine, A., Tamaazousti, M.,
Lightweight Neural Networks From PCA LDA Based Distilled Dense Neural Networks,
ICIP20(3060-3064)
IEEE DOI 2011
Neural networks, Principal component analysis, Computational modeling, Training, Machine learning, Lightweight Networks BibRef

Tung, F.[Fred], Mori, G.[Greg],
Similarity-Preserving Knowledge Distillation,
ICCV19(1365-1374)
IEEE DOI 2004
learning (artificial intelligence), neural nets, semantic networks, Task analysis BibRef

Zhang, M.Y.[Man-Yuan], Song, G.L.[Guang-Lu], Zhou, H.[Hang], Liu, Y.[Yu],
Discriminability Distillation in Group Representation Learning,
ECCV20(X:1-19).
Springer DOI 2011
BibRef

Jin, X.[Xiao], Peng, B.Y.[Bao-Yun], Wu, Y.C.[Yi-Chao], Liu, Y.[Yu], Liu, J.H.[Jia-Heng], Liang, D.[Ding], Yan, J.J.[Jun-Jie], Hu, X.L.[Xiao-Lin],
Knowledge Distillation via Route Constrained Optimization,
ICCV19(1345-1354)
IEEE DOI 2004
face recognition, image classification, learning (artificial intelligence), neural nets, optimisation, Neural networks BibRef

Mullapudi, R.T., Chen, S., Zhang, K., Ramanan, D., Fatahalian, K.,
Online Model Distillation for Efficient Video Inference,
ICCV19(3572-3581)
IEEE DOI 2004
convolutional neural nets, image segmentation, inference mechanisms, learning (artificial intelligence), Cameras BibRef

Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., Ma, K.,
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation,
ICCV19(3712-3721)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), knowledge distillation, student neural networks, Computational modeling BibRef

Cho, J.H., Hariharan, B.,
On the Efficacy of Knowledge Distillation,
ICCV19(4793-4801)
IEEE DOI 2004
learning (artificial intelligence), neural nets, Probability distribution, teacher architectures, knowledge distillation performance. BibRef

Peng, B., Jin, X., Li, D., Zhou, S., Wu, Y., Liu, J., Zhang, Z., Liu, Y.,
Correlation Congruence for Knowledge Distillation,
ICCV19(5006-5015)
IEEE DOI 2004
correlation methods, face recognition, image classification, learning (artificial intelligence), instance-level information, Knowledge transfer BibRef

Vongkulbhisal, J.[Jayakorn], Vinayavekhin, P.[Phongtharin], Visentini-Scarzanella, M.[Marco],
Unifying Heterogeneous Classifiers With Distillation,
CVPR19(3170-3179).
IEEE DOI 2002
BibRef

Yan, M., Zhao, M., Xu, Z., Zhang, Q., Wang, G., Su, Z.,
VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition,
LFR19(2647-2654)
IEEE DOI 2004
Code, Face Recognition.
WWW Link. convolutional neural nets, face recognition, learning (artificial intelligence), student model, teacher model, knowledge distillation BibRef

Yoshioka, K., Lee, E., Wong, S., Horowitz, M.,
Dataset Culling: Towards Efficient Training of Distillation-Based Domain Specific Models,
ICIP19(3237-3241)
IEEE DOI 1910
Object Detection, Training Efficiency, Distillation, Dataset Culling, Deep Learning BibRef

Yang, C.L.[Cheng-Lin], Xie, L.X.[Ling-Xi], Su, C.[Chi], Yuille, A.L.[Alan L.],
Snapshot Distillation: Teacher-Student Optimization in One Generation,
CVPR19(2854-2863).
IEEE DOI 2002
BibRef

Kundu, J.N., Lakkakula, N., Radhakrishnan, V.B.,
UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation,
ICCV19(1436-1445)
IEEE DOI 2004
generalisation (artificial intelligence), image classification, object detection, unsupervised learning, task-transferability, Adaptation models BibRef

Park, W.[Wonpyo], Kim, D.J.[Dong-Ju], Lu, Y.[Yan], Cho, M.[Minsu],
Relational Knowledge Distillation,
CVPR19(3962-3971).
IEEE DOI 2002
BibRef

Liu, Y.[Yufan], Cao, J.J.[Jia-Jiong], Li, B.[Bing], Yuan, C.F.[Chun-Feng], Hu, W.M.[Wei-Ming], Li, Y.X.[Yang-Xi], Duan, Y.Q.[Yun-Qiang],
Knowledge Distillation via Instance Relationship Graph,
CVPR19(7089-7097).
IEEE DOI 2002
BibRef

Ahn, S.S.[Sung-Soo], Hu, S.X.[Shell Xu], Damianou, A.[Andreas], Lawrence, N.D.[Neil D.], Dai, Z.W.[Zhen-Wen],
Variational Information Distillation for Knowledge Transfer,
CVPR19(9155-9163).
IEEE DOI 2002
BibRef

Minami, S.[Soma], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Gradual Sampling Gate for Bidirectional Knowledge Distillation,
MVA19(1-6)
DOI Link 1911
Transfers knowledge from a large pre-trained network to a smaller one. Data compression, learning (artificial intelligence), neural nets, gradual sampling gate, Power markets BibRef

Chen, W.C.[Wei-Chun], Chang, C.C.[Chia-Che], Lee, C.R.[Che-Rung],
Knowledge Distillation with Feature Maps for Image Classification,
ACCV18(III:200-215).
Springer DOI 1906
BibRef

Hou, S.H.[Sai-Hui], Pan, X.Y.[Xin-Yu], Loy, C.C.[Chen Change], Wang, Z.L.[Zi-Lei], Lin, D.H.[Da-Hua],
Lifelong Learning via Progressive Distillation and Retrospection,
ECCV18(III: 452-467).
Springer DOI 1810
BibRef

Pintea, S.L.[Silvia L.], Liu, Y.[Yue], van Gemert, J.C.[Jan C.],
Recurrent Knowledge Distillation,
ICIP18(3393-3397)
IEEE DOI 1809
A small network learns from a larger network. Computational modeling, Memory management, Training, Color, Convolution, Road transportation, Knowledge distillation, recurrent layers BibRef

Lee, S.H.[Seung Hyun], Kim, D.H.[Dae Ha], Song, B.C.[Byung Cheol],
Self-supervised Knowledge Distillation Using Singular Value Decomposition,
ECCV18(VI: 339-354).
Springer DOI 1810
BibRef

Yim, J., Joo, D., Bae, J., Kim, J.,
A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning,
CVPR17(7130-7138)
IEEE DOI 1711
Feature extraction, Knowledge engineering, Knowledge transfer, Optimization, Training BibRef

Gupta, S.[Saurabh], Hoffman, J.[Judy], Malik, J.[Jitendra],
Cross Modal Distillation for Supervision Transfer,
CVPR16(2827-2836)
IEEE DOI 1612
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Explainable Artificial Intelligence.


Last update: Oct 24, 2021 at 16:35:58