13.6.3.1 Knowledge Distillation

Knowledge Distillation. Knowledge. Distillation. Knowledge-Based Vision.
See also Student-Teacher, Teacher-Student, Knowledge Distillation.
See also Transfer Learning from Other Tasks, Other Classes.
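
For orientation before the individual entries, a minimal PyTorch-style sketch of the classic soft-target distillation loss (temperature-scaled KL divergence between teacher and student logits combined with the usual hard-label cross-entropy) is given below; the helper name, default temperature, and weighting are illustrative assumptions rather than the recipe of any particular paper listed in this section.

    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Hypothetical helper; argument names and defaults are illustrative only.
        soft_targets = F.softmax(teacher_logits / T, dim=1)      # teacher soft targets
        log_student = F.log_softmax(student_logits / T, dim=1)   # student log-probabilities
        # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
        distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
        hard = F.cross_entropy(student_logits, labels)           # standard supervised term
        return alpha * distill + (1.0 - alpha) * hard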

Chen, G.Z.[Guan-Zhou], Zhang, X.D.[Xiao-Dong], Tan, X.L.[Xiao-Liang], Cheng, Y.F.[Yu-Feng], Dai, F.[Fan], Zhu, K.[Kun], Gong, Y.F.[Yuan-Fu], Wang, Q.[Qing],
Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation,
RS(10), No. 5, 2018, pp. xx-yy.
DOI Link 1806
BibRef

Mazumder, P.[Pratik], Singh, P.[Pravendra], Namboodiri, V.P.[Vinay P.],
GIFSL: Grafting based improved few-shot learning,
IVC(104), 2020, pp. 104006.
Elsevier DOI 2012
Few-shot learning, Grafting, Self-supervision, Distillation, Deep learning, Object recognition BibRef

Li, X.W.[Xue-Wei], Li, S.Y.[Song-Yuan], Omar, B.[Bourahla], Wu, F.[Fei], Li, X.[Xi],
ResKD: Residual-Guided Knowledge Distillation,
IP(30), 2021, pp. 4735-4746.
IEEE DOI 2105
BibRef

Nguyen-Meidine, L.T.[Le Thanh], Belal, A.[Atif], Kiran, M.[Madhu], Dolz, J.[Jose], Blais-Morin, L.A.[Louis-Antoine], Granger, E.[Eric],
Knowledge distillation methods for efficient unsupervised adaptation across multiple domains,
IVC(108), 2021, pp. 104096.
Elsevier DOI 2104
BibRef
And:
Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation,
WACV21(1338-1346)
IEEE DOI 2106
Deep learning, Convolutional NNs, Knowledge distillation, Unsupervised domain adaptation, CNN acceleration and compression. Adaptation models, Computational modeling, Benchmark testing, Real-time systems BibRef

Remigereau, F.[Félix], Mekhazni, D.[Djebril], Abdoli, S.[Sajjad], Nguyen-Meidine, L.T.[Le Thanh], Cruz, R.M.O.[Rafael M. O.], Granger, E.[Eric],
Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time Person Re-Identification,
ICIP22(3853-3857)
IEEE DOI 2211
Training, Adaptation models, Scalability, Streaming media, Video surveillance, IEEE Standards, Video Surveillance, Knowledge Distillation BibRef

Zhang, H.R.[Hao-Ran], Hu, Z.Z.[Zhen-Zhen], Qin, W.[Wei], Xu, M.L.[Ming-Liang], Wang, M.[Meng],
Adversarial co-distillation learning for image recognition,
PR(111), 2021, pp. 107659.
Elsevier DOI 2012
Knowledge distillation, Data augmentation, Generative adversarial nets, Divergent examples, Image classification BibRef

Gou, J.P.[Jian-Ping], Yu, B.S.[Bao-Sheng], Maybank, S.J.[Stephen J.], Tao, D.C.[Da-Cheng],
Knowledge Distillation: A Survey,
IJCV(129), No. 6, June 2021, pp. 1789-1819.
Springer DOI 2106
Survey, Knowledge Distillation. BibRef

Liu, Y.F.[Yu-Fan], Cao, J.J.[Jia-Jiong], Li, B.[Bing], Hu, W.M.[Wei-Ming], Ding, J.T.[Jing-Ting], Li, L.[Liang], Maybank, S.J.[Stephen J.],
Cross-Architecture Knowledge Distillation,
IJCV(132), No. 8, August 2024, pp. 2798-2824.
Springer DOI 2408
BibRef

Tian, X.D.[Xu-Dong], Zhang, Z.Z.[Zhi-Zhong], Wang, C.[Cong], Zhang, W.S.[Wen-Sheng], Qu, Y.Y.[Yan-Yun], Ma, L.Z.[Li-Zhuang], Wu, Z.Z.[Zong-Ze], Xie, Y.[Yuan], Tao, D.C.[Da-Cheng],
Variational Distillation for Multi-View Learning,
PAMI(46), No. 7, July 2024, pp. 4551-4566.
IEEE DOI 2406
Mutual information, Task analysis, Representation learning, Predictive models, Optimization, Visualization, Pattern analysis, knowledge distillation BibRef

Deng, Y.J.[Yong-Jian], Chen, H.[Hao], Chen, H.Y.[Hui-Ying], Li, Y.F.[You-Fu],
Learning From Images: A Distillation Learning Framework for Event Cameras,
IP(30), 2021, pp. 4919-4931.
IEEE DOI 2106
Task analysis, Feature extraction, Cameras, Data models, Streaming media, Trajectory, Power demand, Event-based vision, optical flow prediction BibRef

Liu, Y.[Yang], Wang, K.[Keze], Li, G.B.[Guan-Bin], Lin, L.[Liang],
Semantics-Aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition,
IP(30), 2021, pp. 5573-5588.
IEEE DOI 2106
Videos, Knowledge engineering, Wearable sensors, Adaptation models, Sensors, Semantics, Image synthesis, Action recognition, transfer learning BibRef

Feng, Z.X.[Zhan-Xiang], Lai, J.H.[Jian-Huang], Xie, X.H.[Xiao-Hua],
Resolution-Aware Knowledge Distillation for Efficient Inference,
IP(30), 2021, pp. 6985-6996.
IEEE DOI 2108
Knowledge engineering, Feature extraction, Image resolution, Computational modeling, Computational complexity, Image coding, adversarial learning BibRef

Liu, Y.Y.[Yu-Yang], Cong, Y.[Yang], Sun, G.[Gan], Zhang, T.[Tao], Dong, J.H.[Jia-Hua], Liu, H.S.[Hong-Sen],
L3DOC: Lifelong 3D Object Classification,
IP(30), 2021, pp. 7486-7498.
IEEE DOI 2109
Task analysis, Solid modeling, Data models, Knowledge engineering, Shape, Robots, task-relevant knowledge distillation BibRef

Bhardwaj, A.[Ayush], Pimpale, S.[Sakshee], Kumar, S.[Saurabh], Banerjee, B.[Biplab],
Empowering Knowledge Distillation via Open Set Recognition for Robust 3D Point Cloud Classification,
PRL(151), 2021, pp. 172-179.
Elsevier DOI 2110
Knowledge Distillation, Open Set Recognition, 3D Object Recognition, Point Cloud Classification BibRef

Shao, B.[Baitan], Chen, Y.[Ying],
Multi-granularity for knowledge distillation,
IVC(115), 2021, pp. 104286.
Elsevier DOI 2110
Knowledge distillation, Model compression, Multi-granularity distillation mechanism, Stable excitation scheme BibRef

Zhang, L.[Libo], Du, D.W.[Da-Wei], Li, C.C.[Cong-Cong], Wu, Y.J.[Yan-Jun], Luo, T.J.[Tie-Jian],
Iterative Knowledge Distillation for Automatic Check-Out,
MultMed(23), 2021, pp. 4158-4170.
IEEE DOI 2112
Testing, Training, Adaptation models, Reliability, Feature extraction, Training data, Task analysis, iterative knowledge distillation BibRef

Qin, D.[Dian], Bu, J.J.[Jia-Jun], Liu, Z.[Zhe], Shen, X.[Xin], Zhou, S.[Sheng], Gu, J.J.[Jing-Jun], Wang, Z.H.[Zhi-Hua], Wu, L.[Lei], Dai, H.F.[Hui-Fen],
Efficient Medical Image Segmentation Based on Knowledge Distillation,
MedImg(40), No. 12, December 2021, pp. 3820-3831.
IEEE DOI 2112
Image segmentation, Biomedical imaging, Semantics, Knowledge engineering, Feature extraction, Tumors, transfer learning BibRef

Tian, L.[Ling], Wang, Z.C.[Zhi-Chao], He, B.[Bokun], He, C.[Chu], Wang, D.W.[Ding-Wen], Li, D.[Deshi],
Knowledge Distillation of Grassmann Manifold Network for Remote Sensing Scene Classification,
RS(13), No. 22, 2021, pp. xx-yy.
DOI Link 2112
BibRef

Yue, J.[Jun], Fang, L.Y.[Le-Yuan], Rahmani, H.[Hossein], Ghamisi, P.[Pedram],
Self-Supervised Learning With Adaptive Distillation for Hyperspectral Image Classification,
GeoRS(60), 2022, pp. 1-13.
IEEE DOI 2112
Feature extraction, Training, Adaptive systems, Mirrors, Knowledge engineering, Hyperspectral imaging, Spectral analysis, spatial-spectral feature extraction BibRef

Chen, J.Z.[Jing-Zhou], Wang, S.H.[Shi-Hao], Chen, L.[Ling], Cai, H.B.[Hai-Bin], Qian, Y.T.[Yun-Tao],
Incremental Detection of Remote Sensing Objects With Feature Pyramid and Knowledge Distillation,
GeoRS(60), 2022, pp. 1-13.
IEEE DOI 2112
Feature extraction, Remote sensing, Training, Object detection, Adaptation models, Proposals, Detectors, Deep learning, remote sensing BibRef

Chen, H.Y.[Hong-Yuan], Pei, Y.T.[Yan-Ting], Zhao, H.W.[Hong-Wei], Huang, Y.P.[Ya-Ping],
Super-resolution guided knowledge distillation for low-resolution image classification,
PRL(155), 2022, pp. 62-68.
Elsevier DOI 2203
Low-resolution image classification, Super-resolution, Knowledge distillation BibRef

Wang, S.L.[Shu-Ling], Hu, M.[Mu], Li, B.[Bin], Gong, X.J.[Xiao-Jin],
Self-Paced Knowledge Distillation for Real-Time Image Guided Depth Completion,
SPLetters(29), 2022, pp. 867-871.
IEEE DOI 2204
Knowledge engineering, Predictive models, Training, Task analysis, Real-time systems, Color, Loss measurement, self-paced learning BibRef

Song, J.[Jie], Chen, Y.[Ying], Ye, J.W.[Jing-Wen], Song, M.L.[Ming-Li],
Spot-Adaptive Knowledge Distillation,
IP(31), 2022, pp. 3359-3370.
IEEE DOI 2205
Knowledge engineering, Training, Routing, Data models, Adaptation models, Deep learning, Training data, spot-adaptive distillation BibRef

Zhao, P.S.[Pei-Sen], Xie, L.X.[Ling-Xi], Wang, J.J.[Jia-Jie], Zhang, Y.[Ya], Tian, Q.[Qi],
Progressive privileged knowledge distillation for online action detection,
PR(129), 2022, pp. 108741.
Elsevier DOI 2206
Online action detection, Knowledge distillation, Privileged information, Curriculum learning BibRef

Zhao, H.R.[Hao-Ran], Sun, X.[Xin], Gao, F.[Feng], Dong, J.Y.[Jun-Yu],
Pair-Wise Similarity Knowledge Distillation for RSI Scene Classification,
RS(14), No. 10, 2022, pp. xx-yy.
DOI Link 2206
BibRef

Li, K.[Kunchi], Wan, J.[Jun], Yu, S.[Shan],
CKDF: Cascaded Knowledge Distillation Framework for Robust Incremental Learning,
IP(31), 2022, pp. 3825-3837.
IEEE DOI 2206
Task analysis, Computational modeling, Adaptation models, Data models, Training, Knowledge engineering, Feature extraction, incremental learning BibRef

Yang, D.B.[Dong-Bao], Zhou, Y.[Yu], Zhang, A.[Aoting], Sun, X.R.[Xu-Rui], Wu, D.[Dayan], Wang, W.P.[Wei-Ping], Ye, Q.X.[Qi-Xiang],
Multi-View correlation distillation for incremental object detection,
PR(131), 2022, pp. 108863.
Elsevier DOI 2208
Object detection, Incremental learning, Catastrophic forgetting, Knowledge distillation BibRef

Zhou, H.[Haonan], Du, X.P.[Xiao-Ping], Li, S.[Sen],
Self-Supervision and Self-Distillation with Multilayer Feature Contrast for Supervision Collapse in Few-Shot Remote Sensing Scene Classification,
RS(14), No. 13, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Chi, Q.[Qiang], Lv, G.H.[Guo-Hua], Zhao, G.X.[Gui-Xin], Dong, X.J.[Xiang-Jun],
A Novel Knowledge Distillation Method for Self-Supervised Hyperspectral Image Classification,
RS(14), No. 18, 2022, pp. xx-yy.
DOI Link 2209
BibRef

Wang, G.H.[Guo-Hua], Ge, Y.F.[Yi-Fan], Wu, J.X.[Jian-Xin],
Distilling Knowledge by Mimicking Features,
PAMI(44), No. 11, November 2022, pp. 8183-8195.
IEEE DOI 2210
Hash functions, Training, Standards, Residual neural networks, Radio frequency, Numerical models, Convolutional neural networks, object detection BibRef

He, Y.Y.[Yin-Yin], Wu, J.X.[Jian-Xin], Wei, X.S.[Xiu-Shen],
Distilling Virtual Examples for Long-tailed Recognition,
ICCV21(235-244)
IEEE DOI 2203
Visualization, Predictive models, Benchmark testing, Recognition and classification, BibRef

Hao, Z.W.[Zhi-Wei], Luo, Y.[Yong], Wang, Z.[Zhi], Hu, H.[Han], An, J.P.[Jian-Ping],
CDFKD-MFS: Collaborative Data-Free Knowledge Distillation via Multi-Level Feature Sharing,
MultMed(24), 2022, pp. 4262-4274.
IEEE DOI 2210
Generators, Knowledge engineering, Computational modeling, Aggregates, Predictive models, Collaboration, Model Compression, Attention BibRef

Zhao, Y.[Yibo], Liu, J.J.[Jian-Jun], Yang, J.L.[Jin-Long], Wu, Z.B.[Ze-Bin],
Remote Sensing Image Scene Classification via Self-Supervised Learning and Knowledge Distillation,
RS(14), No. 19, 2022, pp. xx-yy.
DOI Link 2210
BibRef

Tu, Z.G.[Zhi-Gang], Liu, X.J.[Xiang-Jian], Xiao, X.[Xuan],
A General Dynamic Knowledge Distillation Method for Visual Analytics,
IP(31), 2022, pp. 6517-6531.
IEEE DOI 2211
Knowledge engineering, Visualization, Task analysis, Optimization, Knowledge transfer, Image coding, Loss measurement. BibRef

Hou, J.W.[Jing-Wen], Ding, H.H.[Heng-Hui], Lin, W.S.[Wei-Si], Liu, W.[Weide], Fang, Y.M.[Yu-Ming],
Distilling Knowledge From Object Classification to Aesthetics Assessment,
CirSysVideo(32), No. 11, November 2022, pp. 7386-7402.
IEEE DOI 2211
Semantics, Computational modeling, Predictive models, Feature extraction, Training, Pattern matching, Visualization, image aesthetics assessment BibRef

Xing, S.Y.[Shi-Yi], Xing, J.S.[Jin-Sheng], Ju, J.G.[Jian-Guo], Hou, Q.S.[Qing-Shan], Ding, X.[Xiurui],
Collaborative Consistent Knowledge Distillation Framework for Remote Sensing Image Scene Classification Network,
RS(14), No. 20, 2022, pp. xx-yy.
DOI Link 2211
BibRef

Zhang, L.[Li], Wu, X.Q.[Xiang-Qian],
Latent Space Semantic Supervision Based on Knowledge Distillation for Cross-Modal Retrieval,
IP(31), 2022, pp. 7154-7164.
IEEE DOI 2212
Feature extraction, Semantics, Object detection, Correlation, Context modeling, Recurrent neural networks, Multitasking, knowledge distillation BibRef

Xu, H.T.[Hong-Teng], Liu, J.C.[Jia-Chang], Luo, D.[Dixin], Carin, L.[Lawrence],
Representing Graphs via Gromov-Wasserstein Factorization,
PAMI(45), No. 1, January 2023, pp. 999-1016.
IEEE DOI 2212
Kernel, Computational modeling, Task analysis, Message passing, Graph neural networks, Data models, Graph representation, permutation-invariance BibRef

Chen, L.Q.[Li-Qun], Wang, D.[Dong], Gan, Z.[Zhe], Liu, J.J.[Jing-Jing], Henao, R.[Ricardo], Carin, L.[Lawrence],
Wasserstein Contrastive Representation Distillation,
CVPR21(16291-16300)
IEEE DOI 2111
Knowledge engineering, Measurement, Computational modeling, Collaborative work, Robustness BibRef

Ye, H.J.[Han-Jia], Lu, S.[Su], Zhan, D.C.[De-Chuan],
Generalized Knowledge Distillation via Relationship Matching,
PAMI(45), No. 2, February 2023, pp. 1817-1834.
IEEE DOI 2301
Task analysis, Training, Neural networks, Knowledge engineering, Deep learning, Standards, Training data, Cross-Task, representation learning BibRef

Tang, R.N.[Rui-Ning], Liu, Z.Y.[Zhen-Yu], Li, Y.G.[Yang-Guang], Song, Y.[Yiguo], Liu, H.[Hui], Wang, Q.[Qide], Shao, J.[Jing], Duan, G.F.[Gui-Fang], Tan, J.R.[Jiang-Rong],
Task-balanced distillation for object detection,
PR(137), 2023, pp. 109320.
Elsevier DOI 2302
Object detection, Knowledge distillation BibRef

Xu, G.D.[Guo-Dong], Liu, Z.W.[Zi-Wei], Loy, C.C.[Chen Change],
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup,
PR(138), 2023, pp. 109338.
Elsevier DOI 2303
Knowledge distillation, Training cost BibRef

Zhang, Q.S.[Quan-Shi], Cheng, X.[Xu], Chen, Y.[Yilan], Rao, Z.[Zhefan],
Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification,
PAMI(45), No. 4, April 2023, pp. 5099-5113.
IEEE DOI 2303
Knowledge engineering, Task analysis, Measurement, Optimization, Feature extraction, Birds, Visualization, Knowledge distillation, knowledge points BibRef

Yu, X.[Xinyi], Yan, L.[Ling], Yang, Y.[Yang], Zhou, L.[Libo], Ou, L.L.[Lin-Lin],
Conditional generative data-free knowledge distillation,
IVC(131), 2023, pp. 104627.
Elsevier DOI 2303
Data-free knowledge distillation, Generative adversarial networks, Model compression, Convolutional neural networks BibRef

Su, T.T.[Tong-Tong], Liang, Q.Y.[Qi-Yu], Zhang, J.S.[Jin-Song], Yu, Z.Y.[Zhao-Yang], Xu, Z.Y.[Zi-Yue], Wang, G.[Gang], Liu, X.G.[Xiao-Guang],
Deep Cross-Layer Collaborative Learning Network for Online Knowledge Distillation,
CirSysVideo(33), No. 5, May 2023, pp. 2075-2087.
IEEE DOI 2305
Knowledge engineering, Cross layer design, Training, Feature extraction, Visualization, Collaboration, model compression BibRef

Liu, Y.F.[Yi-Fan], Shu, C.Y.[Chang-Yong], Wang, J.D.[Jing-Dong], Shen, C.H.[Chun-Hua],
Structured Knowledge Distillation for Dense Prediction,
PAMI(45), No. 6, June 2023, pp. 7035-7049.
IEEE DOI 2305
Task analysis, Semantics, Training, Object detection, Image segmentation, Estimation, Knowledge engineering, dense prediction BibRef

Zhou, W.[Wujie], Sun, F.[Fan], Jiang, Q.P.[Qiu-Ping], Cong, R.[Runmin], Hwang, J.N.[Jenq-Neng],
WaveNet: Wavelet Network With Knowledge Distillation for RGB-T Salient Object Detection,
IP(32), 2023, pp. 3027-3039.
IEEE DOI 2306
Transformers, Feature extraction, Discrete wavelet transforms, Training, Knowledge engineering, Cross layer design, edge-aware module BibRef

Tang, Y.[Yuan], Chen, Y.[Ying], Xie, L.[Linbo],
Self-knowledge distillation based on knowledge transfer from soft to hard examples,
IVC(135), 2023, pp. 104700.
Elsevier DOI 2306
Model compression, Self-knowledge distillation, Hard examples, Class probability consistency, Memory bank BibRef

Lee, H.[Hyoje], Park, Y.[Yeachan], Seo, H.[Hyun], Kang, M.[Myungjoo],
Self-knowledge distillation via dropout,
CVIU(233), 2023, pp. 103720.
Elsevier DOI 2307
Deep learning, Knowledge distillation, Self-knowledge distillation, Regularization, Dropout BibRef

Li, Z.H.[Zhi-Hui], Xu, P.F.[Peng-Fei], Chang, X.J.[Xiao-Jun], Yang, L.[Luyao], Zhang, Y.Y.[Yuan-Yuan], Yao, L.[Lina], Chen, X.J.[Xiao-Jiang],
When Object Detection Meets Knowledge Distillation: A Survey,
PAMI(45), No. 8, August 2023, pp. 10555-10579.
IEEE DOI 2307
Survey, Knowledge Distillation. Task analysis, Computational modeling, Analytical models, Image coding, Solid modeling, Object detection, weakly supervised object detection BibRef

Yang, C.G.[Chuan-Guang], An, Z.L.[Zhu-Lin], Zhou, H.[Helong], Zhuang, F.Z.[Fu-Zhen], Xu, Y.J.[Yong-Jun], Zhang, Q.[Qian],
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition,
PAMI(45), No. 8, August 2023, pp. 10212-10227.
IEEE DOI 2307
Visualization, Knowledge engineering, Training, Task analysis, Federated learning, Dogs, Computational modeling, visual recognition BibRef

Fang, H.X.[Hang-Xiang], Long, Y.[Yongwen], Hu, X.[Xinyi], Ou, Y.T.[Yang-Tao], Huang, Y.[Yuanjia], Hu, H.J.[Hao-Ji],
Dual cross knowledge distillation for image super-resolution,
JVCIR(95), 2023, pp. 103858.
Elsevier DOI 2309
Super resolution, Knowledge distillation, Convolutional neural networks BibRef

Yang, Z.[Zhen], Cao, Y.[Ying], Zhou, X.[Xin], Liu, J.[Junya], Zhang, T.[Tao], Ji, J.S.[Jin-Sheng],
Random Shuffling Data for Hyperspectral Image Classification with Siamese and Knowledge Distillation Network,
RS(15), No. 16, 2023, pp. 4078.
DOI Link 2309
BibRef

Cavazza, J.[Jacopo], Murino, V.[Vittorio], Bue, A.D.[Alessio Del],
No Adversaries to Zero-Shot Learning: Distilling an Ensemble of Gaussian Feature Generators,
PAMI(45), No. 10, October 2023, pp. 12167-12178.
IEEE DOI 2310
BibRef

Shao, R.R.[Ren-Rong], Zhang, W.[Wei], Wang, J.[Jun],
Conditional pseudo-supervised contrast for data-Free knowledge distillation,
PR(143), 2023, pp. 109781.
Elsevier DOI 2310
Model compression, Knowledge distillation, Representation learning, Contrastive learning, Privacy protection BibRef

Shao, R.R.[Ren-Rong], Zhang, W.[Wei], Yin, J.H.[Jian-Hua], Wang, J.[Jun],
Data-free Knowledge Distillation for Fine-grained Visual Categorization,
ICCV23(1515-1525)
IEEE DOI Code:
WWW Link. 2401
BibRef

López-Cifuentes, A.[Alejandro], Escudero-Viñolo, M.[Marcos], Bescós, J.[Jesús], Miguel, J.C.S.[Juan C. San],
Attention-Based Knowledge Distillation in Scene Recognition: The Impact of a DCT-Driven Loss,
CirSysVideo(33), No. 9, September 2023, pp. 4769-4783.
IEEE DOI Code:
WWW Link. 2310
BibRef

Ma, W.T.[Wen-Tao], Chen, Q.C.[Qing-Chao], Zhou, T.Q.[Tong-Qing], Zhao, S.[Shan], Cai, Z.P.[Zhi-Ping],
Using Multimodal Contrastive Knowledge Distillation for Video-Text Retrieval,
CirSysVideo(33), No. 10, October 2023, pp. 5486-5497.
IEEE DOI 2310
BibRef

Zhang, L.F.[Lin-Feng], Ma, K.[Kaisheng],
Structured Knowledge Distillation for Accurate and Efficient Object Detection,
PAMI(45), No. 12, December 2023, pp. 15706-15724.
IEEE DOI 2311
BibRef

Zhang, L.F.[Lin-Feng], Dong, R.[Runpei], Tai, H.S.[Hung-Shuo], Ma, K.[Kaisheng],
PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection,
CVPR23(21791-21801)
IEEE DOI 2309
BibRef

Yue, H.[Han], Li, J.D.[Jun-Dong], Liu, H.F.[Hong-Fu],
Second-Order Unsupervised Feature Selection via Knowledge Contrastive Distillation,
PAMI(45), No. 12, December 2023, pp. 15577-15587.
IEEE DOI 2311
BibRef

Yu, X.T.[Xiao-Tong], Sun, S.[Shiding], Tian, Y.J.[Ying-Jie],
Self-distillation and self-supervision for partial label learning,
PR(146), 2024, pp. 110016.
Elsevier DOI 2311
Knowledge distillation, Self-supervised learning, Partial label learning, Machine learning BibRef

Yang, S.Z.[Shun-Zhi], Xu, L.C.[Liu-Chi], Zhou, M.C.[Meng-Chu], Yang, X.[Xiong], Yang, J.F.[Jin-Feng], Huang, Z.H.[Zhen-Hua],
Skill-Transferring Knowledge Distillation Method,
CirSysVideo(33), No. 11, November 2023, pp. 6487-6502.
IEEE DOI 2311
BibRef

Zhao, Q.[Qi], Lyu, S.C.[Shu-Chang], Chen, L.[Lijiang], Liu, B.[Binghao], Xu, T.B.[Ting-Bing], Cheng, G.L.[Guang-Liang], Feng, W.[Wenquan],
Learn by Oneself: Exploiting Weight-Sharing Potential in Knowledge Distillation Guided Ensemble Network,
CirSysVideo(33), No. 11, November 2023, pp. 6661-6678.
IEEE DOI Code:
WWW Link. 2311
BibRef

Li, X.F.[Xiu-Fang], Sun, Q.G.[Qi-Gong], Jiao, L.C.[Li-Cheng], Liu, F.[Fang], Liu, X.[Xu], Li, L.L.[Ling-Ling], Chen, P.[Puhua], Zuo, Y.[Yi],
D^3K: Dynastic Data-Free Knowledge Distillation,
MultMed(25), 2023, pp. 8358-8371.
IEEE DOI 2312
BibRef

Wang, J.H.[Jun-Huang], Zhang, W.W.[Wei-Wei], Guo, Y.F.[Yu-Feng], Liang, P.[Peng], Ji, M.[Ming], Zhen, C.H.[Cheng-Hui], Wang, H.[Hanmeng],
Global key knowledge distillation framework,
CVIU(239), 2024, pp. 103902.
Elsevier DOI 2402
Deep learning, Knowledge distillation, Self-distillation, Convolutional neural network BibRef

Yang, A.[Aijia], Lin, S.[Sihao], Yeh, C.H.[Chung-Hsing], Shu, M.[Minglei], Yang, Y.[Yi], Chang, X.J.[Xiao-Jun],
Context Matters: Distilling Knowledge Graph for Enhanced Object Detection,
MultMed(26), 2024, pp. 487-500.
IEEE DOI 2402
Detectors, Knowledge graphs, Semantics, Object detection, Transformers, Visualization, Image edge detection, knowledge graph BibRef

Yu, H.[Hao], Feng, X.[Xin], Wang, Y.L.[Yun-Long],
Enhancing deep feature representation in self-knowledge distillation via pyramid feature refinement,
PRL(178), 2024, pp. 35-42.
Elsevier DOI Code:
WWW Link. 2402
Self-knowledge distillation, Feature representation, Pyramid structure, Deep neural networks BibRef

Bao, Z.Q.[Zhi-Qiang], Chen, Z.H.[Zi-Hao], Wang, C.D.[Chang-Dong], Zheng, W.S.[Wei-Shi], Huang, Z.H.[Zhen-Hua], Chen, Y.[Yunwen],
Post-Distillation via Neural Resuscitation,
MultMed(26), 2024, pp. 3046-3060.
IEEE DOI 2402
Neurons, Computational modeling, Standards, Optimization, Knowledge engineering, Task analysis, Probabilistic logic, transfer learning BibRef

Ma, X.[Xin], Liu, C.[Chang], Xie, C.Y.[Chun-Yu], Ye, L.[Long], Deng, Y.F.[Ya-Feng], Ji, X.Y.[Xiang-Yang],
Disjoint Masking With Joint Distillation for Efficient Masked Image Modeling,
MultMed(26), 2024, pp. 3077-3087.
IEEE DOI 2402
Training, Image reconstruction, Predictive models, Task analysis, Visualization, Convergence, Computational modeling, and training efficiency BibRef

Xu, Z.[Zhi], Fu, Z.Y.[Zhen-Yong],
Using Mixture of Experts to accelerate dataset distillation,
JVCIR(100), 2024, pp. 104137.
Elsevier DOI 2405
Dataset distillation, Mixture of experts, Accelerate BibRef

Mei, Z.[Zhen], Ye, P.[Peng], Li, B.[Baopu], Chen, T.[Tao], Fan, J.Y.[Jia-Yuan], Ouyang, W.L.[Wan-Li],
DeNKD: Decoupled Non-Target Knowledge Distillation for Complementing Transformer-Based Unsupervised Domain Adaptation,
CirSysVideo(34), No. 5, May 2024, pp. 3220-3231.
IEEE DOI 2405
Transformers, Task analysis, Semantics, Adaptation models, Knowledge transfer, Visualization, Training, Transformer, knowledge distillation BibRef

Liang, G.Q.[Guo-Qiang], Chen, Z.J.[Zhao-Jie], Chen, Z.Q.[Zhao-Qiang], Ji, S.Y.[Shi-Yu], Zhang, Y.N.[Yan-Ning],
New Insights on Relieving Task-Recency Bias for Online Class Incremental Learning,
CirSysVideo(34), No. 5, May 2024, pp. 3451-3464.
IEEE DOI Code:
WWW Link. 2405
Task analysis, Data models, Training, Streaming media, Stability criteria, Predictive models, Circuit stability, virtual knowledge distillation BibRef

Xu, L.[Liuchi], Ren, J.[Jin], Huang, Z.H.[Zhen-Hua], Zheng, W.S.[Wei-Shi], Chen, Y.[Yunwen],
Improving Knowledge Distillation via Head and Tail Categories,
CirSysVideo(34), No. 5, May 2024, pp. 3465-3480.
IEEE DOI 2405
Tail, Head, Knowledge transfer, Task analysis, Knowledge engineering, Image classification, Training, Knowledge distillation, instance segmentation BibRef

Jang, J.Y.[Jae-Yeon],
Synthetic unknown class learning for learning unknowns,
PR(153), 2024, pp. 110560.
Elsevier DOI 2405
Open set recognition, Overgeneralization, Knowledge distillation, Generative adversarial learning, Unknown BibRef

Zhu, S.L.[Song-Ling], Shang, R.H.[Rong-Hua], Yuan, B.[Bo], Zhang, W.[Weitong], Li, W.J.[Wen-Jie], Li, Y.Y.[Yang-Yang], Jiao, L.C.[Li-Cheng],
DynamicKD: An effective knowledge distillation via dynamic entropy correction-based distillation for gap optimizing,
PR(153), 2024, pp. 110545.
Elsevier DOI 2405
Convolutional neural networks, Knowledge distillation, CNN compression, CNN acceleration BibRef

Guo, Z.[Zhen], Zhang, P.Z.[Peng-Zhou], Liang, P.[Peng],
SAKD: Sparse attention knowledge distillation,
IVC(146), 2024, pp. 105020.
Elsevier DOI 2405
Knowledge distillation, Attention mechanisms, Sparse attention mechanisms BibRef

Li, C.[Cong], Cheng, G.[Gong], Han, J.W.[Jun-Wei],
Boosting Knowledge Distillation via Intra-Class Logit Distribution Smoothing,
CirSysVideo(34), No. 6, June 2024, pp. 4190-4201.
IEEE DOI Code:
WWW Link. 2406
Training, Smoothing methods, Analytical models, Standards, Data models, Correlation, Boosting, Knowledge distillation, image classification BibRef

Zhang, S.[Sha], Deng, J.J.[Jia-Jun], Bai, L.[Lei], Li, H.Q.[Hou-Qiang], Ouyang, W.L.[Wan-Li], Zhang, Y.Y.[Yan-Yong],
HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation,
IJCV(132), No. 7, July 2024, pp. 2585-2599.
Springer DOI 2406
BibRef

Wang, Y.[Yang], Qian, B.[Biao], Liu, H.P.[Hai-Peng], Rui, Y.[Yong], Wang, M.[Meng],
Unpacking the Gap Box Against Data-Free Knowledge Distillation,
PAMI(46), No. 9, September 2024, pp. 6280-6291.
IEEE DOI 2408
Training, Art, Data models, Analytical models, Knowledge engineering, Generators, Data-free knowledge distillation, derived gap, inherent gap BibRef

Li, S.Y.[Shu-Yi], Hu, H.C.[Hong-Chao], Huo, S.[Shumin], Liang, H.[Hao],
Clean, performance-robust, and performance-sensitive historical information based adversarial self-distillation,
IET-CV(18), No. 5, 2024, pp. 591-612.
DOI Link 2408
architecture, convolutional neural nets, image classification, image sampling, image sequences BibRef

Zhang, W.W.[Wei-Wei], Guo, Y.F.[Yu-Feng], Wang, J.[Junhuang], Zhu, J.Q.[Jian-Qing], Zeng, H.Q.[Huan-Qiang],
Collaborative Knowledge Distillation,
CirSysVideo(34), No. 8, August 2024, pp. 7601-7613.
IEEE DOI 2408
Knowledge engineering, Training, Feature extraction, Uncertainty, Correlation, Collaboration, Circuits and systems, deep learning BibRef

Li, X.[Xiufang], Jiao, L.C.[Li-Cheng], Sun, Q.[Qigong], Liu, F.[Fang], Liu, X.[Xu], Li, L.L.[Ling-Ling], Chen, P.[Puhua], Yang, S.Y.[Shu-Yuan],
A Category-Aware Curriculum Learning for Data-Free Knowledge Distillation,
MultMed(26), 2024, pp. 9603-9618.
IEEE DOI 2410
Generators, Training, Knowledge engineering, Data models, Training data, Task analysis, Monitoring, Data generation, image classification BibRef

Wu, J.[Jie], Fang, L.Y.[Le-Yuan], Yue, J.[Jun],
TAKD: Target-Aware Knowledge Distillation for Remote Sensing Scene Classification,
CirSysVideo(34), No. 9, September 2024, pp. 8188-8200.
IEEE DOI 2410
Scene classification, Feature extraction, Computational modeling, Training, Heating systems, Semantics, lightweight model BibRef

Li, C.[Chuan], Teng, X.[Xiao], Ding, Y.[Yan], Lan, L.[Long],
Instance-Level Scaling and Dynamic Margin-Alignment Knowledge Distillation for Remote Sensing Image Scene Classification,
RS(16), No. 20, 2024, pp. 3853.
DOI Link 2411
BibRef

Akmel, F.[Feidu], Meng, F.M.[Fan-Man], Liu, M.Y.[Ming-Yu], Zhang, R.T.[Run-Tong], Teka, A.[Asebe], Lemuye, E.[Elias],
Few-shot class incremental learning via prompt transfer and knowledge distillation,
IVC(151), 2024, pp. 105251.
Elsevier DOI 2411
Knowledge distillation, Prompting, Few-shot learning, Incremental learning BibRef

Pei, S.[Shaotong], Zhang, H.[Hangyuan], Zhu, Y.X.[Yu-Xin], Hu, C.[Chenlong],
Lightweight transmission line defect identification method based on OFN network and distillation method,
IET-IPR(18), No. 12, 2024, pp. 3518-3529.
DOI Link 2411
convolutional neural nets, image recognition, insulators, object detection BibRef

Xu, K.[Kai], Wang, L.C.[Li-Chun], Li, S.[Shuang], Xin, J.[Jianjia], Yin, B.C.[Bao-Cai],
Self-Distillation With Augmentation in Feature Space,
CirSysVideo(34), No. 10, October 2024, pp. 9578-9590.
IEEE DOI 2411
Self-distillation does not require a pre-trained teacher network.
Feature extraction, Task analysis, Training, Knowledge engineering, Data augmentation, Extrapolation, Predictive models, generalization performance BibRef

Wang, G.T.[Guang-Tai], Huang, J.T.[Jin-Tao], Lai, Y.Q.[Yi-Qiang], Vong, C.M.[Chi-Man],
Dealing with partial labels by knowledge distillation,
PR(158), 2025, pp. 110965.
Elsevier DOI 2411
Partial label learning, Knowledge distillation, Over-confidence BibRef

Salamah, A.H.[Ahmed H.], Hamidi, S.M.[Shayan Mohajer], Yang, E.H.[En-Hui],
A coded knowledge distillation framework for image classification based on adaptive JPEG encoding,
PR(158), 2025, pp. 110966.
Elsevier DOI 2411
BibRef

Li, M.S.[Ming-Sheng], Zhang, L.[Lin], Zhu, M.Z.[Ming-Zhen], Huang, Z.L.[Zi-Long], Yu, G.[Gang], Fan, J.Y.[Jia-Yuan], Chen, T.[Tao],
Lightweight Model Pre-Training via Language Guided Knowledge Distillation,
MultMed(26), 2024, pp. 10720-10730.
IEEE DOI 2411
Visualization, Semantics, Task analysis, Feature extraction, Training, Computational modeling, Image segmentation, visual semantics banks BibRef


Jang, J.Y.[Ji-Yong], Lee, H.[Hayeon], Lee, Y.[Younkwan],
Disentangled Knowledge Distillation for Unified Multi-Class Anomaly Detection,
ICIP24(312-318)
IEEE DOI 2411
Training, Location awareness, Adaptation models, Image transformation, Scalability, Inspection, Benchmark testing, VisA BibRef

Dong, J.H.[Jun-Hao], Koniusz, P.[Piotr], Chen, J.X.[Jun-Xi], Wang, Z.J.[Z. Jane], Ong, Y.S.[Yew-Soon],
Robust Distillation via Untargeted and Targeted Intermediate Adversarial Samples,
CVPR24(28432-28442)
IEEE DOI 2410
Degradation, Adaptation models, Upper bound, Robustness, Probability distribution, Distance measurement, Adversarial learning BibRef

Wei, S.[Shicai], Luo, C.[Chunbo], Luo, Y.[Yang],
Scale Decoupled Distillation,
CVPR24(15975-15983)
IEEE DOI Code:
WWW Link. 2410
Correlation, Codes, Semantics, Pipelines, Benchmark testing, Knowlegde Distillation BibRef

Huo, F.[Fushuo], Xu, W.C.[Wen-Chao], Guo, J.[Jingcai], Wang, H.Z.[Hao-Zhao], Guo, S.[Song],
C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation,
CVPR24(16006-16015)
IEEE DOI 2410
Measurement, Knowledge transfer BibRef

Miles, R.[Roy], Elezi, I.[Ismail], Deng, J.K.[Jian-Kang],
V_kD: Improving Knowledge Distillation Using Orthogonal Projections,
CVPR24(15720-15730)
IEEE DOI Code:
WWW Link. 2410
Training, Deep learning, Image synthesis, Object detection, Transformer cores, Transformers, Knowledge distillation, Explainable AI BibRef

Sun, S.Q.[Shang-Quan], Ren, W.Q.[Wen-Qi], Li, J.Z.[Jing-Zhi], Wang, R.[Rui], Cao, X.C.[Xiao-Chun],
Logit Standardization in Knowledge Distillation,
CVPR24(15731-15740)
IEEE DOI 2410
Temperature distribution, Codes, Pipelines, Toy manufacturing industry, Entropy, Image Classification BibRef

Zhang, Y.[Yuan], Huang, T.[Tao], Liu, J.[JiaMing], Jiang, T.[Tao], Cheng, K.[Kuan], Zhang, S.H.[Shang-Hang],
FreeKD: Knowledge Distillation via Semantic Frequency Prompt,
CVPR24(15931-15940)
IEEE DOI 2410
Location awareness, Degradation, Sensitivity, Frequency-domain analysis, Semantics, Pipelines, Noise, SAM BibRef

Li, M.C.[Ming-Cheng], Yang, D.K.[Ding-Kang], Zhao, X.[Xiao], Wang, S.B.[Shuai-Bing], Wang, Y.[Yan], Yang, K.[Kun], Sun, M.Y.[Ming-Yang], Kou, D.L.[Dong-Liang], Qian, Z.Y.[Zi-Yun], Zhang, L.H.[Li-Hua],
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities,
CVPR24(12458-12468)
IEEE DOI 2410
Sentiment analysis, Correlation, Semantics, Refining, Prototypes, Contrastive learning, Multimodal sentiment analysis, Incomplete multimodal learning BibRef

Wang, Y.Z.[Yu-Zheng], Yang, D.[Dingkang], Chen, Z.Y.[Zhao-Yu], Liu, Y.[Yang], Liu, S.[Siao], Zhang, W.Q.[Wen-Qiang], Zhang, L.H.[Li-Hua], Qi, L.[Lizhe],
De-Confounded Data-Free Knowledge Distillation for Handling Distribution Shifts,
CVPR24(12615-12625)
IEEE DOI 2410
Accuracy, Training data, Cause effect analysis, Data models, Data-Free Knowledge Distillation, Causal Inference BibRef

Daultani, D.[Dinesh], Tanaka, M.[Masayuki], Okutomi, M.[Masatoshi], Endo, K.[Kazuki],
Diffusion-Based Adaptation for Classification of Unknown Degraded Images,
NTIRE24(5982-5991)
IEEE DOI 2410
Degradation, Training, Adaptation models, Transforms, Performance gain, Diffusion models, Transformers, ML Robustness, Knowledge Distillation BibRef

Yin, T.W.[Tian-Wei], Gharbi, M.[Michaël], Zhang, R.[Richard], Shechtman, E.[Eli], Durand, F.[Frédo], Freeman, W.T.[William T.], Park, T.[Taesung],
One-Step Diffusion with Distribution Matching Distillation,
CVPR24(6613-6623)
IEEE DOI 2410
Image quality, Computational modeling, Transforms, Diffusion models, Generators, Hardware, image generation, generative model BibRef

Kim, S.[Sanghwan], Tang, H.[Hao], Yu, F.[Fisher],
Distilling ODE Solvers of Diffusion Models into Smaller Steps,
CVPR24(9410-9419)
IEEE DOI 2410
Training, Image quality, Visualization, Limiting, Ordinary differential equations, Diffusion models, Knowledge distillation BibRef

Han, K.[Keonhee], Muhle, D.[Dominik], Wimbauer, F.[Felix], Cremers, D.[Daniel],
Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation,
CVPR24(9837-9847)
IEEE DOI 2410
Geometry, Solid modeling, Fuses, Computational modeling, Estimation, Single-View-Reconstruction, Depth Estimation BibRef

Ma, J.[Jing], Xiang, X.[Xiang], Wang, K.[Ke], Wu, Y.C.[Yu-Chuan], Li, Y.B.[Yong-Bin],
Aligning Logits Generatively for Principled Black-Box Knowledge Distillation,
CVPR24(23148-23157)
IEEE DOI 2410
Computational modeling, Closed box, Generative adversarial networks, Generators, Data models, Black-Box Knowledge Distillation BibRef

Tran, M.T.[Minh-Tuan], Le, T.[Trung], Le, X.M.[Xuan-May], Harandi, M.[Mehrtash], Tran, Q.H.[Quan Hung], Phung, D.[Dinh],
NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation,
CVPR24(23860-23869)
IEEE DOI Code:
WWW Link. 2410
Training, Knowledge engineering, Codes, Noise, Neural networks, knowledge transfer, data-free, text embedding BibRef

Jung, J.W.[Jae-Won], Jang, H.[Hongsun], Song, J.[Jaeyong], Lee, J.H.[Jin-Ho],
PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor,
CVPR24(24482-24491)
IEEE DOI Code:
WWW Link. 2410
Accuracy, Codes, Computer network reliability, Computational modeling, Neural networks, Robustness, Deep learning BibRef

Yin, S.L.[Sheng-Lin], Xiao, Z.[Zhen], Song, M.X.[Ming-Xuan], Long, J.[Jieyi],
Adversarial Distillation Based on Slack Matching and Attribution Region Alignment,
CVPR24(24605-24614)
IEEE DOI 2410
Training, Computational modeling, Impedance matching, Face recognition, Predictive models, Robustness BibRef

Liu, H.[He], Wang, Y.K.[Yi-Kai], Liu, H.P.[Hua-Ping], Sun, F.C.[Fu-Chun], Yao, A.[Anbang],
Small Scale Data-Free Knowledge Distillation,
CVPR24(6008-6016)
IEEE DOI Code:
WWW Link. 2410
Training, Knowledge engineering, Semantic segmentation, Training data, Reinforcement learning, Benchmark testing, Data-free BibRef

Ni, J.[Jianyuan], Tang, H.[Hao], Shang, Y.Z.[Yu-Zhang], Duan, B.[Bin], Yan, Y.[Yan],
Adaptive Cross-Architecture Mutual Knowledge Distillation,
FG24(1-5)
IEEE DOI 2408
Knowledge engineering, Training, Adaptation models, Accuracy, Face recognition, Gesture recognition, Complex networks BibRef

Pham, C.[Cuong], Nguyen, V.A.[Van-Anh], Le, T.[Trung], Phung, D.[Dinh], Carneiro, G.[Gustavo], Do, T.T.[Thanh-Toan],
Frequency Attention for Knowledge Distillation,
WACV24(2266-2275)
IEEE DOI 2404
Knowledge engineering, Frequency-domain analysis, Computational modeling, Object detection, Computer architecture, Embedded sensing / real-time techniques BibRef

Lan, Q.Z.[Qi-Zhen], Tian, Q.[Qing],
Gradient-Guided Knowledge Distillation for Object Detectors,
WACV24(423-432)
IEEE DOI Code:
WWW Link. 2404
Deep learning, Codes, Computational modeling, Object detection, Detectors, Feature extraction, Algorithms BibRef

Reddy, N.[Nikhil], Baktashmotlagh, M.[Mahsa], Arora, C.[Chetan],
Domain-Aware Knowledge Distillation for Continual Model Generalization,
WACV24(685-696)
IEEE DOI 2404
Adaptation models, Computational modeling, Prototypes, Artificial neural networks, Predictive models, Synthetic data, Autonomous Driving BibRef

Huang, J.Q.[Jun-Qiang], Guo, Z.C.[Zi-Chao],
Pixel-Wise Contrastive Distillation,
ICCV23(16313-16323)
IEEE DOI 2401
BibRef

Lebailly, T.[Tim], Stegmüller, T.[Thomas], Bozorgtabar, B.[Behzad], Thiran, J.P.[Jean-Philippe], Tuytelaars, T.[Tinne],
Adaptive Similarity Bootstrapping for Self-Distillation based Representation Learning,
ICCV23(16459-16468)
IEEE DOI Code:
WWW Link. 2401
BibRef

Yang, L.[Longrong], Zhou, X.[Xianpan], Li, X.[Xuewei], Qiao, L.[Liang], Li, Z.[Zheyang], Yang, Z.W.[Zi-Wei], Wang, G.[Gaoang], Li, X.[Xi],
Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection,
ICCV23(17129-17138)
IEEE DOI Code:
WWW Link. 2401
BibRef

Liu, Z.W.[Zi-Wei], Wang, Y.T.[Yong-Tao], Chu, X.J.[Xiao-Jie], Dong, N.[Nan], Qi, S.X.[Sheng-Xiang], Ling, H.B.[Hai-Bin],
A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation,
REDLCV23(1121-1130)
IEEE DOI 2401
BibRef

Lao, S.S.[Shan-Shan], Song, G.[Guanglu], Liu, B.[Boxiao], Liu, Y.[Yu], Yang, Y.[Yujiu],
UniKD: Universal Knowledge Distillation for Mimicking Homogeneous or Heterogeneous Object Detectors,
ICCV23(6339-6349)
IEEE DOI 2401
BibRef

Sun, X.M.[Xi-Meng], Zhang, P.C.[Peng-Chuan], Zhang, P.Z.[Pei-Zhao], Shah, H.[Hardik], Saenko, K.[Kate], Xia, X.[Xide],
DIME-FM: DIstilling Multimodal and Efficient Foundation Models,
ICCV23(15475-15487)
IEEE DOI 2401
BibRef

Radwan, A.[Ahmed], Shehata, M.S.[Mohamed S.],
Distilling Part-whole Hierarchical Knowledge from a Huge Pretrained Class Agnostic Segmentation Framework,
VIPriors23(238-246)
IEEE DOI Code:
WWW Link. 2401
BibRef

Tang, J.L.[Jia-Liang], Chen, S.[Shuo], Niu, G.[Gang], Sugiyama, M.[Masashi], Gong, C.[Chen],
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images,
ICCV23(17424-17434)
IEEE DOI 2401
BibRef

Li, L.[Lujun], Dong, P.[Peijie], Wei, Z.[Zimian], Yang, Y.[Ya],
Automated Knowledge Distillation via Monte Carlo Tree Search,
ICCV23(17367-17378)
IEEE DOI Code:
WWW Link. 2401
BibRef

Choi, J.Y.[Jun-Yong], Cho, H.[Hyeon], Cheung, S.[Seokhwa], Hwang, W.J.[Won-Jun],
ORC: Network Group-based Knowledge Distillation using Online Role Change,
ICCV23(17335-17344)
IEEE DOI 2401
BibRef

Yang, P.H.[Peng-Hui], Xie, M.K.[Ming-Kun], Zong, C.C.[Chen-Chen], Feng, L.[Lei], Niu, G.[Gang], Sugiyama, M.[Masashi], Huang, S.J.[Sheng-Jun],
Multi-Label Knowledge Distillation,
ICCV23(17225-17234)
IEEE DOI Code:
WWW Link. 2401
BibRef

Yang, Z.D.[Zhen-Dong], Zeng, A.[Ailing], Li, Z.[Zhe], Zhang, T.[Tianke], Yuan, C.[Chun], Li, Y.[Yu],
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels,
ICCV23(17139-17148)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gu, P.Y.[Pei-Yan], Zhang, C.[Chuyu], Xu, R.J.[Rui-Jie], He, X.M.[Xu-Ming],
Class-relation Knowledge Distillation for Novel Class Discovery,
ICCV23(16428-16437)
IEEE DOI 2401
BibRef

Gu, Z.H.[Zhi-Hao], Liu, L.[Liang], Chen, X.[Xu], Yi, R.[Ran], Zhang, J.N.[Jiang-Ning], Wang, Y.[Yabiao], Wang, C.J.[Cheng-Jie], Shu, A.[Annan], Jiang, G.[Guannan], Ma, L.Z.[Li-Zhuang],
Remembering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection,
ICCV23(16355-16363)
IEEE DOI 2401
BibRef

Dong, J.F.[Jian-Feng], Zhang, M.[Minsong], Zhang, Z.[Zheng], Chen, X.[Xianke], Liu, D.[Daizong], Qu, X.Y.[Xiao-Ye], Wang, X.[Xun], Liu, B.[Baolong],
Dual Learning with Dynamic Knowledge Distillation for Partially Relevant Video Retrieval,
ICCV23(11268-11278)
IEEE DOI 2401
BibRef

Zhao, B.[Borui], Cui, Q.[Quan], Song, R.J.[Ren-Jie], Liang, J.J.[Jia-Jun],
DOT: A Distillation-Oriented Trainer,
ICCV23(6166-6175)
IEEE DOI Code:
WWW Link. 2401
BibRef

Zhao, B.[Borui], Song, R.J.[Ren-Jie], Liang, J.J.[Jia-Jun],
Cumulative Spatial Knowledge Distillation for Vision Transformers,
ICCV23(6123-6132)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gao, T.W.[Ting-Wei], Long, R.[Rujiao],
Accumulation Knowledge Distillation for Conditional GAN Compression,
REDLCV23(1294-1303)
IEEE DOI 2401
BibRef

Bender, S.[Sidney], Anders, C.J.[Christopher J.], Chormai, P.[Pattarawat], Marxfeld, H.[Heike], Herrmann, J.[Jan], Montavon, G.[Grégoire],
Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation,
CVAMD23(2599-2607)
IEEE DOI 2401
BibRef

Wang, Q.[Qi], Liu, L.[Lu], Yu, W.X.[Wen-Xin], Chen, S.Y.[Shi-Yu], Gong, J.[Jun], Chen, P.[Peng],
BCKD: Block-Correlation Knowledge Distillation,
ICIP23(3225-3229)
IEEE DOI 2312
BibRef

Sasaya, T.[Tenta], Watanabe, T.[Takashi], Ida, T.[Takashi], Ono, T.[Toshiyuki],
Simple Self-Distillation Learning for Noisy Image Classification,
ICIP23(795-799)
IEEE DOI 2312
BibRef

Zhang, Y.[Yi], Gao, Y.K.[Ying-Ke], Zhang, H.N.[Hao-Nan], Lei, X.Y.[Xin-Yu], Liu, L.J.[Long-Jun],
Cross-Layer Patch Alignment and Intra-and-Inter Patch Relations for Knowledge Distillation,
ICIP23(535-539)
IEEE DOI 2312
BibRef

Wang, C.C.[Chien-Chih], Xu, S.Y.[Shao-Yuan], Fu, J.M.[Jin-Miao], Liu, Y.[Yang], Wang, B.[Bryan],
KD-Fixmatch: Knowledge Distillation Siamese Neural Networks,
ICIP23(341-345)
IEEE DOI 2312
BibRef

Jin, Y.[Ying], Wang, J.Q.[Jia-Qi], Lin, D.[Dahua],
Multi-Level Logit Distillation,
CVPR23(24276-24285)
IEEE DOI 2309
BibRef

Zhmoginov, A.[Andrey], Sandler, M.[Mark], Miller, N.[Nolan], Kristiansen, G.[Gus], Vladymyrov, M.[Max],
Decentralized Learning with Multi-Headed Distillation,
CVPR23(8053-8063)
IEEE DOI 2309
BibRef

Tastan, N.[Nurbek], Nandakumar, K.[Karthik],
CaPriDe Learning: Confidential and Private Decentralized Learning Based on Encryption-Friendly Distillation Loss,
CVPR23(8084-8092)
IEEE DOI 2309
BibRef

Liu, G.W.[Gao-Wen], Shang, Y.Z.[Yu-Zhang], Yao, Y.G.[Yu-Guang], Kompella, R.[Ramana],
Network Specialization via Feature-level Knowledge Distillation,
VOCVALC23(3368-3375)
IEEE DOI 2309
BibRef

Zhang, T.[Tianli], Xue, M.Q.[Meng-Qi], Zhang, J.T.[Jiang-Tao], Zhang, H.F.[Hao-Fei], Wang, Y.[Yu], Cheng, L.[Lechao], Song, J.[Jie], Song, M.L.[Ming-Li],
Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation,
CVPR23(20176-20185)
IEEE DOI 2309
BibRef

Li, J.Z.[Jing-Zhi], Guo, Z.D.[Zi-Dong], Li, H.[Hui], Han, S.[Seungju], Baek, J.W.[Ji-Won], Yang, M.[Min], Yang, R.[Ran], Suh, S.[Sungjoo],
Rethinking Feature-based Knowledge Distillation for Face Recognition,
CVPR23(20156-20165)
IEEE DOI 2309
BibRef

Lin, H.[Han], Han, G.X.[Guang-Xing], Ma, J.W.[Jia-Wei], Huang, S.Y.[Shi-Yuan], Lin, X.D.[Xu-Dong], Chang, S.F.[Shih-Fu],
Supervised Masked Knowledge Distillation for Few-Shot Transformers,
CVPR23(19649-19659)
IEEE DOI 2309
BibRef

Du, X.[Xuanyi], Wan, W.T.[Wei-Tao], Sun, C.[Chong], Li, C.[Chen],
Weak-shot Object Detection through Mutual Knowledge Transfer,
CVPR23(19671-19680)
IEEE DOI 2309
BibRef

Shen, Y.Q.[Yan-Qing], Zhou, S.P.[San-Ping], Fu, J.W.[Jing-Wen], Wang, R.T.[Ruo-Tong], Chen, S.T.[Shi-Tao], Zheng, N.N.[Nan-Ning],
StructVPR: Distill Structural Knowledge with Weighting Samples for Visual Place Recognition,
CVPR23(11217-11226)
IEEE DOI 2309
BibRef

Xu, Q.[Qi], Li, Y.X.[Ya-Xin], Shen, J.[Jiangrong], Liu, J.K.[Jian K.], Tang, H.[Huajin], Pan, G.[Gang],
Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation,
CVPR23(7886-7895)
IEEE DOI 2309
BibRef

Patel, G.[Gaurav], Mopuri, K.R.[Konda Reddy], Qiu, Q.[Qiang],
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation,
CVPR23(7786-7794)
IEEE DOI 2309
BibRef

Chen, Y.Z.[Yi-Zhuo], Liang, K.[Kaizhao], Zeng, Z.[Zhe], Yao, S.[Shuochao], Shao, H.[Huajie],
A Unified Knowledge Distillation Framework for Deep Directed Graphical Models,
CVPR23(7795-7804)
IEEE DOI 2309
BibRef

Cui, K.W.[Kai-Wen], Yu, Y.C.[Ying-Chen], Zhan, F.N.[Fang-Neng], Liao, S.C.[Sheng-Cai], Lu, S.J.[Shi-Jian], Xing, E.[Eric],
KD-DLGAN: Data Limited Image Generation via Knowledge Distillation,
CVPR23(3872-3882)
IEEE DOI 2309
BibRef

Xu, J.Q.[Jian-Qing], Li, S.[Shen], Deng, A.[Ailin], Xiong, M.[Miao], Wu, J.Y.[Jia-Ying], Wu, J.X.[Jia-Xiang], Ding, S.H.[Shou-Hong], Hooi, B.[Bryan],
Probabilistic Knowledge Distillation of Face Ensembles,
CVPR23(3489-3498)
IEEE DOI 2309
BibRef

Guo, Z.[Ziyao], Yan, H.[Haonan], Li, H.[Hui], Lin, X.D.[Xiao-Dong],
Class Attention Transfer Based Knowledge Distillation,
CVPR23(11868-11877)
IEEE DOI 2309
BibRef

Song, K.[Kaiyou], Zhang, S.[Shan], Luo, Z.[Zimeng], Wang, T.[Tong], Xie, J.[Jin],
Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning,
ICCV23(16053-16062)
IEEE DOI 2401
BibRef

Song, K.[Kaiyou], Xie, J.[Jin], Zhang, S.[Shan], Luo, Z.[Zimeng],
Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning,
CVPR23(11848-11857)
IEEE DOI 2309
BibRef

Yu, S.K.[Shi-Kang], Chen, J.C.[Jia-Chen], Han, H.[Hu], Jiang, S.Q.[Shu-Qiang],
Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint,
CVPR23(24266-24275)
IEEE DOI 2309
BibRef

Shrivastava, A.[Aman], Qi, Y.J.[Yan-Jun], Ordonez, V.[Vicente],
Estimating and Maximizing Mutual Information for Knowledge Distillation,
FaDE-TCV23(48-57)
IEEE DOI 2309
BibRef

Gao, L.[Lei], Gao, H.[Hui],
Feature Decoupled Knowledge Distillation via Spatial Pyramid Pooling,
ACCV22(VI:732-745).
Springer DOI 2307
BibRef

Lv, Y.[Yuan], Xu, Y.J.[Ya-Jing], Wang, S.[Shusen], Ma, Y.J.[Ying-Jian], Wang, D.[Dengke],
Continuous Self-Study: Scene Graph Generation with Self-Knowledge Distillation and Spatial Augmentation,
ACCV22(V:297-315).
Springer DOI 2307
BibRef

Liu, Y.F.[Yu-Fan], Cao, J.J.[Jia-Jiong], Li, B.[Bing], Hu, W.M.[Wei-Ming], Ding, J.T.[Jing-Ting], Li, L.[Liang],
Cross-architecture Knowledge Distillation,
ACCV22(V:179-195).
Springer DOI 2307
BibRef

Lee, H.[Hojung], Lee, J.S.[Jong-Seok],
Rethinking Online Knowledge Distillation with Multi-Exits,
ACCV22(VI:408-424).
Springer DOI 2307
BibRef

Wang, L.Y.[Li-Yun], Rhodes, A.[Anthony], Feng, W.C.[Wu-Chi],
Class Specialized Knowledge Distillation,
ACCV22(II:391-408).
Springer DOI 2307
BibRef

Li, W.[Wei], Shao, S.T.[Shi-Tong], Liu, W.Y.[Wei-Yan], Qiu, Z.M.[Zi-Ming], Zhu, Z.H.[Zhi-Hao], Huan, W.[Wei],
What Role Does Data Augmentation Play in Knowledge Distillation?,
ACCV22(II:507-525).
Springer DOI 2307
BibRef

Feng, P.[Ping], Zhang, H.[Hanyun], Sun, Y.Y.[Ying-Ying], Tang, Z.J.[Zhen-Jun],
Lightweight Image Hashing Based on Knowledge Distillation and Optimal Transport for Face Retrieval,
MMMod23(II: 423-434).
Springer DOI 2304
BibRef

Ambekar, S.[Sameer], Tafuro, M.[Matteo], Ankit, A.[Ankit], van der Mast, D.[Diego], Alence, M.[Mark], Athanasiadis, C.[Christos],
SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks Using cGANs,
VIPriors22(679-693).
Springer DOI 2304
BibRef

Lebailly, T.[Tim], Tuytelaars, T.[Tinne],
Global-Local Self-Distillation for Visual Representation Learning,
WACV23(1441-1450)
IEEE DOI 2302
Training, Representation learning, Visualization, Codes, Coherence, Task analysis, Algorithms: Machine learning architectures, and algorithms (including transfer) BibRef

Choi, H.J.[Hong-Jun], Jeon, E.S.[Eun Som], Shukla, A.[Ankita], Turaga, P.[Pavan],
Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study,
WACV23(2318-2327)
IEEE DOI 2302
Knowledge engineering, Training, Interpolation, Codes, Transfer learning, Robustness, adversarial attack and defense methods BibRef

Jacob, G.M.[Geethu Miriam], Agarwal, V.[Vishal], Stenger, B.[Björn],
Online Knowledge Distillation for Multi-task Learning,
WACV23(2358-2367)
IEEE DOI 2302
Training, Knowledge engineering, Semantic segmentation, Computational modeling, Estimation, Benchmark testing BibRef

Chen, W.C.[Wei-Chi], Chu, W.T.[Wei-Ta],
SSSD: Self-Supervised Self Distillation,
WACV23(2769-2776)
IEEE DOI 2302
Visualization, Computational modeling, Clustering algorithms, Self-supervised learning, Feature extraction, Data models, visual reasoning BibRef

Mu, M.[Michael], Bhattacharjee, S.D.[Sreyasee Das], Yuan, J.S.[Jun-Song],
Self-Supervised Distilled Learning for Multi-modal Misinformation Identification,
WACV23(2818-2827)
IEEE DOI 2302
Representation learning, Training data, Predictive models, Streaming media, Semisupervised learning, Multitasking, Vision + language and/or other modalities BibRef

Jang, J.[Jiho], Kim, S.[Seonhoon], Yoo, K.[Kiyoon], Kong, C.[Chaerin], Kim, J.[Jangho], Kwak, N.[Nojun],
Self-Distilled Self-supervised Representation Learning,
WACV23(2828-2838)
IEEE DOI 2302
Representation learning, Protocols, Codes, Statistical analysis, Self-supervised learning, Transformers, and algorithms (including transfer) BibRef

Nguyen-Duc, T.[Thanh], Le, T.[Trung], Zhao, H.[He], Cai, J.F.[Jian-Fei], Phung, D.[Dinh],
Adversarial local distribution regularization for knowledge distillation,
WACV23(4670-4679)
IEEE DOI 2302
Perturbation methods, Algorithms: Adversarial learning, adversarial attack and defense methods BibRef

Wu, Y.[Yong], Chanda, S.[Shekhor], Hosseinzadeh, M.[Mehrdad], Liu, Z.[Zhi], Wang, Y.[Yang],
Few-Shot Learning of Compact Models via Task-Specific Meta Distillation,
WACV23(6254-6263)
IEEE DOI 2302
Training, Adaptation models, Computational modeling, Benchmark testing, Servers, visual reasoning BibRef

Hosseinzadeh, M.[Mehrdad], Wang, Y.[Yang],
Few-Shot Personality-Specific Image Captioning via Meta-Learning,
CRV23(320-327)
IEEE DOI 2406
Metalearning, Adaptation models, Protocols, Benchmark testing, Data processing, Standards, Robots, Image Captioning, few-shot learning BibRef

Iwata, S.[Sachi], Minami, S.[Soma], Hirakawa, T.[Tsubasa], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Refining Design Spaces in Knowledge Distillation for Deep Collaborative Learning,
ICPR22(2371-2377)
IEEE DOI 2212
Analytical models, Federated learning, Refining, Task analysis, Knowledge transfer BibRef

Wang, C.F.[Chao-Fei], Zhang, S.W.[Shao-Wei], Song, S.[Shiji], Huang, G.[Gao],
Learn From the Past: Experience Ensemble Knowledge Distillation,
ICPR22(4736-4743)
IEEE DOI 2212
Knowledge engineering, Training, Adaptation models, Costs, Standards, Knowledge transfer BibRef

Tzelepi, M.[Maria], Symeonidis, C.[Charalampos], Nikolaidis, N.[Nikos], Tefas, A.[Anastasios],
Multilayer Online Self-Acquired Knowledge Distillation,
ICPR22(4822-4828)
IEEE DOI 2212
Training, Computational modeling, Pipelines, Estimation, Nonhomogeneous media, Probability distribution BibRef

Xu, Y.F.[Yi-Fan], Shamsolmoali, P.[Pourya], Granger, E.[Eric], Nicodeme, C.[Claire], Gardes, L.[Laurent], Yang, J.[Jie],
TransVLAD: Multi-Scale Attention-Based Global Descriptors for Visual Geo-Localization,
WACV23(2839-2848)
IEEE DOI 2302
Visualization, Codes, Computational modeling, Image retrieval, Self-supervised learning, Transformers, and un-supervised learning) BibRef

Xu, Y.F.[Yi-Fan], Shamsolmoali, P.[Pourya], Yang, J.[Jie],
Weak-supervised Visual Geo-localization via Attention-based Knowledge Distillation,
ICPR22(1815-1821)
IEEE DOI 2212
Knowledge engineering, Training, Visualization, Image matching, Image retrieval, Lighting, Benchmark testing BibRef

Baek, K.[Kyungjune], Lee, S.[Seungho], Shim, H.J.[Hyun-Jung],
Learning from Better Supervision: Self-distillation for Learning with Noisy Labels,
ICPR22(1829-1835)
IEEE DOI 2212
Training, Deep learning, Filtering, Neural networks, Predictive models, Data collection, Benchmark testing BibRef

Chen, D.[Dingyao], Tan, H.[Huibin], Lan, L.[Long], Zhang, X.[Xiang], Liang, T.Y.[Tian-Yi], Luo, Z.G.[Zhi-Gang],
Frustratingly Easy Knowledge Distillation via Attentive Similarity Matching,
ICPR22(2357-2363)
IEEE DOI 2212
Knowledge engineering, Dimensionality reduction, Cross layer design, Semantics, Mobile handsets, Pattern recognition BibRef

Shen, L.[Lulan], Amara, I.[Ibtihel], Li, R.F.[Ruo-Feng], Meyer, B.[Brett], Gross, W.[Warren], Clark, J.J.[James J.],
Fast Fine-Tuning Using Curriculum Domain Adaptation,
CRV23(296-303)
IEEE DOI 2406
Training, Performance evaluation, Adaptation models, Pipelines, Computer architecture, Artificial neural networks, Task analysis, fine-tuning BibRef

Amara, I.[Ibtihel], Ziaeefard, M.[Maryam], Meyer, B.H.[Brett H.], Gross, W.[Warren], Clark, J.J.[James J.],
CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation,
ICPR22(1901-1907)
IEEE DOI 2212
Knowledge engineering, Performance evaluation, Bridges, Art, Education BibRef

Yang, Z.[Zhou], Dong, W.S.[Wei-Sheng], Li, X.[Xin], Wu, J.J.[Jin-Jian], Li, L.[Leida], Shi, G.M.[Guang-Ming],
Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition,
ECCV22(XXIV:552-569).
Springer DOI 2211
BibRef

Tang, S.[Sanli], Zhang, Z.Y.[Zhong-Yu], Cheng, Z.Z.[Zhan-Zhan], Lu, J.[Jing], Xu, Y.L.[Yun-Lu], Niu, Y.[Yi], He, F.[Fan],
Distilling Object Detectors with Global Knowledge,
ECCV22(IX:422-438).
Springer DOI 2211
BibRef

Shen, Z.Q.[Zhi-Qiang], Xing, E.[Eric],
A Fast Knowledge Distillation Framework for Visual Recognition,
ECCV22(XXIV:673-690).
Springer DOI 2211
BibRef

Yang, C.G.[Chuan-Guang], An, Z.[Zhulin], Zhou, H.[Helong], Cai, L.H.[Lin-Hang], Zhi, X.[Xiang], Wu, J.W.[Ji-Wen], Xu, Y.J.[Yong-Jun], Zhang, Q.[Qian],
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition,
ECCV22(XXIV:534-551).
Springer DOI 2211
BibRef

Xu, S.[Sheng], Li, Y.J.[Yan-Jing], Zeng, B.[Bohan], Ma, T.[Teli], Zhang, B.C.[Bao-Chang], Cao, X.B.[Xian-Bin], Gao, P.[Peng], Lü, J.[Jinhu],
IDa-Det: An Information Discrepancy-Aware Distillation for 1-Bit Detectors,
ECCV22(XI:346-361).
Springer DOI 2211
BibRef

Gao, Y.T.[Yu-Ting], Zhuang, J.X.[Jia-Xin], Lin, S.H.[Shao-Hui], Cheng, H.[Hao], Sun, X.[Xing], Li, K.[Ke], Shen, C.H.[Chun-Hua],
DisCo: Remedying Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning,
ECCV22(XXVI:237-253).
Springer DOI 2211
BibRef

Liu, H.[Hao], Ye, M.[Mang],
Improving Self-supervised Lightweight Model Learning via Hard-Aware Metric Distillation,
ECCV22(XXXI:295-311).
Springer DOI 2211
BibRef

Deng, X.Q.[Xue-Qing], Sun, D.W.[Da-Wei], Newsam, S.[Shawn], Wang, P.[Peng],
DistPro: Searching a Fast Knowledge Distillation Process via Meta Optimization,
ECCV22(XXXIV:218-235).
Springer DOI 2211
BibRef

Deng, X.[Xiang], Zheng, J.[Jian], Zhang, Z.F.[Zhong-Fei],
Personalized Education: Blind Knowledge Distillation,
ECCV22(XXXIV:269-285).
Springer DOI 2211
BibRef

Liu, L.Z.[Lyu-Zhuang], Hirakawa, T.[Tsubasa], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Class-Wise FM-NMS for Knowledge Distillation of Object Detection,
ICIP22(1641-1645)
IEEE DOI 2211
Computational modeling, Object detection, Feature extraction, Computational efficiency, Object detection, Feature map non-maximum suppression BibRef

Shu, H.Q.[Hong-Qiao],
Two Distillation Perspectives Based on Tanimoto Coefficient,
ICIP22(1311-1315)
IEEE DOI 2211
Training, Length measurement, Task analysis, Knowledge transfer, Knowledge distillation, Tanimoto similarity matrix, Tanimoto coefficient BibRef

Pei, W.J.[Wen-Jie], Wu, S.[Shuang], Mei, D.[Dianwen], Chen, F.L.[Fang-Lin], Tian, J.[Jiandong], Lu, G.M.[Guang-Ming],
Few-Shot Object Detection by Knowledge Distillation Using Bag-of-Visual-Words Representations,
ECCV22(X:283-299).
Springer DOI 2211
BibRef

Li, C.X.[Chen-Xin], Lin, M.[Mingbao], Ding, Z.Y.[Zhi-Yuan], Lin, N.[Nie], Zhuang, Y.H.[Yi-Hong], Huang, Y.[Yue], Ding, X.H.[Xing-Hao], Cao, L.J.[Liu-Juan],
Knowledge Condensation Distillation,
ECCV22(XI:19-35).
Springer DOI 2211
BibRef

Yang, Z.D.[Zhen-Dong], Li, Z.[Zhe], Shao, M.Q.[Ming-Qi], Shi, D.[Dachuan], Yuan, Z.H.[Ze-Huan], Yuan, C.[Chun],
Masked Generative Distillation,
ECCV22(XI:53-69).
Springer DOI 2211
BibRef

Liang, J.J.[Jia-Jun], Li, L.[Linze], Bing, Z.D.[Zhao-Dong], Zhao, B.R.[Bo-Rui], Tang, Y.[Yao], Lin, B.[Bo], Fan, H.Q.[Hao-Qiang],
Efficient One Pass Self-distillation with Zipf's Label Smoothing,
ECCV22(XI:104-119).
Springer DOI 2211
BibRef

Park, J.[Jinhyuk], No, A.[Albert],
Prune Your Model Before Distill It,
ECCV22(XI:120-136).
Springer DOI 2211
BibRef

Qian, B.[Biao], Wang, Y.[Yang], Yin, H.Z.[Hong-Zhi], Hong, R.C.[Ri-Chang], Wang, M.[Meng],
Switchable Online Knowledge Distillation,
ECCV22(XI:449-466).
Springer DOI 2211
BibRef

Okamoto, N.[Naoki], Hirakawa, T.[Tsubasa], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Deep Ensemble Learning by Diverse Knowledge Distillation for Fine-Grained Object Classification,
ECCV22(XI:502-518).
Springer DOI 2211
BibRef

Jang, Y.K.[Young Kyun], Gu, G.[Geonmo], Ko, B.[Byungsoo], Kang, I.[Isaac], Cho, N.I.[Nam Ik],
Deep Hash Distillation for Image Retrieval,
ECCV22(XIV:354-371).
Springer DOI 2211
BibRef

Nguyen, D.[Dang], Gupta, S.I.[Sun-Il], Do, K.[Kien], Venkatesh, S.[Svetha],
Black-Box Few-Shot Knowledge Distillation,
ECCV22(XXI:196-211).
Springer DOI 2211
BibRef

Oh, Y.J.[Yu-Jin], Ye, J.C.[Jong Chul],
CXR Segmentation by AdaIN-Based Domain Adaptation and Knowledge Distillation,
ECCV22(XXI:627-643).
Springer DOI 2211
BibRef

Lee, K.[Kyungmoon], Kim, S.[Sungyeon], Kwak, S.[Suha],
Cross-domain Ensemble Distillation for Domain Generalization,
ECCV22(XXV:1-20).
Springer DOI 2211
BibRef

Li, J.C.[Jun-Cheng], Yang, H.[Hanhui], Yi, Q.[Qiaosi], Fang, F.[Faming], Gao, G.W.[Guang-Wei], Zeng, T.Y.[Tie-Yong], Zhang, G.X.[Gui-Xu],
Multiple Degradation and Reconstruction Network for Single Image Denoising via Knowledge Distillation,
NTIRE22(557-566)
IEEE DOI 2210
Degradation, Knowledge engineering, Computational modeling, Resists, Image restoration, Noise measurement BibRef

He, R.F.[Rui-Fei], Sun, S.Y.[Shu-Yang], Yang, J.H.[Ji-Han], Bai, S.[Song], Qi, X.J.[Xiao-Juan],
Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability,
CVPR22(9151-9161)
IEEE DOI 2210
Training, Codes, Computational modeling, Computer architecture, Data models, Efficient learning and inferences BibRef

Xie, P.T.[Peng-Tao], Du, X.F.[Xue-Feng],
Performance-Aware Mutual Knowledge Distillation for Improving Neural Architecture Search,
CVPR22(11912-11922)
IEEE DOI 2210
Computational modeling, Computer architecture, Optimization, Deep learning architectures and techniques BibRef

Shen, Y.Q.[Yi-Qing], Xu, L.[Liwu], Yang, Y.Z.[Yu-Zhe], Li, Y.Q.[Ya-Qian], Guo, Y.D.[Yan-Dong],
Self-Distillation from the Last Mini-Batch for Consistency Regularization,
CVPR22(11933-11942)
IEEE DOI 2210
Training, Codes, Computer network reliability, Memory management, Network architecture, Benchmark testing, Machine learning BibRef

Zhao, B.[Borui], Cui, Q.[Quan], Song, R.J.[Ren-Jie], Qiu, Y.[Yiyu], Liang, J.J.[Jia-Jun],
Decoupled Knowledge Distillation,
CVPR22(11943-11952)
IEEE DOI 2210
Training, Deep learning, Codes, Object detection, Computer architecture, Feature extraction, retrieval BibRef

Chen, X.N.[Xia-Ning], Cao, Q.[Qiong], Zhong, Y.J.[Yu-Jie], Zhang, J.[Jing], Gao, S.H.[Sheng-Hua], Tao, D.C.[Da-Cheng],
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers,
CVPR22(12042-12052)
IEEE DOI 2210
Training, Deep learning, Computational modeling, Optimization methods, Computer architecture, Transformers BibRef

Yang, C.G.[Chuan-Guang], Zhou, H.[Helong], An, Z.[Zhulin], Jiang, X.[Xue], Xu, Y.J.[Yong-Jun], Zhang, Q.[Qian],
Cross-Image Relational Knowledge Distillation for Semantic Segmentation,
CVPR22(12309-12318)
IEEE DOI 2210
Image segmentation, Correlation, Codes, Shape, Semantics, Efficient learning and inferences, grouping and shape analysis BibRef

Lin, S.[Sihao], Xie, H.W.[Hong-Wei], Wang, B.[Bing], Yu, K.C.[Kai-Cheng], Chang, X.J.[Xiao-Jun], Liang, X.D.[Xiao-Dan], Wang, G.[Gang],
Knowledge Distillation via the Target-aware Transformer,
CVPR22(10905-10914)
IEEE DOI 2210
Knowledge engineering, Deep learning, Codes, Semantics, Neural networks, Computer architecture, retrieval BibRef

Yang, Z.D.[Zhen-Dong], Li, Z.[Zhe], Jiang, X.H.[Xiao-Hu], Gong, Y.[Yuan], Yuan, Z.H.[Ze-Huan], Zhao, D.[Danpei], Yuan, C.[Chun],
Focal and Global Knowledge Distillation for Detectors,
CVPR22(4633-4642)
IEEE DOI 2210
Codes, Detectors, Object detection, Feature extraction, Image classification, Efficient learning and inferences BibRef

Ma, Z.Y.[Zong-Yang], Luo, G.[Guan], Gao, J.[Jin], Li, L.[Liang], Chen, Y.X.[Yu-Xin], Wang, S.[Shaoru], Zhang, C.X.[Cong-Xuan], Hu, W.M.[Wei-Ming],
Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation,
CVPR22(14054-14063)
IEEE DOI 2210
Training, Degradation, Vocabulary, Visualization, Semantics, Detectors, Object detection, Recognition: detection, categorization, Vision + language BibRef

Li, T.H.[Tian-Hao], Wang, L.M.[Li-Min], Wu, G.S.[Gang-Shan],
Self Supervision to Distillation for Long-Tailed Visual Recognition,
ICCV21(610-619)
IEEE DOI 2203
Training, Representation learning, Deep learning, Visualization, Image recognition, Head, Semantics, Recognition and classification BibRef

Fang, Z.Y.[Zhi-Yuan], Wang, J.F.[Jian-Feng], Hu, X.W.[Xiao-Wei], Wang, L.J.[Li-Juan], Yang, Y.Z.[Ye-Zhou], Liu, Z.C.[Zi-Cheng],
Compressing Visual-linguistic Model via Knowledge Distillation,
ICCV21(1408-1418)
IEEE DOI 2203
Knowledge engineering, Visualization, Adaptation models, Detectors, Mean square error methods, Transformers, Vision + language, Vision applications and systems BibRef

Yao, L.W.[Le-Wei], Pi, R.J.[Ren-Jie], Xu, H.[Hang], Zhang, W.[Wei], Li, Z.G.[Zhen-Guo], Zhang, T.[Tong],
G-DetKD: Towards General Distillation Framework for Object Detectors via Contrastive and Semantic-Guided Feature Imitation,
ICCV21(3571-3580)
IEEE DOI 2203
Semantics, Pipelines, Detectors, Object detection, Benchmark testing, Feature extraction, Detection and localization in 2D and 3D, BibRef

Chen, Y.X.[Yi-Xin], Chen, P.G.[Peng-Guang], Liu, S.[Shu], Wang, L.W.[Li-Wei], Jia, J.Y.[Jia-Ya],
Deep Structured Instance Graph for Distilling Object Detectors,
ICCV21(4339-4348)
IEEE DOI 2203
Codes, Image edge detection, Semantics, Detectors, Object detection, Knowledge representation, Detection and localization in 2D and 3D BibRef

Kim, Y.[Youmin], Park, J.[Jinbae], Jang, Y.[YounHo], Ali, M.[Muhammad], Oh, T.H.[Tae-Hyun], Bae, S.H.[Sung-Ho],
Distilling Global and Local Logits with Densely Connected Relations,
ICCV21(6270-6280)
IEEE DOI 2203
Image segmentation, Image recognition, Computational modeling, Semantics, Object detection, Task analysis, BibRef

Kim, K.[Kyungyul], Ji, B.[ByeongMoon], Yoon, D.[Doyoung], Hwang, S.[Sangheum],
Self-Knowledge Distillation with Progressive Refinement of Targets,
ICCV21(6547-6556)
IEEE DOI 2203
Training, Knowledge engineering, Adaptation models, Supervised learning, Neural networks, Object detection, Recognition and classification BibRef

Tejankar, A.[Ajinkya], Koohpayegani, S.A.[Soroush Abbasi], Pillai, V.[Vipin], Favaro, P.[Paolo], Pirsiavash, H.[Hamed],
ISD: Self-Supervised Learning by Iterative Similarity Distillation,
ICCV21(9589-9598)
IEEE DOI 2203
Codes, Transfer learning, Iterative methods, Task analysis, Standards, Representation learning, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Zhou, S.[Sheng], Wang, Y.C.[Yu-Cheng], Chen, D.F.[De-Fang], Chen, J.W.[Jia-Wei], Wang, X.[Xin], Wang, C.[Can], Bu, J.J.[Jia-Jun],
Distilling Holistic Knowledge with Graph Neural Networks,
ICCV21(10367-10376)
IEEE DOI 2203
Knowledge engineering, Correlation, Codes, Knowledge based systems, Benchmark testing, Feature extraction, BibRef

Shang, Y.Z.[Yu-Zhang], Duan, B.[Bin], Zong, Z.L.[Zi-Liang], Nie, L.Q.[Li-Qiang], Yan, Y.[Yan],
Lipschitz Continuity Guided Knowledge Distillation,
ICCV21(10655-10664)
IEEE DOI 2203
Knowledge engineering, Training, Image segmentation, Codes, NP-hard problem, Neural networks, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Li, Z.[Zheng], Ye, J.W.[Jing-Wen], Song, M.L.[Ming-Li], Huang, Y.[Ying], Pan, Z.[Zhigeng],
Online Knowledge Distillation for Efficient Pose Estimation,
ICCV21(11720-11730)
IEEE DOI 2203
Heating systems, Computational modeling, Pose estimation, Benchmark testing, Complexity theory, Knowledge transfer, Efficient training and inference methods BibRef

Dai, R.[Rui], Das, S.[Srijan], Bremond, F.[François],
Learning an Augmented RGB Representation with Cross-Modal Knowledge Distillation for Action Detection,
ICCV21(13033-13044)
IEEE DOI 2203
Training, Focusing, Streaming media, Real-time systems, Task analysis, Action and behavior recognition, Vision + other modalities BibRef

Xiang, S.[Sitao], Gu, Y.M.[Yu-Ming], Xiang, P.D.[Peng-Da], Chai, M.L.[Meng-Lei], Li, H.[Hao], Zhao, Y.J.[Ya-Jie], He, M.M.[Ming-Ming],
DisUnknown: Distilling Unknown Factors for Disentanglement Learning,
ICCV21(14790-14799)
IEEE DOI 2203
Training, Scalability, Benchmark testing, Generators, Task analysis, Image and video synthesis, Adversarial learning, Neural generative models BibRef

Diomataris, M.[Markos], Gkanatsios, N.[Nikolaos], Pitsikalis, V.[Vassilis], Maragos, P.[Petros],
Grounding Consistency: Distilling Spatial Common Sense for Precise Visual Relationship Detection,
ICCV21(15891-15900)
IEEE DOI 2203
Measurement, Visualization, Grounding, Triples (Data structure), Image edge detection, Predictive models, Visual reasoning and logical representation BibRef

Zheng, H.[Heliang], Yang, H.[Huan], Fu, J.L.[Jian-Long], Zha, Z.J.[Zheng-Jun], Luo, J.B.[Jie-Bo],
Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment,
ICCV21(10222-10231)
IEEE DOI 2203
Measurement, Image quality, Training, Knowledge engineering, Computational modeling, Semantics, Image restoration, Low-level and physics-based vision BibRef

Zheng, H.L.[He-Liang], Fu, J.L.[Jian-Long], Zha, Z.J.[Zheng-Jun], Luo, J.B.[Jie-Bo],
Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-Grained Image Recognition,
CVPR19(5007-5016).
IEEE DOI 2002
BibRef

Liu, L.[Li], Huang, Q.L.[Qing-Le], Lin, S.[Sihao], Xie, H.W.[Hong-Wei], Wang, B.[Bing], Chang, X.J.[Xiao-Jun], Liang, X.D.[Xiao-Dan],
Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation,
ICCV21(8251-8260)
IEEE DOI 2203
Knowledge engineering, Image segmentation, Correlation, Costs, Semantics, Graphics processing units, grouping and shape BibRef

Wang, H.[Hong], Deng, Y.F.[Yue-Fan], Yoo, S.[Shinjae], Ling, H.B.[Hai-Bin], Lin, Y.W.[Yue-Wei],
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning,
ICCV21(7638-7647)
IEEE DOI 2203
Training, Deep learning, Codes, Computational modeling, Neural networks, Bidirectional control, Adversarial learning, BibRef

Li, C.C.[Cheng-Cheng], Wang, Z.[Zi], Qi, H.R.[Hai-Rong],
Online Knowledge Distillation by Temporal-Spatial Boosting,
WACV22(3482-3491)
IEEE DOI 2202
Training, Knowledge engineering, Benchmark testing, Boosting, Noise measurement, Deep Learning -> Efficient Training and Inference Methods for Networks BibRef

Zheng, Z.Z.[Zhen-Zhu], Peng, X.[Xi],
Self-Guidance: Improve Deep Neural Network Generalization via Knowledge Distillation,
WACV22(3451-3460)
IEEE DOI 2202
Training, Deep learning, Knowledge engineering, Measurement, Visualization, Image recognition, Neural networks, Learning and Optimization BibRef

Zhang, H.[Heng], Fromont, E.[Elisa], Lefevre, S.[Sébastien], Avignon, B.[Bruno],
Low-cost Multispectral Scene Analysis with Modality Distillation,
WACV22(3331-3340)
IEEE DOI 2202
Knowledge engineering, Image analysis, Image resolution, Semantics, Neural networks, Thermal sensors, Predictive models, Vision Systems and Applications BibRef

Vo, D.M.[Duc Minh], Sugimoto, A.[Akihiro], Nakayama, H.[Hideki],
PPCD-GAN: Progressive Pruning and Class-Aware Distillation for Large-Scale Conditional GANs Compression,
WACV22(1422-1430)
IEEE DOI 2202
Training, Image coding, Neural network compression, Computer architecture, GANs BibRef

Kobayashi, T.[Takumi],
Extractive Knowledge Distillation,
WACV22(1350-1359)
IEEE DOI 2202
Temperature distribution, Analytical models, Annotations, Transfer learning, Feature extraction, Task analysis, Deep Learning Object Detection/Recognition/Categorization BibRef

Nguyen, C.H.[Chuong H.], Nguyen, T.C.[Thuy C.], Tang, T.N.[Tuan N.], Phan, N.L.H.[Nam L. H.],
Improving Object Detection by Label Assignment Distillation,
WACV22(1322-1331)
IEEE DOI 2202
Training, Schedules, Costs, Force, Object detection, Detectors, Switches, Object Detection/Recognition/Categorization BibRef

Meng, Z.[Ze], Yao, X.[Xin], Sun, L.F.[Li-Feng],
Multi-Task Distillation: Towards Mitigating the Negative Transfer in Multi-Task Learning,
ICIP21(389-393)
IEEE DOI 2201
Training, Degradation, Image processing, Optimization methods, Benchmark testing, Turning, Multi-task Learning, Multi-objective optimization BibRef

Tang, Q.[Qiankun], Xu, X.G.[Xiao-Gang], Wang, J.[Jun],
Differentiable Dynamic Channel Association for Knowledge Distillation,
ICIP21(414-418)
IEEE DOI 2201
Image coding, Computational modeling, Network architecture, Probabilistic logic, Computational efficiency, Task analysis, weighted distillation BibRef

Tran, V.[Vinh], Wang, Y.[Yang], Zhang, Z.K.[Ze-Kun], Hoai, M.[Minh],
Knowledge Distillation for Human Action Anticipation,
ICIP21(2518-2522)
IEEE DOI 2201
Training, Knowledge engineering, Image processing, Semantics, Neural networks, Training data BibRef

Tran, V.[Vinh], Balasubramanian, N.[Niranjan], Hoai, M.[Minh],
Progressive Knowledge Distillation for Early Action Recognition,
ICIP21(2583-2587)
IEEE DOI 2201
Knowledge engineering, Training, Recurrent neural networks, Image recognition, Training data, Semisupervised learning BibRef

Rotman, M.[Michael], Wolf, L.B.[Lior B.],
Natural Statistics of Network Activations and Implications for Knowledge Distillation,
ICIP21(399-403)
IEEE DOI 2201
Deep learning, Knowledge engineering, Image recognition, Correlation, Semantics, Benchmark testing, Knowledge Distillation, Image Statistics BibRef

Banitalebi-Dehkordi, A.[Amin],
Knowledge Distillation for Low-Power Object Detection: A Simple Technique and Its Extensions for Training Compact Models Using Unlabeled Data,
LPCV21(769-778)
IEEE DOI 2112
Training, Adaptation models, Computational modeling, Object detection, Computer architecture BibRef

Zhu, J.[Jinguo], Tang, S.X.[Shi-Xiang], Chen, D.P.[Da-Peng], Yu, S.J.[Shi-Jie], Liu, Y.K.[Ya-Kun], Rong, M.Z.[Ming-Zhe], Yang, A.[Aijun], Wang, X.H.[Xiao-Hua],
Complementary Relation Contrastive Distillation,
CVPR21(9256-9265)
IEEE DOI 2111
Benchmark testing, Mutual information BibRef

Jung, S.[Sangwon], Lee, D.G.[Dong-Gyu], Park, T.[Taeeon], Moon, T.[Taesup],
Fair Feature Distillation for Visual Recognition,
CVPR21(12110-12119)
IEEE DOI 2111
Visualization, Systematics, Computational modeling, Face recognition, Predictive models, Prediction algorithms BibRef

Ghosh, P.[Pallabi], Saini, N.[Nirat], Davis, L.S.[Larry S.], Shrivastava, A.[Abhinav],
Learning Graphs for Knowledge Transfer with Limited Labels,
CVPR21(11146-11156)
IEEE DOI 2111
Training, Visualization, Convolution, Semisupervised learning, Benchmark testing BibRef

Huang, Z.[Zhen], Shen, X.[Xu], Xing, J.[Jun], Liu, T.L.[Tong-Liang], Tian, X.M.[Xin-Mei], Li, H.Q.[Hou-Qiang], Deng, B.[Bing], Huang, J.Q.[Jian-Qiang], Hua, X.S.[Xian-Sheng],
Revisiting Knowledge Distillation: An Inheritance and Exploration Framework,
CVPR21(3578-3587)
IEEE DOI 2111
Training, Learning systems, Knowledge engineering, Deep learning, Neural networks, Reinforcement learning BibRef

Chen, P.G.[Peng-Guang], Liu, S.[Shu], Zhao, H.S.[Heng-Shuang], Jia, J.Y.[Jia-Ya],
Distilling Knowledge via Knowledge Review,
CVPR21(5006-5015)
IEEE DOI 2111
Knowledge engineering, Object detection, Task analysis BibRef

Ji, M.[Mingi], Shin, S.J.[Seung-Jae], Hwang, S.H.[Seung-Hyun], Park, G.[Gibeom], Moon, I.C.[Il-Chul],
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation,
CVPR21(10659-10668)
IEEE DOI 2111
Knowledge engineering, Training, Codes, Semantics, Neural networks, Object detection BibRef

Salehi, M.[Mohammadreza], Sadjadi, N.[Niousha], Baselizadeh, S.[Soroosh], Rohban, M.H.[Mohammad H.], Rabiee, H.R.[Hamid R.],
Multiresolution Knowledge Distillation for Anomaly Detection,
CVPR21(14897-14907)
IEEE DOI 2111
Training, Location awareness, Knowledge engineering, Image resolution, Task analysis BibRef

Haselhoff, A.[Anselm], Kronenberger, J.[Jan], Küppers, F.[Fabian], Schneider, J.[Jonas],
Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation,
SAIAD21(21-28)
IEEE DOI 2109
Visualization, Shape, Semantics, Training data, Object detection, Predictive models, Linear programming BibRef

Yang, L.[Lehan], Xu, K.[Kele],
Cross Modality Knowledge Distillation for Multi-modal Aerial View Object Classification,
NTIRE21(382-387)
IEEE DOI 2109
Training, Speckle, Feature extraction, Radar polarimetry, Data models, Robustness BibRef

Bhat, P.[Prashant], Arani, E.[Elahe], Zonooz, B.[Bahram],
Distill on the Go: Online knowledge distillation in self-supervised learning,
LLID21(2672-2681)
IEEE DOI 2109
Annotations, Computer architecture, Performance gain, Benchmark testing BibRef

Okuno, T.[Tomoyuki], Nakata, Y.[Yohei], Ishii, Y.[Yasunori], Tsukizawa, S.[Sotaro],
Lossless AI: Toward Guaranteeing Consistency between Inferences Before and After Quantization via Knowledge Distillation,
MVA21(1-5)
DOI Link 2109
Training, Quality assurance, Quantization (signal), Object detection, Network architecture, Real-time systems BibRef

Nayak, G.K.[Gaurav Kumar], Mopuri, K.R.[Konda Reddy], Chakraborty, A.[Anirban],
Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation,
WACV21(1429-1437)
IEEE DOI 2106
Training, Visualization, Sensitivity, Computational modeling, Semantics, Neural networks, Training data BibRef

Lee, J.[Jongmin], Jeong, Y.[Yoonwoo], Kim, S.[Seungwook], Min, J.[Juhong], Cho, M.[Minsu],
Learning to Distill Convolutional Features into Compact Local Descriptors,
WACV21(897-907)
IEEE DOI 2106
Location awareness, Visualization, Image matching, Semantics, Benchmark testing, Feature extraction, Robustness BibRef

Arani, E.[Elahe], Sarfraz, F.[Fahad], Zonooz, B.[Bahram],
Noise as a Resource for Learning in Knowledge Distillation,
WACV21(3128-3137)
IEEE DOI 2106
Training, Uncertainty, Neuroscience, Collaboration, Collaborative work, Brain modeling, Probabilistic logic BibRef

Chawla, A.[Akshay], Yin, H.X.[Hong-Xu], Molchanov, P.[Pavlo], Alvarez, J.[Jose],
Data-free Knowledge Distillation for Object Detection,
WACV21(3288-3297)
IEEE DOI 2106
Knowledge engineering, Training, Image synthesis, Neural networks, Object detection BibRef

Kothandaraman, D.[Divya], Nambiar, A.[Athira], Mittal, A.[Anurag],
Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation,
WACVW21(134-143) Autonomous Vehicle Vision
IEEE DOI 2105
Knowledge engineering, Adaptation models, Image segmentation, Semantics, Memory management BibRef

Kushawaha, R.K.[Ravi Kumar], Kumar, S.[Saurabh], Banerjee, B.[Biplab], Velmurugan, R.[Rajbabu],
Distilling Spikes: Knowledge Distillation in Spiking Neural Networks,
ICPR21(4536-4543)
IEEE DOI 2105
Knowledge engineering, Training, Image coding, Computational modeling, Artificial neural networks, Hardware BibRef

Sarfraz, F.[Fahad], Arani, E.[Elahe], Zonooz, B.[Bahram],
Knowledge Distillation Beyond Model Compression,
ICPR21(6136-6143)
IEEE DOI 2105
Training, Knowledge engineering, Neural networks, Network architecture, Collaborative work, Robustness BibRef

Ahmed, W.[Waqar], Zunino, A.[Andrea], Morerio, P.[Pietro], Murino, V.[Vittorio],
Compact CNN Structure Learning by Knowledge Distillation,
ICPR21(6554-6561)
IEEE DOI 2105
Training, Learning systems, Knowledge engineering, Network architecture, Predictive models BibRef

Ma, J.X.[Jia-Xin], Yonetani, R.[Ryo], Iqbal, Z.[Zahid],
Adaptive Distillation for Decentralized Learning from Heterogeneous Clients,
ICPR21(7486-7492)
IEEE DOI 2105
Learning systems, Adaptation models, Visualization, Biomedical equipment, Medical services, Collaborative work, Data models BibRef

Tsunashima, H.[Hideki], Kataoka, H.[Hirokatsu], Yamato, J.J.[Jun-Ji], Chen, Q.[Qiu], Morishima, S.[Shigeo],
Adversarial Knowledge Distillation for a Compact Generator,
ICPR21(10636-10643)
IEEE DOI 2105
Training, Image resolution, MIMICs, Generators BibRef

Kim, J.H.[Jang-Ho], Hyun, M.S.[Min-Sung], Chung, I.[Inseop], Kwak, N.[Nojun],
Feature Fusion for Online Mutual Knowledge Distillation,
ICPR21(4619-4625)
IEEE DOI 2105
Neural networks, Education, Performance gain BibRef

Mitsuno, K.[Kakeru], Nomura, Y.[Yuichiro], Kurita, T.[Takio],
Channel Planting for Deep Neural Networks using Knowledge Distillation,
ICPR21(7573-7579)
IEEE DOI 2105
Training, Knowledge engineering, Heuristic algorithms, Neural networks, Computer architecture, Network architecture BibRef

Finogeev, E., Gorbatsevich, V., Moiseenko, A., Vizilter, Y., Vygolov, O.,
Knowledge Distillation Using GANs for Fast Object Detection,
ISPRS20(B2:583-588).
DOI Link 2012
BibRef

Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J.,
Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation,
ICIP20(648-652)
IEEE DOI 2011
Perturbation methods, Task analysis, Training, Computational modeling, Approximation algorithms, black-box models BibRef

Xu, K.R.[Kun-Ran], Rui, L.[Lai], Li, Y.S.[Yi-Shi], Gu, L.[Lin],
Feature Normalized Knowledge Distillation for Image Classification,
ECCV20(XXV:664-680).
Springer DOI 2011
BibRef

Yang, Y., Qiu, J., Song, M., Tao, D., Wang, X.,
Distilling Knowledge From Graph Convolutional Networks,
CVPR20(7072-7081)
IEEE DOI 2008
Knowledge engineering, Task analysis, Computational modeling, Computer science, Training, Neural networks BibRef

Yun, J.S.[Ju-Seung], Kim, B.[Byungjoo], Kim, J.[Junmo],
Weight Decay Scheduling and Knowledge Distillation for Active Learning,
ECCV20(XXVI:431-447).
Springer DOI 2011
BibRef

Li, C.L.[Chang-Lin], Tang, T.[Tao], Wang, G.[Guangrun], Peng, J.F.[Jie-Feng], Wang, B.[Bing], Liang, X.D.[Xiao-Dan], Chang, X.J.[Xiao-Jun],
BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search,
ICCV21(12261-12271)
IEEE DOI 2203
Training, Visualization, Correlation, Architecture, Computational modeling, Sociology, Computer architecture, Representation learning BibRef

Li, C.L.[Chang-Lin], Peng, J.F.[Jie-Feng], Yuan, L.C.[Liu-Chun], Wang, G.R.[Guang-Run], Liang, X.D.[Xiao-Dan], Lin, L.[Liang], Chang, X.J.[Xiao-Jun],
Block-Wisely Supervised Neural Architecture Search With Knowledge Distillation,
CVPR20(1986-1995)
IEEE DOI 2008
Computer architecture, Network architecture, Knowledge engineering, Training, DNA, Convergence, Feature extraction BibRef

Wei, L.H.[Long-Hui], Xiao, A.[An], Xie, L.X.[Ling-Xi], Zhang, X.P.[Xiao-Peng], Chen, X.[Xin], Tian, Q.[Qi],
Circumventing Outliers of Autoaugment with Knowledge Distillation,
ECCV20(III:608-625).
Springer DOI 2012
BibRef

Walawalkar, D.[Devesh], Shen, Z.Q.[Zhi-Qiang], Savvides, M.[Marios],
Online Ensemble Model Compression Using Knowledge Distillation,
ECCV20(XIX:18-35).
Springer DOI 2011
BibRef

Xiang, L.Y.[Liu-Yu], Ding, G.G.[Gui-Guang], Han, J.G.[Jun-Gong],
Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification,
ECCV20(V:247-263).
Springer DOI 2011
BibRef

Zhou, B.[Brady], Kalra, N.[Nimit], Krähenbühl, P.[Philipp],
Domain Adaptation Through Task Distillation,
ECCV20(XXVI:664-680).
Springer DOI 2011
BibRef

Li, Z.[Zheng], Huang, Y.[Ying], Chen, D.F.[De-Fang], Luo, T.R.[Tian-Ren], Cai, N.[Ning], Pan, Z.G.[Zhi-Geng],
Online Knowledge Distillation via Multi-branch Diversity Enhancement,
ACCV20(IV:318-333).
Springer DOI 2103
BibRef

Ye, H.J.[Han-Jia], Lu, S.[Su], Zhan, D.C.[De-Chuan],
Distilling Cross-Task Knowledge via Relationship Matching,
CVPR20(12393-12402)
IEEE DOI 2008
Task analysis, Neural networks, Training, Knowledge engineering, Predictive models, Stochastic processes, Temperature measurement BibRef

Yao, A.B.[An-Bang], Sun, D.W.[Da-Wei],
Knowledge Transfer via Dense Cross-layer Mutual-distillation,
ECCV20(XV:294-311).
Springer DOI 2011
BibRef

Yue, K.Y.[Kai-Yu], Deng, J.F.[Jiang-Fan], Zhou, F.[Feng],
Matching Guided Distillation,
ECCV20(XV:312-328).
Springer DOI 2011
BibRef

Wang, D.Y.[De-Yu], Wen, D.C.[Dong-Chao], Liu, J.J.[Jun-Jie], Tao, W.[Wei], Chen, T.W.[Tse-Wei], Osa, K.[Kinya], Kato, M.[Masami],
Fully Supervised and Guided Distillation for One-stage Detectors,
ACCV20(III:171-188).
Springer DOI 2103
BibRef

Itsumi, H., Beye, F., Shinohara, Y., Iwai, T.,
Training With Cache: Specializing Object Detectors From Live Streams Without Overfitting,
ICIP20(1976-1980)
IEEE DOI 2011
Training, Data models, Solid modeling, Adaptation models, Training data, Streaming media, Legged locomotion, Online training, Knowledge distillation BibRef

Liu, B.L.[Ben-Lin], Rao, Y.M.[Yong-Ming], Lu, J.W.[Ji-Wen], Zhou, J.[Jie], Hsieh, C.J.[Cho-Jui],
Metadistiller: Network Self-boosting via Meta-learned Top-down Distillation,
ECCV20(XIV:694-709).
Springer DOI 2011
BibRef

Choi, Y., Choi, J., El-Khamy, M., Lee, J.,
Data-Free Network Quantization With Adversarial Knowledge Distillation,
EDLCV20(3047-3057)
IEEE DOI 2008
Generators, Quantization (signal), Training, Computational modeling, Data models, Machine learning, Data privacy BibRef

de Vieilleville, F., Lagrange, A., Ruiloba, R., May, S.,
Towards Distillation of Deep Neural Networks for Satellite On-board Image Segmentation,
ISPRS20(B2:1553-1559).
DOI Link 2012
BibRef

Wang, X.B.[Xiao-Bo], Fu, T.Y.[Tian-Yu], Liao, S.C.[Sheng-Cai], Wang, S.[Shuo], Lei, Z.[Zhen], Mei, T.[Tao],
Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition,
ECCV20(XXIV:325-342).
Springer DOI 2012
BibRef

Guan, Y.S.[Yu-Shuo], Zhao, P.Y.[Peng-Yu], Wang, B.X.[Bing-Xuan], Zhang, Y.X.[Yuan-Xing], Yao, C.[Cong], Bian, K.G.[Kai-Gui], Tang, J.[Jian],
Differentiable Feature Aggregation Search for Knowledge Distillation,
ECCV20(XVII:469-484).
Springer DOI 2011
BibRef

Gu, J.D.[Jin-Dong], Wu, Z.L.[Zhi-Liang], Tresp, V.[Volker],
Introspective Learning by Distilling Knowledge from Online Self-explanation,
ACCV20(IV:36-52).
Springer DOI 2103
BibRef

Guo, Q.S.[Qiu-Shan], Wang, X.J.[Xin-Jiang], Wu, Y.C.[Yi-Chao], Yu, Z.P.[Zhi-Peng], Liang, D.[Ding], Hu, X.L.[Xiao-Lin], Luo, P.[Ping],
Online Knowledge Distillation via Collaborative Learning,
CVPR20(11017-11026)
IEEE DOI 2008
Knowledge engineering, Training, Collaborative work, Perturbation methods, Collaboration, Neural networks, Logic gates BibRef

Li, T., Li, J., Liu, Z., Zhang, C.,
Few Sample Knowledge Distillation for Efficient Network Compression,
CVPR20(14627-14635)
IEEE DOI 2008
Training, Tensile stress, Knowledge engineering, Convolution, Neural networks, Computational modeling, Standards BibRef

Farhadi, M.[Mohammad], Yang, Y.Z.[Ye-Zhou],
TKD: Temporal Knowledge Distillation for Active Perception,
WACV20(942-951)
IEEE DOI 2006
Code, Object Detection.
WWW Link. Temporal knowledge over NN applied over multiple frames. Adaptation models, Object detection, Visualization, Computational modeling, Task analysis, Training, Feature extraction BibRef

Seddik, M.E.A., Essafi, H., Benzine, A., Tamaazousti, M.,
Lightweight Neural Networks From PCA LDA Based Distilled Dense Neural Networks,
ICIP20(3060-3064)
IEEE DOI 2011
Neural networks, Principal component analysis, Computational modeling, Training, Machine learning, Lightweight Networks BibRef

Tung, F.[Fred], Mori, G.[Greg],
Similarity-Preserving Knowledge Distillation,
ICCV19(1365-1374)
IEEE DOI 2004
learning (artificial intelligence), neural nets, semantic networks, Task analysis BibRef

Zhang, M.Y.[Man-Yuan], Song, G.L.[Guang-Lu], Zhou, H.[Hang], Liu, Y.[Yu],
Discriminability Distillation in Group Representation Learning,
ECCV20(X:1-19).
Springer DOI 2011
BibRef

Jin, X.[Xiao], Peng, B.Y.[Bao-Yun], Wu, Y.C.[Yi-Chao], Liu, Y.[Yu], Liu, J.H.[Jia-Heng], Liang, D.[Ding], Yan, J.J.[Jun-Jie], Hu, X.L.[Xiao-Lin],
Knowledge Distillation via Route Constrained Optimization,
ICCV19(1345-1354)
IEEE DOI 2004
face recognition, image classification, learning (artificial intelligence), neural nets, optimisation, Neural networks BibRef

Mullapudi, R.T., Chen, S., Zhang, K., Ramanan, D., Fatahalian, K.,
Online Model Distillation for Efficient Video Inference,
ICCV19(3572-3581)
IEEE DOI 2004
convolutional neural nets, image segmentation, inference mechanisms, learning (artificial intelligence), Cameras BibRef

Peng, B., Jin, X., Li, D., Zhou, S., Wu, Y., Liu, J., Zhang, Z., Liu, Y.,
Correlation Congruence for Knowledge Distillation,
ICCV19(5006-5015)
IEEE DOI 2004
correlation methods, face recognition, image classification, learning (artificial intelligence), instance-level information, Knowledge transfer BibRef

Vongkulbhisal, J.[Jayakorn], Vinayavekhin, P.[Phongtharin], Visentini-Scarzanella, M.[Marco],
Unifying Heterogeneous Classifiers With Distillation,
CVPR19(3170-3179).
IEEE DOI 2002
BibRef

Yoshioka, K., Lee, E., Wong, S., Horowitz, M.,
Dataset Culling: Towards Efficient Training of Distillation-Based Domain Specific Models,
ICIP19(3237-3241)
IEEE DOI 1910
Object Detection, Training Efficiency, Distillation, Dataset Culling, Deep Learning BibRef

Kundu, J.N., Lakkakula, N., Radhakrishnan, V.B.,
UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation,
ICCV19(1436-1445)
IEEE DOI 2004
generalisation (artificial intelligence), image classification, object detection, unsupervised learning, task-transferability, Adaptation models BibRef

Park, W.[Wonpyo], Kim, D.J.[Dong-Ju], Lu, Y.[Yan], Cho, M.[Minsu],
Relational Knowledge Distillation,
CVPR19(3962-3971).
IEEE DOI 2002
BibRef

Liu, Y.F.[Yu-Fan], Cao, J.J.[Jia-Jiong], Li, B.[Bing], Yuan, C.F.[Chun-Feng], Hu, W.M.[Wei-Ming], Li, Y.X.[Yang-Xi], Duan, Y.Q.[Yun-Qiang],
Knowledge Distillation via Instance Relationship Graph,
CVPR19(7089-7097).
IEEE DOI 2002
BibRef

Ahn, S.S.[Sung-Soo], Hu, S.X.[Shell Xu], Damianou, A.[Andreas], Lawrence, N.D.[Neil D.], Dai, Z.W.[Zhen-Wen],
Variational Information Distillation for Knowledge Transfer,
CVPR19(9155-9163).
IEEE DOI 2002
BibRef

Minami, S.[Soma], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Gradual Sampling Gate for Bidirectional Knowledge Distillation,
MVA19(1-6)
DOI Link 1911
Transfer knowledge from large pre-trained network to smaller one. data compression, learning (artificial intelligence), neural nets, gradual sampling gate, Power markets BibRef

Chen, W.C.[Wei-Chun], Chang, C.C.[Chia-Che], Lee, C.R.[Che-Rung],
Knowledge Distillation with Feature Maps for Image Classification,
ACCV18(III:200-215).
Springer DOI 1906
BibRef

Hou, S.H.[Sai-Hui], Pan, X.Y.[Xin-Yu], Loy, C.C.[Chen Change], Wang, Z.L.[Zi-Lei], Lin, D.H.[Da-Hua],
Lifelong Learning via Progressive Distillation and Retrospection,
ECCV18(III: 452-467).
Springer DOI 1810
BibRef

Pintea, S.L.[Silvia L.], Liu, Y.[Yue], van Gemert, J.C.[Jan C.],
Recurrent Knowledge Distillation,
ICIP18(3393-3397)
IEEE DOI 1809
Small network learns from larger network. Computational modeling, Memory management, Training, Color, Convolution, Road transportation, Knowledge distillation, recurrent layers BibRef
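For orientation, the annotation above describes the generic teacher-student recipe that most entries in this section build on: the soft-target distillation loss of Hinton et al. (Distilling the Knowledge in a Neural Network, 2015). The PyTorch sketch below is a minimal, generic illustration rather than the method of any particular paper listed here; the function name kd_loss and the values T=4.0 and alpha=0.9 are illustrative choices, not taken from the cited works.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Classic soft-target distillation: the student matches the teacher's
    # temperature-softened class distribution while also fitting the
    # ground-truth labels.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T**2 factor keeps the soft-target gradients comparable in
    # magnitude to the hard-label term across temperatures.
    soft_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

In the usual setup, teacher_logits come from a frozen, larger pre-trained network evaluated on the same batch, so only the student's parameters receive gradients from this loss.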

Lee, S.H.[Seung Hyun], Kim, D.H.[Dae Ha], Song, B.C.[Byung Cheol],
Self-supervised Knowledge Distillation Using Singular Value Decomposition,
ECCV18(VI: 339-354).
Springer DOI 1810
BibRef

Yim, J., Joo, D., Bae, J., Kim, J.,
A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning,
CVPR17(7130-7138)
IEEE DOI 1711
Feature extraction, Knowledge engineering, Knowledge transfer, Optimization, Training BibRef

Gupta, S.[Saurabh], Hoffman, J.[Judy], Malik, J.[Jitendra],
Cross Modal Distillation for Supervision Transfer,
CVPR16(2827-2836)
IEEE DOI 1612
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Student-Teacher, Teacher-Student, Knowledge Distillation.


Last update: Nov 26, 2024 at 16:40:19