13.6.4.3 Self-Supervised Knowledge Distillation

Knowledge Distillation. Self-Supervised. Distillation. Knowledge-Based Vision.
See also Knowledge Distillation.
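
As background for the entries below, here is a minimal sketch of the generic self-knowledge distillation objective in PyTorch: a student matches the temperature-softened predictions of a frozen snapshot of itself alongside the usual cross-entropy loss. This illustrates the common technique only; the function name, temperature, and weighting are illustrative assumptions, not taken from any paper listed in this section.

    import torch.nn.functional as F

    def self_distillation_loss(student_logits, teacher_logits, labels,
                               temperature=4.0, alpha=0.5):
        # Hard-label cross-entropy on the student's own predictions.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label KL term against the detached teacher snapshot;
        # the T^2 factor is the standard scaling that keeps gradient
        # magnitudes comparable across temperatures.
        kd = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits.detach() / temperature, dim=1),
            reduction='batchmean',
        ) * temperature ** 2
        return (1.0 - alpha) * ce + alpha * kd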

Mazumder, P.[Pratik], Singh, P.[Pravendra], Namboodiri, V.P.[Vinay P.],
GIFSL: Grafting based improved few-shot learning,
IVC(104), 2020, pp. 104006.
Elsevier DOI 2012
Few-shot learning, Grafting, Self-supervision, Distillation, Deep learning, Object recognition BibRef

Yue, J.[Jun], Fang, L.Y.[Le-Yuan], Rahmani, H.[Hossein], Ghamisi, P.[Pedram],
Self-Supervised Learning With Adaptive Distillation for Hyperspectral Image Classification,
GeoRS(60), 2022, pp. 1-13.
IEEE DOI 2112
Feature extraction, Training, Adaptive systems, Mirrors, Knowledge engineering, Hyperspectral imaging, Spectral analysis, spatial-spectral feature extraction BibRef

Ding, K.[Kexin], Lu, T.[Ting], Fu, W.[Wei], Fang, L.Y.[Le-Yuan],
Cross-Scene Hyperspectral Image Classification With Consistency-Aware Customized Learning,
CirSysVideo(35), No. 1, January 2025, pp. 418-430.
IEEE DOI 2502
Feature extraction, Training, Prototypes, Representation learning, Data mining, Laser radar, Hyperspectral imaging, contrastive learning BibRef

Wang, S.L.[Shu-Ling], Hu, M.[Mu], Li, B.[Bin], Gong, X.J.[Xiao-Jin],
Self-Paced Knowledge Distillation for Real-Time Image Guided Depth Completion,
SPLetters(29), 2022, pp. 867-871.
IEEE DOI 2204
Knowledge engineering, Predictive models, Training, Task analysis, Real-time systems, Color, Loss measurement, self-paced learning BibRef

Zhou, H.[Haonan], Du, X.P.[Xiao-Ping], Li, S.[Sen],
Self-Supervision and Self-Distillation with Multilayer Feature Contrast for Supervision Collapse in Few-Shot Remote Sensing Scene Classification,
RS(14), No. 13, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Chi, Q.[Qiang], Lv, G.H.[Guo-Hua], Zhao, G.X.[Gui-Xin], Dong, X.J.[Xiang-Jun],
A Novel Knowledge Distillation Method for Self-Supervised Hyperspectral Image Classification,
RS(14), No. 18, 2022, pp. xx-yy.
DOI Link 2209
BibRef

Zhao, Y.[Yibo], Liu, J.J.[Jian-Jun], Yang, J.L.[Jin-Long], Wu, Z.B.[Ze-Bin],
Remote Sensing Image Scene Classification via Self-Supervised Learning and Knowledge Distillation,
RS(14), No. 19, 2022, pp. xx-yy.
DOI Link 2210
BibRef

Tang, Y.[Yuan], Chen, Y.[Ying], Xie, L.[Linbo],
Self-knowledge distillation based on knowledge transfer from soft to hard examples,
IVC(135), 2023, pp. 104700.
Elsevier DOI 2306
Model compression, Self-knowledge distillation, Hard examples, Class probability consistency, Memory bank BibRef

Lee, H.[Hyoje], Park, Y.[Yeachan], Seo, H.[Hyun], Kang, M.[Myungjoo],
Self-knowledge distillation via dropout,
CVIU(233), 2023, pp. 103720.
Elsevier DOI 2307
Deep learning, Knowledge distillation, Self-knowledge distillation, Regularization, Dropout BibRef

Yu, X.T.[Xiao-Tong], Sun, S.[Shiding], Tian, Y.J.[Ying-Jie],
Self-distillation and self-supervision for partial label learning,
PR(146), 2024, pp. 110016.
Elsevier DOI 2311
Knowledge distillation, Self-supervised learning, Partial label learning, Machine learning BibRef

Wang, J.H.[Jun-Huang], Zhang, W.W.[Wei-Wei], Guo, Y.F.[Yu-Feng], Liang, P.[Peng], Ji, M.[Ming], Zhen, C.H.[Cheng-Hui], Wang, H.[Hanmeng],
Global key knowledge distillation framework,
CVIU(239), 2024, pp. 103902.
Elsevier DOI 2402
Deep learning, Knowledge distillation, Self-distillation, Convolutional neural network BibRef

Yu, H.[Hao], Feng, X.[Xin], Wang, Y.L.[Yun-Long],
Enhancing deep feature representation in self-knowledge distillation via pyramid feature refinement,
PRL(178), 2024, pp. 35-42.
Elsevier DOI Code:
WWW Link. 2402
Self-knowledge distillation, Feature representation, Pyramid structure, Deep neural networks BibRef

Li, S.Y.[Shu-Yi], Hu, H.C.[Hong-Chao], Huo, S.[Shumin], Liang, H.[Hao],
Clean, performance-robust, and performance-sensitive historical information based adversarial self-distillation,
IET-CV(18), No. 5, 2024, pp. 591-612.
DOI Link 2408
architecture, convolutional neural nets, image classification, image sampling, image sequences BibRef

Sun, N.[Ning], Xu, W.[Wei], Liu, J.X.[Ji-Xin], Chai, L.[Lei], Sun, H.[Haian],
The Multimodal Scene Recognition Method Based on Self-Attention and Distillation,
MultMedMag(31), No. 4, October 2024, pp. 25-36.
IEEE DOI 2501
Feature extraction, Training, Image recognition, Transformers, Layout, Convolutional neural networks, Sun BibRef

Lai, Y.T.[Yu-Tong], Ning, D.J.[De-Jun], Liu, S.P.[Shi-Peng],
KED: A Deep-Supervised Knowledge Enhancement Self-Distillation Framework for Model Compression,
SPLetters(32), 2025, pp. 831-835.
IEEE DOI 2503
Training, Computational modeling, Knowledge engineering, Feature extraction, Accuracy, Focusing, Data models, Data mining, model compression BibRef

Li, C.L.[Chang-Lin], Lin, S.[Sihao], Tang, T.[Tao], Wang, G.R.[Guang-Run], Li, M.J.[Ming-Jie], Liang, X.D.[Xiao-Dan], Chang, X.J.[Xiao-Jun],
BossNAS Family: Block-Wisely Self-Supervised Neural Architecture Search,
PAMI(47), No. 5, May 2025, pp. 3500-3514.
IEEE DOI 2504
Transformers, Training, Computational modeling, Accuracy, Visualization, Correlation, unsupervised NAS BibRef

Li, C.L.[Chang-Lin], Peng, J.F.[Jie-Feng], Yuan, L.C.[Liu-Chun], Wang, G.R.[Guang-Run], Liang, X.D.[Xiao-Dan], Lin, L.[Liang], Chang, X.J.[Xiao-Jun],
Block-Wisely Supervised Neural Architecture Search With Knowledge Distillation,
CVPR20(1986-1995)
IEEE DOI 2008
Network architecture, Knowledge engineering, Training, DNA, Convergence, Feature extraction BibRef

Li, C.L.[Chang-Lin], Tang, T.[Tao], Wang, G.R.[Guang-Run], Peng, J.F.[Jie-Feng], Wang, B.[Bing], Liang, X.D.[Xiao-Dan], Chang, X.J.[Xiao-Jun],
BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search,
ICCV21(12261-12271)
IEEE DOI 2203
Training, Visualization, Correlation, Architecture, Computational modeling, Sociology, Representation learning BibRef

Yang, Y.[Yang], Wang, C.[Chao], Gong, L.[Lei], Wu, M.[Min], Chen, Z.H.[Zheng-Hua], Gao, Y.X.[Ying-Xue], Wang, T.[Teng], Zhou, X.[Xuehai],
Uncertainty-Aware Self-Knowledge Distillation,
CirSysVideo(35), No. 5, May 2025, pp. 4464-4478.
IEEE DOI 2505
Uncertainty, Calibration, Accuracy, Vectors, Training, Predictive models, Smoothing methods, Artificial neural networks, image recognition BibRef

Zhang, W.W.[Wei-Wei], Liang, P.[Peng], Zhu, J.Q.[Jian-Qing], Wang, J.H.[Jun-Huang],
Contrastive Deep Supervision Meets Self-Knowledge Distillation,
JVCIR(110), 2025, pp. 104470.
Elsevier DOI 2506
Deep supervision, Contrastive learning, Self-knowledge distillation, Image classification BibRef

Wang, Y.Z.[Yu-Zheng], Chen, Z.Y.[Zhao-Yu], Yang, D.K.[Ding-Kang], Sun, Y.Q.[Yun-Quan], Qi, L.[Lizhe],
Self-cooperation Knowledge Distillation for Novel Class Discovery,
ECCV24(LXX:459-476).
Springer DOI 2412
BibRef

Han, K.[Keonhee], Muhle, D.[Dominik], Wimbauer, F.[Felix], Cremers, D.[Daniel],
Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation,
CVPR24(9837-9847)
IEEE DOI 2410
Geometry, Solid modeling, Fuses, Computational modeling, Estimation, Single-View-Reconstruction, Depth Estimation BibRef

Lebailly, T.[Tim], Stegmüller, T.[Thomas], Bozorgtabar, B.[Behzad], Thiran, J.P.[Jean-Philippe], Tuytelaars, T.[Tinne],
Adaptive Similarity Bootstrapping for Self-Distillation based Representation Learning,
ICCV23(16459-16468)
IEEE DOI Code:
WWW Link. 2401
BibRef

Yang, Z.D.[Zhen-Dong], Zeng, A.L.[Ai-Ling], Li, Z.[Zhe], Zhang, T.[Tianke], Yuan, C.[Chun], Li, Y.[Yu],
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels,
ICCV23(17139-17148)
IEEE DOI Code:
WWW Link. 2401
BibRef

Sasaya, T.[Tenta], Watanabe, T.[Takashi], Ida, T.[Takashi], Ono, T.[Toshiyuki],
Simple Self-Distillation Learning for Noisy Image Classification,
ICIP23(795-799)
IEEE DOI 2312
BibRef

Song, K.[Kaiyou], Zhang, S.[Shan], Luo, Z.[Zimeng], Wang, T.[Tong], Xie, J.[Jin],
Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning,
ICCV23(16053-16062)
IEEE DOI 2401
BibRef

Song, K.[Kaiyou], Xie, J.[Jin], Zhang, S.[Shan], Luo, Z.[Zimeng],
Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning,
CVPR23(11848-11857)
IEEE DOI 2309
BibRef

Lv, Y.[Yuan], Xu, Y.J.[Ya-Jing], Wang, S.[Shusen], Ma, Y.J.[Ying-Jian], Wang, D.[Dengke],
Continuous Self-Study: Scene Graph Generation with Self-Knowledge Distillation and Spatial Augmentation,
ACCV22(V:297-315).
Springer DOI 2307
BibRef

Lebailly, T.[Tim], Tuytelaars, T.[Tinne],
Global-Local Self-Distillation for Visual Representation Learning,
WACV23(1441-1450)
IEEE DOI 2302
Training, Representation learning, Visualization, Codes, Coherence, Task analysis, Algorithms: Machine learning architectures, and algorithms (including transfer) BibRef

Chen, W.C.[Wei-Chi], Chu, W.T.[Wei-Ta],
SSSD: Self-Supervised Self Distillation,
WACV23(2769-2776)
IEEE DOI 2302
Visualization, Computational modeling, Clustering algorithms, Self-supervised learning, Feature extraction, Data models, visual reasoning BibRef

Mu, M.[Michael], Bhattacharjee, S.D.[Sreyasee Das], Yuan, J.S.[Jun-Song],
Self-Supervised Distilled Learning for Multi-modal Misinformation Identification,
WACV23(2818-2827)
IEEE DOI 2302
Representation learning, Training data, Predictive models, Streaming media, Semisupervised learning, Multitasking, Vision + language and/or other modalities BibRef

Jang, J.[Jiho], Kim, S.[Seonhoon], Yoo, K.[Kiyoon], Kong, C.[Chaerin], Kim, J.[Jangho], Kwak, N.[Nojun],
Self-Distilled Self-supervised Representation Learning,
WACV23(2828-2838)
IEEE DOI 2302
Representation learning, Protocols, Codes, Statistical analysis, Self-supervised learning, Transformers, and algorithms (including transfer) BibRef

Tzelepi, M.[Maria], Symeonidis, C.[Charalampos], Nikolaidis, N.[Nikos], Tefas, A.[Anastasios],
Multilayer Online Self-Acquired Knowledge Distillation,
ICPR22(4822-4828)
IEEE DOI 2212
Training, Computational modeling, Pipelines, Estimation, Nonhomogeneous media, Probability distribution BibRef

Xu, Y.F.[Yi-Fan], Shamsolmoali, P.[Pourya], Granger, E.[Eric], Nicodeme, C.[Claire], Gardes, L.[Laurent], Yang, J.[Jie],
TransVLAD: Multi-Scale Attention-Based Global Descriptors for Visual Geo-Localization,
WACV23(2839-2848)
IEEE DOI 2302
Visualization, Codes, Computational modeling, Image retrieval, Self-supervised learning, Transformers, un-supervised learning BibRef

Xu, Y.F.[Yi-Fan], Shamsolmoali, P.[Pourya], Yang, J.[Jie],
Weak-supervised Visual Geo-localization via Attention-based Knowledge Distillation,
ICPR22(1815-1821)
IEEE DOI 2212
Knowledge engineering, Training, Visualization, Image matching, Image retrieval, Lighting, Benchmark testing BibRef

Baek, K.[Kyungjune], Lee, S.[Seungho], Shim, H.J.[Hyun-Jung],
Learning from Better Supervision: Self-distillation for Learning with Noisy Labels,
ICPR22(1829-1835)
IEEE DOI 2212
Training, Deep learning, Filtering, Neural networks, Predictive models, Data collection, Benchmark testing BibRef

Yang, Z.[Zhou], Dong, W.S.[Wei-Sheng], Li, X.[Xin], Wu, J.J.[Jin-Jian], Li, L.[Leida], Shi, G.M.[Guang-Ming],
Self-Feature Distillation with Uncertainty Modeling for Degraded Image Recognition,
ECCV22(XXIV:552-569).
Springer DOI 2211
BibRef

Yang, C.G.[Chuan-Guang], An, Z.[Zhulin], Zhou, H.[Helong], Cai, L.H.[Lin-Hang], Zhi, X.[Xiang], Wu, J.W.[Ji-Wen], Xu, Y.J.[Yong-Jun], Zhang, Q.[Qian],
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition,
ECCV22(XXIV:534-551).
Springer DOI 2211
BibRef

Gao, Y.T.[Yu-Ting], Zhuang, J.X.[Jia-Xin], Lin, S.H.[Shao-Hui], Cheng, H.[Hao], Sun, X.[Xing], Li, K.[Ke], Shen, C.H.[Chun-Hua],
DisCo: Remedying Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning,
ECCV22(XXVI:237-253).
Springer DOI 2211
BibRef

Liu, H.[Hao], Ye, M.[Mang],
Improving Self-supervised Lightweight Model Learning via Hard-Aware Metric Distillation,
ECCV22(XXXI:295-311).
Springer DOI 2211
BibRef

Liang, J.J.[Jia-Jun], Li, L.[Linze], Bing, Z.D.[Zhao-Dong], Zhao, B.R.[Bo-Rui], Tang, Y.[Yao], Lin, B.[Bo], Fan, H.Q.[Hao-Qiang],
Efficient One Pass Self-distillation with Zipf's Label Smoothing,
ECCV22(XI:104-119).
Springer DOI 2211
BibRef

Shen, Y.Q.[Yi-Qing], Xu, L.[Liwu], Yang, Y.Z.[Yu-Zhe], Li, Y.Q.[Ya-Qian], Guo, Y.D.[Yan-Dong],
Self-Distillation from the Last Mini-Batch for Consistency Regularization,
CVPR22(11933-11942)
IEEE DOI 2210
Training, Codes, Computer network reliability, Memory management, Network architecture, Benchmark testing, Machine learning BibRef

Ji, M.[Mingi], Shin, S.J.[Seung-Jae], Hwang, S.H.[Seung-Hyun], Park, G.[Gibeom], Moon, I.C.[Il-Chul],
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation,
CVPR21(10659-10668)
IEEE DOI 2111
Knowledge engineering, Training, Codes, Semantics, Neural networks, Object detection BibRef

Tejankar, A.[Ajinkya], Koohpayegani, S.A.[Soroush Abbasi], Pillai, V.[Vipin], Favaro, P.[Paolo], Pirsiavash, H.[Hamed],
ISD: Self-Supervised Learning by Iterative Similarity Distillation,
ICCV21(9589-9598)
IEEE DOI 2203
Codes, Transfer learning, Iterative methods, Task analysis, Standards, Representation learning, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Zheng, Z.Z.[Zhen-Zhu], Peng, X.[Xi],
Self-Guidance: Improve Deep Neural Network Generalization via Knowledge Distillation,
WACV22(3451-3460)
IEEE DOI 2202
Training, Deep learning, Knowledge engineering, Measurement, Visualization, Image recognition, Neural networks, Learning and Optimization BibRef

Bhat, P.[Prashant], Arani, E.[Elahe], Zonooz, B.[Bahram],
Distill on the Go: Online knowledge distillation in self-supervised learning,
LLID21(2672-2681)
IEEE DOI 2109
Annotations, Performance gain, Benchmark testing BibRef

Xiang, L.Y.[Liu-Yu], Ding, G.G.[Gui-Guang], Han, J.G.[Jun-Gong],
Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification,
ECCV20(V:247-263).
Springer DOI 2011
BibRef

Liu, B.L.[Ben-Lin], Rao, Y.M.[Yong-Ming], Lu, J.W.[Ji-Wen], Zhou, J.[Jie], Hsieh, C.J.[Cho-Jui],
Metadistiller: Network Self-boosting via Meta-learned Top-down Distillation,
ECCV20(XIV:694-709).
Springer DOI 2011
BibRef

Lee, S.H.[Seung Hyun], Kim, D.H.[Dae Ha], Song, B.C.[Byung Cheol],
Self-supervised Knowledge Distillation Using Singular Value Decomposition,
ECCV18(VI:339-354).
Springer DOI 1810
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Dataset Distillation, Dataset Summary, Dataset Quantization.


Last update: Sep 10, 2025 at 12:00:25