Wu, X.[Xiang],
He, R.[Ran],
Hu, Y.[Yibo],
Sun, Z.N.[Zhe-Nan],
Learning an Evolutionary Embedding via Massive Knowledge Distillation,
IJCV(128), No. 8-9, September 2020, pp. 2089-2106.
Springer DOI
2008
Transferring knowledge from a large, powerful teacher network to a
small, compact student one; a generic distillation-loss sketch follows
this entry.
BibRef
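Most papers in this section build on the same basic teacher-student recipe, so a minimal sketch may help orient readers. The Python/PyTorch code below illustrates only the generic Hinton-style soft-target distillation loss; the function name, temperature, and weighting are illustrative assumptions and it does not reproduce the method of any entry listed here.

```python
# Illustrative sketch only (not from any cited paper): the generic
# soft-target distillation loss used throughout the teacher-student
# literature. Tensor shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage with random tensors standing in for real model outputs:
# batch of 8 examples, 10 classes.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```

The temperature T softens both distributions so the student learns from the teacher's relative class probabilities rather than only its top prediction; many of the papers below vary what is distilled (features, relations, ensembles) rather than this basic loss.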
Bae, J.H.[Ji-Hoon],
Yeo, D.[Doyeob],
Yim, J.[Junho],
Kim, N.S.[Nae-Soo],
Pyo, C.S.[Cheol-Sig],
Kim, J.[Junmo],
Densely Distilled Flow-Based Knowledge Transfer in Teacher-Student
Framework for Image Classification,
IP(29), 2020, pp. 5698-5710.
IEEE DOI
2005
BibRef
Earlier: A2, A1, A5, A3, A4, A6:
Sequential Knowledge Transfer in Teacher-Student Framework Using
Densely Distilled Flow-Based Information,
ICIP18(674-678)
IEEE DOI
1809
Knowledge transfer, Training, Computational modeling, Data mining,
Optimization, Image classification, Computer architecture,
Reliability, residual network.
BibRef
Zaras, A.[Adamantios],
Passalis, N.[Nikolaos],
Tefas, A.[Anastasios],
Improving knowledge distillation using unified ensembles of
specialized teachers,
PRL(146), 2021, pp. 215-221.
Elsevier DOI
2105
68T99, Knowledge distillation, Knowledge transfer,
Specialized teachers, Unified ensemble, Unified specialized teachers ensemble
BibRef
Zhang, K.[Kangkai],
Zhang, C.H.[Chun-Hui],
Li, S.[Shikun],
Zeng, D.[Dan],
Ge, S.M.[Shi-Ming],
Student Network Learning via Evolutionary Knowledge Distillation,
CirSysVideo(32), No. 4, April 2022, pp. 2251-2263.
IEEE DOI
2204
Training, Knowledge representation, Knowledge transfer,
Predictive models, Germanium, Data models, Data mining, deep learning
BibRef
Ge, S.M.[Shi-Ming],
Liu, B.C.[Bo-Chao],
Wang, P.J.[Peng-Ju],
Li, Y.[Yong],
Zeng, D.[Dan],
Learning Privacy-Preserving Student Networks via
Discriminative-Generative Distillation,
IP(32), 2023, pp. 116-127.
IEEE DOI
2301
Data models, Data privacy, Synthetic data, Training, Generators,
Knowledge engineering, Privacy, Differentially private learning,
knowledge distillation
BibRef
Wang, L.[Lin],
Yoon, K.J.[Kuk-Jin],
Knowledge Distillation and Student-Teacher Learning for Visual
Intelligence: A Review and New Outlooks,
PAMI(44), No. 6, June 2022, pp. 3048-3068.
IEEE DOI
2205
Training, Measurement, Computational modeling, Visualization,
Task analysis, Knowledge transfer, Speech recognition,
visual intelligence
BibRef
Wang, L.[Lin],
Chae, Y.J.[Yu-Jeong],
Yoon, S.H.[Sung-Hoon],
Kim, T.K.[Tae-Kyun],
Yoon, K.J.[Kuk-Jin],
EvDistill: Asynchronous Events to End-task Learning via Bidirectional
Reconstruction-guided Cross-modal Knowledge Distillation,
CVPR21(608-619)
IEEE DOI
2111
Training, Knowledge engineering, Semantics,
Dynamic range, Cameras, Data models
BibRef
Gou, J.P.[Jian-Ping],
Xiong, X.S.[Xiang-Shuo],
Yu, B.S.[Bao-Sheng],
Du, L.[Lan],
Zhan, Y.B.[Yi-Bing],
Tao, D.C.[Da-Cheng],
Multi-target Knowledge Distillation via Student Self-reflection,
IJCV(131), No. 7, July 2023, pp. 1857-1874.
Springer DOI
2307
BibRef
Borza, D.L.[Diana Laura],
Ileni, T.A.[Tudor Alexandru],
Marinescu, A.I.[Alexandru Ion],
Darabant, S.A.[Sergiu Adrian],
Teacher or supervisor? Effective online knowledge distillation via
guided collaborative learning,
CVIU(228), 2023, pp. 103632.
Elsevier DOI
2302
Knowledge distillation, Collaborative learning,
Online knowledge distillation, Model compression
BibRef
Yu, L.F.[Li-Fang],
Li, Y.W.[Yun-Wei],
Weng, S.W.[Shao-Wei],
Tian, H.[Huawei],
Liu, J.[Jing],
Adaptive multi-teacher softened relational knowledge distillation
framework for payload mismatch in image steganalysis,
JVCIR(95), 2023, pp. 103900.
Elsevier DOI
2309
Image steganalysis, PM (payload mismatch), BPDNets, AWA, SRKD
BibRef
Cao, Q.Z.[Qi-Zhi],
Zhang, K.B.[Kai-Bing],
He, X.[Xin],
Shen, J.[Junge],
Be an Excellent Student: Review, Preview, and Correction,
SPLetters(30), 2023, pp. 1722-1726.
IEEE DOI
2312
BibRef
Rao, J.[Jun],
Meng, X.[Xv],
Ding, L.[Liang],
Qi, S.H.[Shu-Han],
Liu, X.[Xuebo],
Zhang, M.[Min],
Tao, D.C.[Da-Cheng],
Parameter-Efficient and Student-Friendly Knowledge Distillation,
MultMed(26), 2024, pp. 4230-4241.
IEEE DOI
2403
Training, Smoothing methods, Knowledge transfer, Data models,
Adaptation models, Predictive models, Knowledge engineering,
image classification
BibRef
Xu, K.[Kai],
Wang, L.C.[Li-Chun],
Xin, J.[Jianjia],
Li, S.[Shuang],
Yin, B.C.[Bao-Cai],
Learning From Teacher's Failure:
A Reflective Learning Paradigm for Knowledge Distillation,
CirSysVideo(34), No. 1, January 2024, pp. 384-396.
IEEE DOI
2401
BibRef
Ye, X.[Xin],
Jiang, R.X.[Rong-Xin],
Tian, X.[Xiang],
Zhang, R.[Rui],
Chen, Y.W.[Yao-Wu],
Knowledge Distillation via Multi-Teacher Feature Ensemble,
SPLetters(31), 2024, pp. 566-570.
IEEE DOI
2402
Feature extraction, Optimization, Training, Image reconstruction,
Transforms, Semantics, Knowledge engineering, Feature ensemble,
knowledge distillation
BibRef
Li, Z.[Zheng],
Li, X.[Xiang],
Yang, L.F.[Ling-Feng],
Song, R.J.[Ren-Jie],
Yang, J.[Jian],
Pan, Z.[Zhigeng],
Dual teachers for self-knowledge distillation,
PR(151), 2024, pp. 110422.
Elsevier DOI
2404
Model compression, Image classification,
Self-knowledge distillation, Dual teachers
BibRef
Gou, J.P.[Jian-Ping],
Chen, Y.[Yu],
Yu, B.[Baosheng],
Liu, J.H.[Jin-Hua],
Du, L.[Lan],
Wan, S.H.[Shao-Hua],
Yi, Z.[Zhang],
Reciprocal Teacher-Student Learning via Forward and Feedback
Knowledge Distillation,
MultMed(26), 2024, pp. 7901-7916.
IEEE DOI
2405
Knowledge engineering, Training, Visualization, Computational modeling,
Reviews, Knowledge transfer, Correlation, visual recognition
BibRef
Yang, S.Z.[Shun-Zhi],
Yang, J.F.[Jin-Feng],
Zhou, M.C.[Meng-Chu],
Huang, Z.H.[Zhen-Hua],
Zheng, W.S.[Wei-Shi],
Yang, X.[Xiong],
Ren, J.[Jin],
Learning From Human Educational Wisdom: A Student-Centered Knowledge
Distillation Method,
PAMI(46), No. 6, June 2024, pp. 4188-4205.
IEEE DOI
2405
Knowledge engineering, PD control, PI control, Task analysis,
Knowledge transfer, Automation, Training, Curriculum learning,
student-centered
BibRef
Zhou, Q.[Quan],
Yu, B.[Bin],
Xiao, F.[Feng],
Ding, M.Y.[Ming-Yue],
Wang, Z.W.[Zhi-Wei],
Zhang, X.M.[Xu-Ming],
Robust Semi-Supervised 3D Medical Image Segmentation With Diverse
Joint-Task Learning and Decoupled Inter-Student Learning,
MedImg(43), No. 6, June 2024, pp. 2317-2331.
IEEE DOI Code:
WWW Link.
2406
Image segmentation, Task analysis, Training, Predictive models,
Synchronization, Electronics packaging,
multiple students
BibRef
Song, Y.C.[Yu-Cheng],
Wang, J.[Jincan],
Ge, Y.F.[Yi-Fan],
Li, L.F.[Li-Feng],
Guo, J.[Jia],
Dong, Q.X.[Quan-Xing],
Liao, Z.F.[Zhi-Fang],
Medical image classification: Knowledge transfer via residual U-Net
and vision transformer-based teacher-student model with knowledge
distillation,
JVCIR(102), 2024, pp. 104212.
Elsevier DOI
2407
Knowledge distillation, Medical imaging, U-Net, Residual module,
Attention, Vision Transformer
BibRef
Tang, J.L.[Jia-Liang],
Jiang, N.[Ning],
Zhu, H.Y.[Hong-Yuan],
Zhou, J.T.Y.[Joey Tian-Yi],
Gong, C.[Chen],
Learning Student Network Under Universal Label Noise,
IP(33), 2024, pp. 4363-4376.
IEEE DOI
2408
Noise measurement, Noise, Dogs, Knowledge engineering, Image coding,
Data models, Feature extraction, model compression
BibRef
Qian, L.[Liyin],
Zheng, K.W.[Kai-Wen],
Wang, L.[Luqi],
Li, S.[Sheng],
Student State-aware knowledge tracing based on attention mechanism: A
cognitive theory view,
PRL(184), 2024, pp. 190-196.
Elsevier DOI
2408
Knowledge tracing, Attention mechanism, Cognitive process,
Student performance prediction
BibRef
Wang, C.[Chao],
Tang, Z.[Zheng],
The Staged Knowledge Distillation in Video Classification:
Harmonizing Student Progress by a Complementary Weakly Supervised
Framework,
CirSysVideo(34), No. 8, August 2024, pp. 6646-6660.
IEEE DOI
2408
Distillation method and the structural design of the
teacher-student architecture.
Training, Uncertainty, Correlation, Generators, Data models,
Task analysis, Computational modeling, Knowledge distillation,
label-efficient learning
BibRef
Liu, Y.Z.[Yu-Zhen],
Dong, Q.L.[Qiu-Lei],
Descriptor Distillation: A Teacher-Student-Regularized Framework for
Learning Local Descriptors,
IJCV(132), No. 1, January 2024, pp. 3787-3805.
Springer DOI
2409
BibRef
Ling, J.[Jun],
Zhang, X.[Xuan],
Du, F.[Fei],
Li, L.[Linyu],
Shang, W.Y.[Wei-Yi],
Gao, C.[Chen],
Li, T.[Tong],
Patient teacher can impart locality to improve lightweight vision
transformer on small dataset,
PR(157), 2025, pp. 110893.
Elsevier DOI Code:
WWW Link.
2409
Vision transformer, Knowledge distillation,
Curriculum learning, Small dataset
BibRef
Ding, Y.F.[Yi-Feng],
Yang, G.[Gaoming],
Yin, S.T.[Shu-Ting],
Zhang, J.[Ji],
Fang, X.J.[Xian-Jin],
Yang, W.C.[Wen-Cheng],
Generous teacher: Good at distilling knowledge for student learning,
IVC(150), 2024, pp. 105199.
Elsevier DOI Code:
WWW Link.
2409
Knowledge distillation, Generous teacher,
Absorbing distilled knowledge, Decouple logit
BibRef
Zheng, Y.J.[Yu-Jie],
Wang, C.[Chong],
Tao, C.C.[Chen-Chen],
Lin, S.[Sunqi],
Qian, J.B.[Jiang-Bo],
Wu, J.[Jiafei],
Restructuring the Teacher and Student in Self-Distillation,
IP(33), 2024, pp. 5551-5563.
IEEE DOI Code:
WWW Link.
2410
Training, Image coding, Costs, Codes, Accuracy, Network architecture,
Transformers, Robustness, Topology, Calibration, mixup
BibRef
Dong, P.[Peijie],
Li, L.[Lujun],
Wei, Z.[Zimian],
DisWOT: Student Architecture Search for Distillation WithOut Training,
CVPR23(11898-11908)
IEEE DOI
2309
BibRef
Qian, C.[Chengyao],
Hayat, M.[Munawar],
Harandi, M.[Mehrtash],
Can we Distill Knowledge from Powerful Teachers Directly?,
ICIP23(595-599)
IEEE DOI
2312
BibRef
Jandial, S.[Surgan],
Khasbage, Y.[Yash],
Pal, A.[Arghya],
Balasubramanian, V.N.[Vineeth N.],
Krishnamurthy, B.[Balaji],
Distilling the Undistillable: Learning from a Nasty Teacher,
ECCV22(XIII:587-603).
Springer DOI
2211
BibRef
Li, L.[Lujun],
Self-Regulated Feature Learning via Teacher-free Feature Distillation,
ECCV22(XXVI:347-363).
Springer DOI
2211
BibRef
Zhao, S.[Shiji],
Yu, J.[Jie],
Sun, Z.L.[Zhen-Long],
Zhang, B.[Bo],
Wei, X.X.[Xing-Xing],
Enhanced Accuracy and Robustness via Multi-teacher Adversarial
Distillation,
ECCV22(IV:585-602).
Springer DOI
2211
BibRef
Beyer, L.[Lucas],
Zhai, X.H.[Xiao-Hua],
Royer, A.[Amélie],
Markeeva, L.[Larisa],
Anil, R.[Rohan],
Kolesnikov, A.[Alexander],
Knowledge distillation: A good teacher is patient and consistent,
CVPR22(10915-10924)
IEEE DOI
2210
Training, Manifolds, Schedules, Image coding, Computational modeling,
Data models, Deep learning architectures and techniques,
Representation learning
BibRef
Chen, D.F.[De-Fang],
Mei, J.P.[Jian-Ping],
Zhang, H.L.[Hai-Lin],
Wang, C.[Can],
Feng, Y.[Yan],
Chen, C.[Chun],
Knowledge Distillation with the Reused Teacher Classifier,
CVPR22(11923-11932)
IEEE DOI
2210
Costs, Computational modeling, Computer architecture,
Feature extraction, Pattern recognition,
Deep learning architectures and techniques
BibRef
Son, W.[Wonchul],
Na, J.[Jaemin],
Choi, J.Y.[Jun-Yong],
Hwang, W.J.[Won-Jun],
Densely Guided Knowledge Distillation using Multiple Teacher
Assistants,
ICCV21(9375-9384)
IEEE DOI
2203
Knowledge engineering, Training, Deep learning, Transfer learning,
Neural networks, Stochastic processes,
Recognition and classification
BibRef
Xu, Y.[Yi],
Pu, J.[Jian],
Zhao, H.[Hui],
Knowledge Distillation with a Precise Teacher and Prediction with
Abstention,
ICPR21(9000-9006)
IEEE DOI
2105
Knowledge engineering, Supervised learning, Benchmark testing,
Predictive models
BibRef
Zhu, Y.C.[Yi-Chen],
Wang, Y.[Yi],
Student Customized Knowledge Distillation:
Bridging the Gap Between Student and Teacher,
ICCV21(5037-5046)
IEEE DOI
2203
Knowledge engineering, Training, Visualization, Image segmentation,
Semantics, Object detection
BibRef
Zi, B.[Bojia],
Zhao, S.H.[Shi-Hao],
Ma, X.J.[Xing-Jun],
Jiang, Y.G.[Yu-Gang],
Revisiting Adversarial Robustness Distillation:
Robust Soft Labels Make Student Better,
ICCV21(16423-16432)
IEEE DOI
2203
Training, Deep learning, Codes, Computational modeling,
Neural networks, Predictive models, Adversarial learning,
Recognition and classification
BibRef
Zhang, Z.X.[Zhe-Xi],
Zhu, W.[Wei],
Yan, J.C.[Jun-Chi],
Gao, P.[Peng],
Xie, G.T.[Guo-Tong],
Automatic Student Network Search for Knowledge Distillation,
ICPR21(2446-2453)
IEEE DOI
2105
Knowledge engineering, Performance evaluation,
Computational modeling, Bit error rate, Neural networks,
Natural language processing
BibRef
Zhang, Y.C.[You-Cai],
Lan, Z.H.[Zhong-Hao],
Dai, Y.C.[Yu-Chen],
Zeng, F.G.[Fan-Gao],
Bai, Y.[Yan],
Chang, J.[Jie],
Wei, Y.C.[Yi-Chen],
Prime-aware Adaptive Distillation,
ECCV20(XIX:658-674).
Springer DOI
2011
Student-Teacher learning.
BibRef
Xu, G.D.[Guo-Dong],
Liu, Z.W.[Zi-Wei],
Li, X.X.[Xiao-Xiao],
Loy, C.C.[Chen Change],
Knowledge Distillation Meets Self-Supervision,
ECCV20(IX:588-604).
Springer DOI
2011
Extracting the dark knowledge from a teacher network to guide
the learning of a student network for transfer learning.
BibRef
Li, X.J.[Xiao-Jie],
Wu, J.L.[Jian-Long],
Fang, H.Y.[Hong-Yu],
Liao, Y.[Yue],
Wang, F.[Fei],
Qian, C.[Chen],
Local Correlation Consistency for Knowledge Distillation,
ECCV20(XII:18-33).
Springer DOI
2010
Knowledge extraction from the teacher network plays a critical role in
the knowledge distillation task to improve the performance of the
student network.
BibRef
Passalis, N.[Nikolaos],
Tzelepi, M.[Maria],
Tefas, A.[Anastasios],
Heterogeneous Knowledge Distillation Using Information Flow Modeling,
CVPR20(2336-2345)
IEEE DOI
2008
From complex teacher to smaller student.
Training, Neural networks, Knowledge engineering, Data models,
Convergence, Data mining, Transforms
BibRef
Chen, Z.L.[Zai-Liang],
Zheng, X.X.[Xian-Xian],
Shen, H.L.[Hai-Lan],
Zeng, Z.Y.[Zi-Yang],
Zhou, Y.K.[Yu-Kun],
Zhao, R.C.[Rong-Chang],
Improving Knowledge Distillation via Category Structure,
ECCV20(XXVIII:205-219).
Springer DOI
2011
Training the student to merely mimic the teacher does not capture the category structure.
BibRef
Wang, D.,
Li, Y.,
Wang, L.,
Gong, B.,
Neural Networks Are More Productive Teachers Than Human Raters:
Active Mixup for Data-Efficient Knowledge Distillation From a
Blackbox Model,
CVPR20(1495-1504)
IEEE DOI
2008
Neural networks, Computational modeling, Data models, Training,
Knowledge engineering, Visualization, Manifolds
BibRef
Cho, J.H.,
Hariharan, B.,
On the Efficacy of Knowledge Distillation,
ICCV19(4793-4801)
IEEE DOI
2004
learning (artificial intelligence), neural nets, Probability distribution,
teacher architectures, knowledge distillation performance.
BibRef
Zhang, L.,
Song, J.,
Gao, A.,
Chen, J.,
Bao, C.,
Ma, K.,
Be Your Own Teacher: Improve the Performance of Convolutional Neural
Networks via Self Distillation,
ICCV19(3712-3721)
IEEE DOI
2004
convolutional neural nets, learning (artificial intelligence),
knowledge distillation, student neural networks,
Computational modeling
BibRef
Yang, C.L.[Cheng-Lin],
Xie, L.X.[Ling-Xi],
Su, C.[Chi],
Yuille, A.L.[Alan L.],
Snapshot Distillation: Teacher-Student Optimization in One Generation,
CVPR19(2854-2863)
IEEE DOI
2002
BibRef
Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Explainable Artificial Intelligence.