Yu, R.N.[Ruo-Nan],
Liu, S.H.[Song-Hua],
Wang, X.C.[Xin-Chao],
Dataset Distillation: A Comprehensive Review,
PAMI(46), No. 1, January 2024, pp. 150-170.
IEEE DOI
2312
Dataset condensation. Reduce to what matters.
BibRef
Lei, S.[Shiye],
Tao, D.C.[Da-Cheng],
A Comprehensive Survey of Dataset Distillation,
PAMI(46), No. 1, January 2024, pp. 17-32.
IEEE DOI
2312
BibRef
Jin, H.D.[Hyun-Dong],
Kim, E.[Eunwoo],
Dataset condensation with coarse-to-fine regularization,
PRL(188), 2025, pp. 178-184.
Elsevier DOI
2502
Dataset condensation, Representation learning
BibRef
Zhao, Z.H.[Zheng-Hao],
Shang, Y.Z.[Yu-Zhang],
Wu, J.[Junyi],
Yan, Y.[Yan],
Dataset Quantization with Active Learning Based Adaptive Sampling,
ECCV24(LX: 346-362).
Springer DOI
2412
BibRef
Zheng, H.Z.[Hai-Zhong],
Sun, J.C.[Jia-Chen],
Wu, S.[Shutong],
Kailkhura, B.[Bhavya],
Mao, Z.M.[Z. Morley],
Xiao, C.W.[Chao-Wei],
Prakash, A.[Atul],
Leveraging Hierarchical Feature Sharing for Efficient Dataset
Condensation,
ECCV24(XXIV: 166-182).
Springer DOI
2412
BibRef
Xu, Y.[Yue],
Li, Y.L.[Yong-Lu],
Cui, K.[Kaitong],
Wang, Z.Y.[Zi-Yu],
Lu, C.[Cewu],
Tai, Y.W.[Yu-Wing],
Tang, C.K.[Chi-Keung],
Distill Gold from Massive Ores: Bi-level Data Pruning Towards Efficient
Dataset Distillation,
ECCV24(XX: 240-257).
Springer DOI
2412
BibRef
Yu, R.N.[Ruo-Nan],
Liu, S.[Songhua],
Ye, J.W.[Jing-Wen],
Wang, X.C.[Xin-Chao],
Teddy: Efficient Large-scale Dataset Distillation via
Taylor-approximated Matching,
ECCV24(XLVI: 1-17).
Springer DOI
2412
BibRef
Yang, S.L.[Shao-Lei],
Cheng, S.[Shen],
Hong, M.B.[Ming-Bo],
Fan, H.Q.[Hao-Qiang],
Wei, X.[Xing],
Liu, S.C.[Shuai-Cheng],
Neural Spectral Decomposition for Dataset Distillation,
ECCV24(LII: 275-290).
Springer DOI
2412
BibRef
Son, B.[Byunggwan],
Oh, Y.[Youngmin],
Baek, D.[Donghyeon],
Ham, B.[Bumsub],
FYI: Flip Your Images for Dataset Distillation,
ECCV24(L: 214-230).
Springer DOI
2412
BibRef
Liu, D.[Dai],
Gu, J.D.[Jin-Dong],
Cao, H.[Hu],
Trinitis, C.[Carsten],
Schulz, M.[Martin],
Dataset Distillation by Automatic Training Trajectories,
ECCV24(LXXXVII: 334-351).
Springer DOI
2412
BibRef
Jia, Y.Q.[Yu-Qi],
Vahidian, S.[Saeed],
Sun, J.W.[Jing-Wei],
Zhang, J.[Jianyi],
Kungurtsev, V.[Vyacheslav],
Gong, N.Z.Q.[Neil Zhen-Qiang],
Chen, Y.[Yiran],
Unlocking the Potential of Federated Learning: The Symphony of Dataset
Distillation via Deep Generative Latents,
ECCV24(LXXVIII: 18-33).
Springer DOI
2412
BibRef
Ye, J.W.[Jing-Wen],
Yu, R.N.[Ruo-Nan],
Liu, S.[Songhua],
Wang, X.C.[Xin-Chao],
Distilled Datamodel with Reverse Gradient Matching,
CVPR24(11954-11963)
IEEE DOI
2410
Training, Computational modeling, Data integrity, Training data,
Reinforcement learning, Data models
BibRef
Deng, W.X.[Wen-Xiao],
Li, W.B.[Wen-Bin],
Ding, T.Y.[Tian-Yu],
Wang, L.[Lei],
Zhang, H.G.[Hong-Guang],
Huang, K.[Kuihua],
Huo, J.[Jing],
Gao, Y.[Yang],
Exploiting Inter-sample and Inter-feature Relations in Dataset
Distillation,
CVPR24(17057-17066)
IEEE DOI Code:
WWW Link.
2410
Training, Deep learning, Face recognition, Focusing,
Computer architecture, Computational efficiency, Inter-feature
BibRef
Zhu, D.Y.[Dong-Yao],
Fang, Y.B.[Yan-Bo],
Lei, B.[Bowen],
Xie, Y.Q.[Yi-Qun],
Xu, D.K.[Dong-Kuan],
Zhang, J.[Jie],
Zhang, R.[Ruqi],
Rethinking Data Distillation: Do Not Overlook Calibration,
ICCV23(4912-4922)
IEEE DOI
2401
BibRef
van Noord, N.[Nanne],
Prototype-based Dataset Comparison,
ICCV23(1944-1954)
IEEE DOI Code:
WWW Link.
2401
BibRef
Sajedi, A.[Ahmad],
Khaki, S.[Samir],
Amjadian, E.[Ehsan],
Liu, L.Z.[Lucy Z.],
Lawryshyn, Y.A.[Yuri A.],
Plataniotis, K.N.[Konstantinos N.],
DataDAM: Efficient Dataset Distillation with Attention Matching,
ICCV23(17051-17061)
IEEE DOI
2401
BibRef
Zhou, D.[Daquan],
Wang, K.[Kai],
Gu, J.Y.[Jian-Yang],
Peng, X.Y.[Xiang-Yu],
Lian, D.Z.[Dong-Ze],
Zhang, Y.F.[Yi-Fan],
You, Y.[Yang],
Feng, J.S.[Jia-Shi],
Dataset Quantization,
ICCV23(17159-17170)
IEEE DOI
2401
BibRef
Liu, Y.Q.[Yan-Qing],
Gu, J.Y.[Jian-Yang],
Wang, K.[Kai],
Zhu, Z.[Zheng],
Jiang, W.[Wei],
You, Y.[Yang],
DREAM: Efficient Dataset Distillation by Representative Matching,
ICCV23(17268-17278)
IEEE DOI
2401
BibRef
Liu, S.[Songhua],
Wang, X.C.[Xin-Chao],
Few-Shot Dataset Distillation via Translative Pre-Training,
ICCV23(18608-18618)
IEEE DOI
2401
BibRef
Mazumder, A.[Alokendu],
Baruah, T.[Tirthajit],
Singh, A.K.[Akash Kumar],
Murthy, P.K.[Pagadala Krishna],
Pattanaik, V.[Vishwajeet],
Rathore, P.[Punit],
DeepVAT: A Self-Supervised Technique for Cluster Assessment in Image
Datasets,
VIPriors23(187-195)
IEEE DOI
2401
BibRef
Zhang, L.[Lei],
Zhang, J.[Jie],
Lei, B.[Bowen],
Mukherjee, S.[Subhabrata],
Pan, X.[Xiang],
Zhao, B.[Bo],
Ding, C.[Caiwen],
Li, Y.[Yao],
Xu, D.[Dongkuan],
Accelerating Dataset Distillation via Model Augmentation,
CVPR23(11950-11959)
IEEE DOI
2309
smaller but efficient synthetic training datasets from large ones
BibRef
Cazenavette, G.[George],
Wang, T.Z.[Tong-Zhou],
Torralba, A.[Antonio],
Efros, A.A.[Alexei A.],
Zhu, J.Y.[Jun-Yan],
Generalizing Dataset Distillation via Deep Generative Prior,
CVPR23(3739-3748)
IEEE DOI
2309
BibRef
Wang, Z.J.[Zi-Jia],
Yang, W.B.[Wen-Bin],
Liu, Z.S.[Zhi-Song],
Chen, Q.[Qiang],
Ni, J.C.[Jia-Cheng],
Jia, Z.[Zhen],
Gift from Nature:
Potential Energy Minimization for Explainable Dataset Distillation,
MLCSA22(240-255).
Springer DOI
2307
BibRef
Cazenavette, G.[George],
Wang, T.Z.[Tong-Zhou],
Torralba, A.[Antonio],
Efros, A.A.[Alexei A.],
Zhu, J.Y.[Jun-Yan],
Dataset Distillation by Matching Training Trajectories,
CVPR22(10708-10717)
IEEE DOI
2210
BibRef
Earlier:
VDU22(4749-4758)
IEEE DOI
2210
Training, Visualization, Trajectory, Task analysis,
Unsupervised learning, Pattern matching,
Self- semi- meta- unsupervised learning
BibRef
Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Fine Tuning, Fine-Tuning, Pre-Training, Zero-Shot, One-Shot .