14.1.7 Pre-Training

See also Transfer Learning from Other Tasks, Other Classes.
See also Domain Generalization.
See also CLIP, Contrastive Language-Image Pre-Training.
See also Fine Tuning, Fine-Tuning, Pre-Training, Zero-Shot, One-Shot.

Wang, J.[Jie], Luo, C.[Chang], Huang, H.Q.[Han-Qiao], Zhao, H.Z.[Hui-Zhen], Wang, S.Q.[Shi-Qiang],
Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network,
RS(9), No. 3, 2017, pp. xx-yy.
DOI Link 1704
BibRef

Wen, Y.[Yang], Chen, L.T.[Lei-Ting], Deng, Y.[Yu], Zhou, C.[Chuan],
Rethinking pre-training on medical imaging,
JVCIR(78), 2021, pp. 103145.
Elsevier DOI 2107
Transfer learning, Medical image analysis, Convolutional neural network, Survival prediction BibRef

Zhang, T.[Tong], Gao, P.[Peng], Dong, H.[Hao], Zhuang, Y.[Yin], Wang, G.Q.[Guan-Qun], Zhang, W.[Wei], Chen, H.[He],
Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain,
RS(14), No. 22, 2022, pp. xx-yy.
DOI Link 2212
BibRef

Kataoka, H.[Hirokatsu], Okayasu, K.[Kazushige], Matsumoto, A.[Asato], Yamagata, E.[Eisuke], Yamada, R.[Ryosuke], Inoue, N.[Nakamasa], Nakamura, A.[Akio], Satoh, Y.[Yutaka],
Pre-Training Without Natural Images,
IJCV(130), No. 1, January 2022, pp. 990-1007.
Springer DOI 2204
BibRef
Earlier: ACCV20(VI:583-600).
Springer DOI 2103
BibRef

Xu, C.[Cong], Li, D.[Dan], Yang, M.[Min],
Adversarial momentum-contrastive pre-training,
PRL(160), 2022, pp. 172-179.
Elsevier DOI 2208
Uses both real and adversarial samples for training. Adversarial robustness, Contrastive learning, Memory bank, Fine-tuning BibRef

Zhou, H.Y.[Hong-Yu], Lu, C.X.[Chi-Xiang], Chen, C.Q.[Chao-Qi], Yang, S.[Sibei], Yu, Y.Z.[Yi-Zhou],
A Unified Visual Information Preservation Framework for Self-supervised Pre-Training in Medical Image Analysis,
PAMI(45), No. 7, July 2023, pp. 8020-8035.
IEEE DOI 2306
Semantics, Image restoration, Task analysis, Visualization, Medical diagnostic imaging, Image segmentation, transfer learning BibRef

Chen, Z.H.[Zi-Han], Zhu, H.Y.[Hong-Yuan], Cheng, H.[Hao], Mi, S.[Siya], Zhang, Y.[Yu], Geng, X.[Xin],
LPCL: Localized prominence contrastive learning for self-supervised dense visual pre-training,
PR(135), 2023, pp. 109185.
Elsevier DOI 2212
Self-supervised learning, Contrastive learning, Dense representation BibRef

Wei, L.H.[Long-Hui], Xie, L.X.[Ling-Xi], Zhou, W.G.[Wen-Gang], Li, H.Q.[Hou-Qiang], Tian, Q.[Qi],
Exploring the diversity and invariance in yourself for visual pre-training task,
PR(139), 2023, pp. 109437.
Elsevier DOI 2304
Visual pre-training, Self-supervised learning, Multi-grained visual information BibRef

Peng, J.[Junran], Chang, Q.[Qing], Yin, H.R.[Hao-Ran], Bu, X.Y.[Xing-Yuan], Sun, J.J.[Jia-Jun], Xie, L.X.[Ling-Xi], Zhang, X.P.[Xiao-Peng], Tian, Q.[Qi], Zhang, Z.X.[Zhao-Xiang],
GAIA-Universe: Everything is Super-Netify,
PAMI(45), No. 10, October 2023, pp. 11856-11868.
IEEE DOI 2310

WWW Link. BibRef

Dong, X.N.[Xing-Ning], Guo, Q.P.[Qing-Pei], Gan, T.[Tian], Wang, Q.[Qing], Wu, J.L.[Jian-Long], Ren, X.Y.[Xiang-Yuan], Cheng, Y.[Yuan], Chu, W.[Wei],
SNP-S3: Shared Network Pre-Training and Significant Semantic Strengthening for Various Video-Text Tasks,
CirSysVideo(34), No. 4, April 2024, pp. 2525-2535.
IEEE DOI Code:
WWW Link. 2404
Task analysis, Visualization, Feature extraction, Semantics, Training, Transformers, Video-text pre-training, video-text matching BibRef

Zhao, T.C.[Tian-Cheng], Liu, P.[Peng], Lee, K.[Kyusong],
OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network,
IET-CV(18), No. 5, 2024, pp. 626-639.
DOI Link 2408
object detection, object recognition BibRef

Tang, Y.[Yuan], Li, X.Z.[Xian-Zhi], Xu, J.F.[Jin-Feng], Yu, Q.[Qiao], Hu, L.[Long], Hao, Y.X.[Yi-Xue], Chen, M.[Min],
Point-LGMask: Local and Global Contexts Embedding for Point Cloud Pre-Training With Multi-Ratio Masking,
MultMed(26), 2024, pp. 8360-8370.
IEEE DOI 2408
Point cloud compression, Task analysis, Predictive models, Self-supervised learning, Representation learning BibRef

Yu, B.X.B.[Bruce X.B.], Chang, J.L.[Jian-Long], Wang, H.X.[Hai-Xin], Liu, L.B.[Ling-Bo], Wang, S.J.[Shi-Jie], Wang, Z.Y.[Zhi-Yu], Lin, J.F.[Jun-Fan], Xie, L.X.[Ling-Xi], Li, H.J.[Hao-Jie], Lin, Z.C.[Zhou-Chen], Tian, Q.[Qi], Chen, C.W.[Chang Wen],
Visual Tuning,
Surveys(56), No. 12, July 2024, pp. xx-yy.
DOI Link 2410
Foundation model, fine-tuning, parameter-efficient, pre-training BibRef

Huang, Y.[Yipo], Li, L.[Leida], Chen, P.F.[Peng-Fei], Wu, H.N.[Hao-Ning], Lin, W.S.[Wei-Si], Shi, G.M.[Guang-Ming],
Multi-Modality Multi-Attribute Contrastive Pre-Training for Image Aesthetics Computing,
PAMI(47), No. 2, February 2025, pp. 1205-1218.
IEEE DOI 2501
Computational modeling, Databases, Image color analysis, Lighting, Contrastive learning, Visualization, Semantics, Reviews, aesthetic representation BibRef

Baraldi, L.[Lorenzo], Amoroso, R.[Roberto], Cornia, M.[Marcella], Baraldi, L.[Lorenzo], Pilzer, A.[Andrea], Cucchiara, R.[Rita],
Learning to mask and permute visual tokens for Vision Transformer pre-training,
CVIU(252), 2025, pp. 104294.
Elsevier DOI Code:
WWW Link. 2502
BibRef

Huseljic, D.[Denis], Herde, M.[Marek], Hahn, P.[Paul], Müjde, M.[Mehmet], Sick, B.[Bernhard],
Systematic Evaluation of Uncertainty Calibration in Pretrained Object Detectors,
IJCV(133), No. 3, March 2025, pp. 1033-1047.
Springer DOI 2502
BibRef

Tian, Y.J.[Yun-Jie], Xie, L.X.[Ling-Xi], Fang, J.[Jiemin], Jiao, J.B.[Jian-Bin], Tian, Q.[Qi],
Beyond masking: Demystifying token-based pre-training for vision transformers,
PR(162), 2025, pp. 111386.
Elsevier DOI 2503
Self-supervised learning, Vision transformers, Token-based pre-training, Masked image modeling BibRef

Huang, L.[Lan], Zeng, J.[Jia], Yu, M.Q.[Meng-Qiang], Ding, W.P.[Wei-Ping], Bai, X.Y.[Xing-Yu], Wang, K.[Kangping],
Efficient feature selection for pre-trained vision transformers,
CVIU(254), 2025, pp. 104326.
Elsevier DOI Code:
WWW Link. 2503
Feature selection, Vision transformer, Model pruning BibRef


Zhang, X.S.[Xiao-Shuai], Wang, Z.C.[Zhi-Cheng], Zhou, H.[Howard], Ghosh, S.[Soham], Gnanapragasam, D.[Danushen], Jampani, V.[Varun], Su, H.[Hao], Guibas, L.J.[Leonidas J.],
ConDense: Consistent 2D/3D Pre-training for Dense and Sparse Features from Multi-View Images,
ECCV24(LIV: 19-38).
Springer DOI 2412
3D using pre-trained 2D models BibRef

Feng, T.[Tuo], Wang, W.G.[Wen-Guan], Quan, R.J.[Rui-Jie], Yang, Y.[Yi],
Shape2Scene: 3D Scene Representation Learning Through Pre-training on Shape Data,
ECCV24(LV: 73-91).
Springer DOI 2412
BibRef

Tang, Y.W.[Yi-Wen], Zhang, R.[Ray], Liu, J.[JiaMing], Guo, Z.[Zoey], Zhao, B.[Bin], Wang, Z.G.[Zhi-Gang], Gao, P.[Peng], Li, H.S.[Hong-Sheng], Wang, D.[Dong], Li, X.L.[Xue-Long],
Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding,
ECCV24(XXXVI: 456-473).
Springer DOI 2412
Adapt pre-trained 2D to 3D. Code:
WWW Link. BibRef

Zheng, M.Y.[Meng-Yu], Hao, Z.W.[Zhi-Wei], Tang, Y.[Yehui], Xu, C.[Chang],
Visual Prompting via Partial Optimal Transport,
ECCV24(XXXV: 1-18).
Springer DOI 2412
BibRef

Wu, S.[Shuchi], Ma, C.[Chuan], Wei, K.[Kang], Xu, X.G.[Xiao-Gang], Ding, M.[Ming], Qian, Y.[Yuwen], Xiao, D.[Di], Xiang, T.[Tao],
Refine, Discriminate and Align: Stealing Encoders via Sample-wise Prototypes and Multi-relational Extraction,
ECCV24(XXXIV: 186-203).
Springer DOI 2412
Code:
WWW Link. BibRef

Choi, H.[Hyesong], Park, H.[Hyejin], Yi, K.M.[Kwang Moo], Cha, S.[Sungmin], Min, D.B.[Dong-Bo],
Salience-based Adaptive Masking: Revisiting Token Dynamics for Enhanced Pre-Training,
ECCV24(LXXVIII: 343-359).
Springer DOI 2412
BibRef

Huynh, A.V.[Andy V.], Gillespie, L.E.[Lauren E.], Lopez-Saucedo, J.[Jael], Tang, C.[Claire], Sikand, R.[Rohan], Expósito-Alonso, M.[Moisés],
Contrastive Ground-level Image and Remote Sensing Pre-training Improves Representation Learning for Natural World Imagery,
ECCV24(LXXX: 173-190).
Springer DOI 2412
BibRef

Luo, H.[Hao], Zhou, B.[Bohan], Lu, Z.Q.[Zong-Qing],
Pre-trained Visual Dynamics Representations for Efficient Policy Learning,
ECCV24(LXXXI: 249-267).
Springer DOI 2412
BibRef

Choi, H.[Hyesong], Lee, H.[Hunsang], Joung, S.[Seyoung], Park, H.[Hyejin], Kim, J.Y.[Ji-Yeong], Min, D.B.[Dong-Bo],
Emerging Property of Masked Token for Effective Pre-training,
ECCV24(LXXVI: 272-289).
Springer DOI 2412
BibRef

Zhang, Y.Y.[Ying-Ying], Guo, X.[Xin], Lao, J.W.[Jiang-Wei], Yu, L.[Lei], Ru, L.X.[Li-Xiang], Wang, J.[Jian], Ye, G.[Guo], He, H.M.[Hui-Mei], Chen, J.D.[Jing-Dong], Yang, M.[Ming],
POA: Pre-training Once for Models of All Sizes,
ECCV24(III: 131-148).
Springer DOI 2412
BibRef

Nakamura, R.[Ryo], Tadokoro, R.[Ryu], Yamada, R.[Ryosuke], Asano, Y.M.[Yuki M.], Laina, I.[Iro], Rupprecht, C.[Christian], Inoue, N.[Nakamasa], Yokota, R.[Rio], Kataoka, H.[Hirokatsu],
Scaling Backwards: Minimal Synthetic Pre-Training?,
ECCV24(XV: 153-171).
Springer DOI 2412
BibRef

Yamada, R.[Ryosuke], Hara, K.[Kensho], Kataoka, H.[Hirokatsu], Makihara, K.[Koshi], Inoue, N.[Nakamasa], Yokota, R.[Rio], Satoh, Y.[Yutaka],
Formula-Supervised Visual-Geometric Pre-Training,
ECCV24(XXII: 57-74).
Springer DOI 2412
BibRef

Zhang, L.[Lixuan], Kan, M.[Meina], Shan, S.G.[Shi-Guang], Chen, X.L.[Xi-Lin],
PreLAR: World Model Pre-training with Learnable Action Representation,
ECCV24(XXIII: 185-201).
Springer DOI 2412
BibRef

Yang, M.Y.[Meng-Yu], Tian, Y.[Ye], Zhang, L.[Lanshan], Liang, X.[Xiao], Ran, X.M.[Xu-Ming], Wang, W.D.[Wen-Dong],
AdaViPro: Region-Based Adaptive Visual Prompt for Large-Scale Models Adapting,
ICIP24(1316-1322)
IEEE DOI 2411
Training, Adaptation models, Visualization, Image resolution, Accuracy, Decision making, Benchmark testing BibRef

Li, X.[Xiang], Togo, R.[Ren], Maeda, K.[Keisuke], Ogawa, T.[Takahiro], Haseyama, M.[Miki],
Reinforcing Pre-Trained Models Using Counterfactual Images,
ICIP24(486-492)
IEEE DOI 2411
Deep learning, Training, Image recognition, Decision making, Data augmentation, Robustness, Data models BibRef

Marathe, K.[Kalyani], Bigverdi, M.[Mahtab], Khan, N.[Nishat], Kundu, T.[Tuhin], Howe, P.[Patrick], Ranjit, S.S.[S. Sharan], Bhattad, A.[Anand], Kembhavi, A.[Aniruddha], Shapiro, L.G.[Linda G.], Krishna, R.[Ranjay],
MIMIC: Masked Image Modeling with Image Correspondences,
L3D24(718-727)
IEEE DOI 2410
Representation learning, Point cloud compression, Soft sensors, Image edge detection, MIMICs, Buildings, Masked Image Modeling, Dense representation learning BibRef

Han, K.[Kai], Wang, Y.H.[Yun-He], Guo, J.[Jianyuan], Wu, E.[Enhua],
ParameterNet: Parameters are All You Need for Large-Scale Visual Pretraining of Mobile Networks,
CVPR24(15751-15761)
IEEE DOI Code:
WWW Link. 2410
Convolutional codes, Visualization, Accuracy, Transformers BibRef

Zhao, Z.Y.[Zhi-Yu], Huang, B.K.[Bing-Kun], Xing, S.[Sen], Wu, G.S.[Gang-Shan], Qiao, Y.[Yu], Wang, L.M.[Li-Min],
Asymmetric Masked Distillation for Pre-Training Small Foundation Models,
CVPR24(18516-18526)
IEEE DOI Code:
WWW Link. 2410
Adaptation models, Accuracy, Image recognition, Computational modeling, Transformer cores, Transformers BibRef

Chiche, B.N.[Benjamin Naoto], Horikawa, Y.[Yuto], Fujita, R.[Ryo],
Pre-Training Vision Models with Mandelbulb Variations,
CVPR24(22062-22071)
IEEE DOI 2410
Training, Ethics, Accuracy, Licenses, Transformers, Formula-driven supervised learning, pre-training, mandelbulb BibRef

Miao, Y.[Yibo], Lei, Y.[Yu], Zhou, F.[Feng], Deng, Z.J.[Zhi-Jie],
Bayesian Exploration of Pre-Trained Models for Low-Shot Image Classification,
CVPR24(23849-23859)
IEEE DOI 2410
Uncertainty, Computational modeling, Probabilistic logic, Robustness, Bayes methods, Kernel, low-shot, classification BibRef

Noman, M.[Mubashir], Naseer, M.[Muzammal], Cholakkal, H.[Hisham], Anwer, R.M.[Rao Muhammad], Khan, S.[Salman], Khan, F.S.[Fahad Shahbaz],
Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery,
CVPR24(27811-27819)
IEEE DOI Code:
WWW Link. 2410
Image resolution, Transformers, Optical imaging, Satellite images, Optical sensors, Remote sensing, multi-spectral imagery BibRef

Obadic, I.[Ivica], Levering, A.[Alex], Pennig, L.[Lars], Oliveira, D.[Dario], Marcos, D.[Diego], Zhu, X.X.[Xiao-Xiang],
Contrastive Pretraining for Visual Concept Explanations of Socioeconomic Outcomes,
EarthVision24(575-584)
IEEE DOI 2410
Deep learning, Training, Visualization, Sensitivity, Vegetation mapping, Predictive models, Vectors, contrastive-pretraining BibRef

Koch, S.[Sebastian], Hermosilla, P.[Pedro], Vaskevicius, N.[Narunas], Colosi, M.[Mirco], Ropinski, T.[Timo],
Lang3DSG: Language-based contrastive pre-training for 3D Scene Graph prediction,
3DV24(1037-1047)
IEEE DOI 2408
Training, Point cloud compression, Knowledge engineering, Solid modeling, Semantics, Natural languages, 3D Scene Graph, GCN BibRef

Sadhu, A.[Arka], Nevatia, R.[Ram],
Leveraging Task-Specific Pre-Training to Reason across Images and Videos,
WACV24(5782-5792)
IEEE DOI 2404
Visualization, Image recognition, Annotations, Focusing, Cognition, Data models, Algorithms, Vision + language and/or other modalities BibRef

Lin, J.Y.[Jia-Ying], Lau, R.W.H.[Rynson W. H.],
Self-supervised Pre-training for Mirror Detection,
ICCV23(12193-12202)
IEEE DOI Code:
WWW Link. 2401
BibRef

Zha, Y.[Yaohua], Wang, J.P.[Jin-Peng], Dai, T.[Tao], Chen, B.[Bin], Wang, Z.[Zhi], Xia, S.T.[Shu-Tao],
Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models,
ICCV23(14115-14124)
IEEE DOI Code:
WWW Link. 2401
BibRef

Kil, J.[Jihyung], Changpinyo, S.[Soravit], Chen, X.[Xi], Hu, H.X.[He-Xiang], Goodman, S.[Sebastian], Chao, W.L.[Wei-Lun], Soricut, R.[Radu],
PreSTU: Pre-Training for Scene-Text Understanding,
ICCV23(15224-15234)
IEEE DOI 2401
BibRef

Huang, D.[Di], Peng, S.[Sida], He, T.[Tong], Yang, H.H.[Hong-Hui], Zhou, X.W.[Xiao-Wei], Ouyang, W.L.[Wan-Li],
Ponder: Point Cloud Pre-training via Neural Rendering,
ICCV23(16043-16052)
IEEE DOI 2401
BibRef

Mendieta, M.[Matías], Han, B.[Boran], Shi, X.J.[Xing-Jian], Zhu, Y.[Yi], Chen, C.[Chen],
Towards Geospatial Foundation Models via Continual Pretraining,
ICCV23(16760-16770)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gao, M.Z.[Ming-Ze], Wang, Q.L.[Qi-Long], Lin, Z.[Zhenyi], Zhu, P.F.[Peng-Fei], Hu, Q.H.[Qing-Hua], Zhou, J.B.[Jing-Bo],
Tuning Pre-trained Model via Moment Probing,
ICCV23(11769-11779)
IEEE DOI 2401
BibRef

Wang, J.R.[Jian-Ren], Dasari, S.[Sudeep], Srirama, M.K.[Mohan Kumar], Tulsiani, S.[Shubham], Gupta, A.[Abhinav],
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations,
ICCV23(3836-3845)
IEEE DOI Code:
WWW Link. 2401
BibRef

Wang, Z.J.[Zi-Jian], Luo, Y.[Yadan], Zheng, L.[Liang], Huang, Z.[Zi], Baktashmotlagh, M.[Mahsa],
How Far Pre-trained Models Are from Neural Collapse on the Target Dataset Informs their Transferability,
ICCV23(5526-5535)
IEEE DOI 2401
BibRef

Jain, N.[Nishant], Behl, H.[Harkirat], Rawat, Y.S.[Yogesh Singh], Vineet, V.[Vibhav],
Efficiently Robustify Pre-Trained Models,
ICCV23(5482-5492)
IEEE DOI 2401
BibRef

Kim, B.[Bumsoo], Jo, Y.[Yeonsik], Kim, J.[Jinhyung], Kim, S.[Seunghwan],
Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pretraining,
ICCV23(2563-2572)
IEEE DOI 2401
BibRef

Wang, A.[Angelina], Russakovsky, O.[Olga],
Overwriting Pretrained Bias with Finetuning Data,
ICCV23(3934-3945)
IEEE DOI 2401
BibRef

Chavhan, R.[Ruchika], Gouk, H.[Henry], Li, D.[Da], Hospedales, T.M.[Timothy M.],
Quality Diversity for Visual Pre-Training,
ICCV23(5361-5371)
IEEE DOI Code:
WWW Link. 2401
BibRef

Singh, M.[Mannat], Duval, Q.[Quentin], Alwala, K.V.[Kalyan Vasudev], Fan, H.Q.[Hao-Qi], Aggarwal, V.[Vaibhav], Adcock, A.[Aaron], Joulin, A.[Armand], Dollár, P.[Piotr], Feichtenhofer, C.[Christoph], Girshick, R.[Ross], Girdhar, R.[Rohit], Misra, I.[Ishan],
The effectiveness of MAE pre-pretraining for billion-scale pretraining,
ICCV23(5461-5471)
IEEE DOI 2401
BibRef

Fu, C.[Cheng], Huang, H.X.[Han-Xian], Jiang, Z.X.[Zi-Xuan], Ni, Y.[Yun], Nai, L.F.[Li-Feng], Wu, G.[Gang], Cheng, L.Q.[Li-Qun], Zhou, Y.Q.[Yan-Qi], Li, S.[Sheng], Li, A.[Andrew], Zhao, J.[Jishen],
TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient Vision Transformer Scaling and Searching,
ICCV23(17107-17117)
IEEE DOI 2401
BibRef

Li, D.Q.[Dai-Qing], Ling, H.[Huan], Kar, A.[Amlan], Acuna, D.[David], Kim, S.W.[Seung Wook], Kreis, K.[Karsten], Torralba, A.[Antonio], Fidler, S.[Sanja],
DreamTeacher: Pretraining Image Backbones with Deep Generative Models,
ICCV23(16652-16662)
IEEE DOI 2401
BibRef

Lew, B.G.[Byoung-Gyu], Son, D.H.[Dong-Hyun], Chang, B.[Buru],
Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models,
OutDistri23(4438-4448)
IEEE DOI 2401
BibRef

Liu, S.[Sheng], Huynh, C.P.[Cong Phuoc], Chen, C.[Cong], Arap, M.[Maxim], Hamid, R.[Raffay],
LEMaRT: Label-Efficient Masked Region Transform for Image Harmonization,
CVPR23(18290-18299)
IEEE DOI 2309
BibRef

Wang, Y.M.[Yao-Ming], Shi, B.[Bowen], Zhang, X.P.[Xiao-Peng], Li, J.[Jin], Liu, Y.C.[Yu-Chen], Dai, W.R.[Wen-Rui], Li, C.L.[Cheng-Lin], Xiong, H.K.[Hong-Kai], Tian, Q.[Qi],
Adapting Shortcut with Normalizing Flow: An Efficient Tuning Framework for Visual Recognition,
CVPR23(15965-15974)
IEEE DOI 2309

WWW Link. BibRef

Ni, M.H.[Min-Heng], Huang, H.Y.[Hao-Yang], Su, L.[Lin], Cui, E.[Edward], Bharti, T.[Taroon], Wang, L.J.[Li-Juan], Zhang, D.D.[Dong-Dong], Duan, N.[Nan],
M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training,
CVPR21(3976-3985)
IEEE DOI 2111
Training, Computational modeling, Semantics, Image retrieval, Benchmark testing, Data models BibRef

Li, T.J.[Tian-Jiao], Foo, L.G.[Lin Geng], Hu, P.[Ping], Shang, X.[Xindi], Rahmani, H.[Hossein], Yuan, Z.H.[Ze-Huan], Liu, J.[Jun],
Token Boosting for Robust Self-Supervised Visual Transformer Pre-training,
CVPR23(24027-24038)
IEEE DOI 2309
BibRef

Yan, X.Y.[Xiang-Yi], Naushad, J.[Junayed], Sun, S.L.[Shan-Lin], Han, K.[Kun], Tang, H.[Hao], Kong, D.Y.[De-Ying], Ma, H.Y.[Hao-Yu], You, C.Y.[Chen-Yu], Xie, X.H.[Xiao-Hui],
Representation Recovering for Self-Supervised Pre-training on Medical Images,
WACV23(2684-2694)
IEEE DOI 2302
Representation learning, Visualization, Image segmentation, Semantics, Self-supervised learning, Feature extraction BibRef

Lee, K.Y.[Kuan-Ying], Zhong, Y.[Yuanyi], Wang, Y.X.[Yu-Xiong],
Do Pre-trained Models Benefit Equally in Continual Learning?,
WACV23(6474-6482)
IEEE DOI 2302
Training, Systematics, Codes, Computational modeling, Pipelines, Benchmark testing, Algorithms: Machine learning architectures, and algorithms (including transfer) BibRef

Su, W.J.[Wei-Jie], Zhu, X.Z.[Xi-Zhou], Tao, C.X.[Chen-Xin], Lu, L.W.[Le-Wei], Li, B.[Bin], Huang, G.[Gao], Qiao, Y.[Yu], Wang, X.G.[Xiao-Gang], Zhou, J.[Jie], Dai, J.F.[Ji-Feng],
Towards All-in-One Pre-Training via Maximizing Multi-Modal Mutual Information,
CVPR23(15888-15899)
IEEE DOI 2309
BibRef

Wei, L.H.[Long-Hui], Xie, L.X.[Ling-Xi], Zhou, W.G.[Wen-Gang], Li, H.Q.[Hou-Qiang], Tian, Q.[Qi],
MVP: Multimodality-Guided Visual Pre-training,
ECCV22(XXX:337-353).
Springer DOI 2211
BibRef

Yuan, Z.W.[Zhuo-Wen], Wu, F.[Fan], Long, Y.H.[Yun-Hui], Xiao, C.W.[Chao-Wei], Li, B.[Bo],
SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination,
ECCV22(V:139-155).
Springer DOI 2211
BibRef

Yang, J.W.[Jia-Wei], Chen, H.[Hanbo], Liang, Y.[Yuan], Huang, J.Z.[Jun-Zhou], He, L.[Lei], Yao, J.H.[Jian-Hua],
ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in Pathology Images,
ECCV22(XXI:523-539).
Springer DOI 2211
BibRef

You, H.X.[Hao-Xuan], Zhou, L.W.[Luo-Wei], Xiao, B.[Bin], Codella, N.[Noel], Cheng, Y.[Yu], Xu, R.C.[Ruo-Chen], Chang, S.F.[Shih-Fu], Yuan, L.[Lu],
Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training,
ECCV22(XXVII:69-87).
Springer DOI 2211
BibRef

Chakraborty, S.[Shuvam], Uzkent, B.[Burak], Ayush, K.[Kumar], Tanmay, K.[Kumar], Sheehan, E.[Evan], Ermon, S.[Stefano],
Efficient Conditional Pre-training for Transfer Learning,
L3D-IVU22(4240-4249)
IEEE DOI 2210
Training, Costs, Image resolution, Filtering, Computational modeling, Transfer learning BibRef

Li, Z.W.[Zhao-Wen], Zhu, Y.S.[You-Song], Yang, F.[Fan], Li, W.[Wei], Zhao, C.Y.[Chao-Yang], Chen, Y.Y.[Ying-Ying], Chen, Z.Y.[Zhi-Yang], Xie, J.H.[Jia-Hao], Wu, L.W.[Li-Wei], Zhao, R.[Rui], Tang, M.[Ming], Wang, J.Q.[Jin-Qiao],
UniVIP: A Unified Framework for Self-Supervised Visual Pre-training,
CVPR22(14607-14616)
IEEE DOI 2210
Representation learning, Visualization, Image segmentation, Correlation, Semantics, Self-supervised learning, Object detection, Transfer/low-shot/long-tail learning BibRef

Li, W.[Wei], Xie, J.H.[Jia-Hao], Loy, C.C.[Chen Change],
Correlational Image Modeling for Self-Supervised Visual Pre-Training,
CVPR23(15105-15115)
IEEE DOI 2309
BibRef

Jia, M.L.[Meng-Lin], Tang, L.[Luming], Chen, B.C.[Bor-Chun], Cardie, C.[Claire], Belongie, S.[Serge], Hariharan, B.[Bharath], Lim, S.N.[Ser-Nam],
Visual Prompt Tuning,
ECCV22(XXXIII:709-727).
Springer DOI 2211

WWW Link. Adapt pre-trained model. BibRef

Xu, C.F.[Chen-Feng], Li, T.[Tian], Tang, C.[Chen], Sun, L.F.[Ling-Feng], Keutzer, K.[Kurt], Tomizuka, M.[Masayoshi], Fathi, A.[Alireza], Zhan, W.[Wei],
PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map,
ECCV22(XXIX:34-50).
Springer DOI 2211
BibRef

Wei, C.[Chen], Fan, H.Q.[Hao-Qi], Xie, S.[Saining], Wu, C.Y.[Chao-Yuan], Yuille, A.L.[Alan L.], Feichtenhofer, C.[Christoph],
Masked Feature Prediction for Self-Supervised Visual Pre-Training,
CVPR22(14648-14658)
IEEE DOI 2210
Deep learning, Visualization, Histograms, Computational modeling, Transfer learning, Predictive models, Video analysis and understanding BibRef

Mishra, S.[Samarth], Panda, R.[Rameswar], Phoo, C.P.[Cheng Perng], Chen, C.F.R.[Chun-Fu Richard], Karlinsky, L.[Leonid], Saenko, K.[Kate], Saligrama, V.[Venkatesh], Feris, R.S.[Rogerio S.],
Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data,
CVPR22(9184-9194)
IEEE DOI 2210
Graphics, Training, Representation learning, Adaptation models, Computational modeling, Data models, retrieval BibRef

Singh, M.[Mannat], Gustafson, L.[Laura], Adcock, A.[Aaron], de Freitas-Reis, V.[Vinicius], Gedik, B.[Bugra], Kosaraju, R.P.[Raj Prateek], Mahajan, D.[Dhruv], Girshick, R.[Ross], Dollár, P.[Piotr], van der Maaten, L.[Laurens],
Revisiting Weakly Supervised Pre-Training of Visual Perception Models,
CVPR22(794-804)
IEEE DOI 2210
Visualization, Computational modeling, Supervised learning, Self-supervised learning, Standards, Transfer/low-shot/long-tail learning BibRef

Cha, J.[Junbum], Lee, K.[Kyungjae], Park, S.[Sungrae], Chun, S.[Sanghyuk],
Domain Generalization by Mutual-Information Regularization with Pre-trained Models,
ECCV22(XXIII:440-457).
Springer DOI 2211
BibRef

Kim, D.H.[Dong-Hyun], Wang, K.[Kaihong], Sclaroff, S.[Stan], Saenko, K.[Kate],
A Broad Study of Pre-training for Domain Generalization and Adaptation,
ECCV22(XXXIII:621-638).
Springer DOI 2211
BibRef

Zhu, X.Z.[Xi-Zhou], Zhu, J.G.[Jin-Guo], Li, H.[Hao], Wu, X.S.[Xiao-Shi], Li, H.S.[Hong-Sheng], Wang, X.H.[Xiao-Hua], Dai, J.F.[Ji-Feng],
Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks,
CVPR22(16783-16794)
IEEE DOI 2210
Representation learning, Costs, Collaboration, Transformers, Data models BibRef

Wang, X.L.[Xin-Long], Zhang, R.F.[Ru-Feng], Shen, C.H.[Chun-Hua], Kong, T.[Tao], Li, L.[Lei],
Dense Contrastive Learning for Self-Supervised Visual Pre-Training,
CVPR21(3023-3032)
IEEE DOI 2111
Learning systems, Image segmentation, Visualization, Computational modeling, Semantics, Object detection BibRef

Mañas, O.[Oscar], Lacoste, A.[Alexandre], Giró-i-Nieto, X.[Xavier], Vazquez, D.[David], Rodríguez, P.[Pau],
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data,
ICCV21(9394-9403)
IEEE DOI 2203
Earth, Deep learning, Satellites, Transfer learning, Pipelines, Supervised learning, Data models, Vision applications and systems BibRef

Zhang, Y.[Youshan], Davison, B.D.[Brian D.],
Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation,
LLID21(2713-2722)
IEEE DOI 2109
Training, Adaptation models, Computational modeling, Benchmark testing BibRef

Chowdhury, A.[Arkabandhu], Jiang, M.C.[Ming-Chao], Chaudhuri, S.[Swarat], Jermaine, C.[Chris],
Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier,
ICCV21(9425-9434)
IEEE DOI 2203
Transfer learning, Feature extraction, Libraries, Computational efficiency, Classification algorithms, Feeds, Vision applications and systems BibRef

Kim, D.H.[Dong-Hyun], Saito, K.[Kuniaki], Oh, T.H.[Tae-Hyun], Plummer, B.A.[Bryan A.], Sclaroff, S.[Stan], Saenko, K.[Kate],
CDS: Cross-Domain Self-supervised Pre-training,
ICCV21(9103-9112)
IEEE DOI 2203
Transfer learning, Task analysis, Standards, Transfer/Low-shot/Semi/Unsupervised Learning, Representation learning BibRef

Zhang, J.O.[Jeffrey O.], Sax, A.[Alexander], Zamir, A.[Amir], Guibas, L.J.[Leonidas J.], Malik, J.[Jitendra],
Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks,
ECCV20(III:698-714).
Springer DOI 2012
Adapt a pre-trained network rather than training from scratch. BibRef

Yan, X.T.[Xue-Ting], Misra, I.[Ishan], Gupta, A.[Abhinav], Ghadiyaram, D.[Deepti], Mahajan, D.[Dhruv],
ClusterFit: Improving Generalization of Visual Representations,
CVPR20(6508-6517)
IEEE DOI 2008
Pre-training. Task analysis, Training, Feature extraction, Visualization, Videos, Tagging, Twitter BibRef

Tang, H.X.[Hong-Xiang], Ortis, A.[Alessandro], Battiato, S.[Sebastiano],
The Impact of Padding on Image Classification by Using Pre-trained Convolutional Neural Networks,
CIAP19(II:337-344).
Springer DOI 1909
BibRef

Chakraborty, R., Yang, C., Vemuri, B.C.,
A Mixture Model for Aggregation of Multiple Pre-Trained Weak Classifiers,
Diff-CVML18(454-4547)
IEEE DOI 1812
Feature extraction, Training, Frequency modulation, Boosting, Geometry, Nickel, Mixture models BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Domain Adaptation.


Last update: Mar 17, 2025 at 20:02:03