14.1.5 Training Set Size, Sample Size, Analysis, Selection

Training Set. Sample Size.
See also Small Sample Sizes Issues, Data analysis, Training Sets.
See also Unbalanced Datasets, Imbalanced Sample Sizes, Imbalanced Data, Long-Tailed Data.

Baum, L.E., Petrie, T., Soules, G., Weiss, N.,
A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains,
AMS(41), No. 1, 1970, pp. 164-171. BibRef 7000

Kanal, L.[Laveen], Chandrasekaran, B.,
On dimensionality and sample size in statistical pattern classification,
PR(3), No. 3, October 1971, pp. 225-234.
Elsevier DOI 0309
BibRef

Wilson, D.L.,
Asymptotic properties of nearest neighbor rules using edited data,
SMC(2), 1972, pp. 408-421. Remove from training set those items mis-classified by the chosen rules. BibRef 7200
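
Wilson's editing rule noted in the entry above (drop training items misclassified by their neighbors) can be sketched as follows; the function name `wilson_editing`, the Euclidean metric, and `k=3` are illustrative choices, not details from the paper:

```python
import numpy as np

def wilson_editing(X, y, k=3):
    """Drop each training point whose label disagrees with the
    majority vote of its k nearest neighbors (the point excluded)."""
    n = len(X)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        nn = np.argsort(d)[:k]             # indices of k nearest neighbors
        if np.bincount(y[nn]).argmax() != y[i]:
            keep[i] = False                # misclassified: edit it out
    return X[keep], y[keep]
```

An isolated point deep inside the other class is removed, while cluster interiors are retained.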

Jain, A.K.[Anil K.], Dubes, R.C.[Richard C.],
Feature definition in pattern recognition with small sample size,
PR(10), No. 2, 1978, pp. 85-97.
Elsevier DOI 0309
BibRef

Kalayeh, H.M., Muasher, M.J., and Landgrebe, D.A.,
Feature Selection When Limited Numbers of Training Samples are Available,
GeoRS(21), No. 4, October 1983, pp. 434-438.
IEEE Top Reference. BibRef 8310

Muasher, M.J., and Landgrebe, D.A.,
The K-L Expansion as an Effective Feature Ordering Technique for Limited Training Sample Size,
GeoRS(21), No. 4, October 1983, pp. 438-441.
IEEE Top Reference. BibRef 8310

Kalayeh, H.M., Landgrebe, D.A.,
Predicting the Required Number of Training Samples,
PAMI(5), No. 6, November 1983, pp. 664-666. BibRef 8311

Muasher, M.J., and Landgrebe, D.A.,
A Binary Tree Feature Selection Technique for Limited Training Set Size,
RSE(16), No. 3, December 1984, pp. 183-194. BibRef 8412

Landgrebe, D.A., and Malaret, E.R.,
Noise in Remote Sensing Systems: Effect on Classification Accuracy,
GeoRS(24), No. 2, March 1986, pp. 294-299.
IEEE Top Reference. BibRef 8603

Shahshahani, B.M.[Behzad M.], and Landgrebe, D.A.[David A.],
The Effect of Unlabeled Samples in Reducing the Small Sample Size Problem and Mitigating the Hughes Phenomenon,
GeoRS(32), No. 5, September 1994, pp. 1087-1095.
IEEE DOI
PDF File. BibRef 9409

Hoffbeck, J.P.[Joseph P.], Landgrebe, D.A.,
Covariance-Matrix Estimation and Classification with Limited Training Data,
PAMI(18), No. 7, July 1996, pp. 763-767.
IEEE DOI
PDF File. 9608
BibRef

Herbst, K.[Klaus],
Pattern recognition by polynomial canonical regression,
PR(17), No. 3, 1984, pp. 345-350.
Elsevier DOI 0309
BibRef

Wharton, S.W.[Stephen W.],
An analysis of the effects of sample size on classification performance of a histogram based cluster analysis procedure,
PR(17), No. 2, 1984, pp. 239-244.
Elsevier DOI 0309
BibRef

Djouadi, A., Snorrason, O., and Garber, F.D.,
The Quality of Training-Sample Estimates of the Bhattacharyya Coefficient,
PAMI(12), No. 1, January 1990, pp. 92-97.
IEEE DOI BibRef 9001

Hong, Z.Q.[Zi-Quan], Yang, J.Y.[Jing-Yu],
Optimal discriminant plane for a small number of samples and design method of classifier on the plane,
PR(24), No. 4, 1991, pp. 317-324.
Elsevier DOI 0401
BibRef

Rachkovskij, D.A., Kussul, E.M.,
Datagen: A Generator of Datasets for Evaluation of Classification Algorithms,
PRL(19), No. 7, May 1998, pp. 537-544. 9808
BibRef

Sánchez, J.S., Barandela, R., Marqués, A.I., Alejo, R., Badenas, J.,
Analysis of new techniques to obtain quality training sets,
PRL(24), No. 7, April 2003, pp. 1015-1022.
Elsevier DOI 0301
BibRef

Chen, D.M.[Dong-Mei], Stow, D.[Douglas],
The Effect of Training Strategies on Supervised Classification at Different Spatial Resolutions,
PhEngRS(68), No. 11, November 2002, pp. 1155-1162. Three different training strategies often used for supervised classification are compared for six image subsets containing a single land-use/land-cover component and at five different spatial resolutions.
WWW Link. 0304
BibRef

Beiden, S.V.[Sergey V.], Maloof, M.A.[Marcus A.], Wagner, R.F.[Robert F.],
A general model for finite-sample effects in training and testing of competing classifiers,
PAMI(25), No. 12, December 2003, pp. 1561-1569.
IEEE Abstract. 0401
More than size of sample set. BibRef

Sánchez, J.S.,
High training set size reduction by space partitioning and prototype abstraction,
PR(37), No. 7, July 2004, pp. 1561-1564.
Elsevier DOI 0405
BibRef

Wang, H.C.[Hai-Chuan], Zhang, L.M.[Li-Ming],
Linear generalization probe samples for face recognition,
PRL(25), No. 8, June 2004, pp. 829-840.
Elsevier DOI 0405
Generate probe sets using constrained linear subspace of the original probes. BibRef

Prudêncio, R.B.C.[Ricardo B. C.], Ludermir, T.B.[Teresa B.], de Carvalho, F.A.T.[Francisco A. T.],
A Modal Symbolic Classifier for selecting time series models,
PRL(25), No. 8, June 2004, pp. 911-921.
Elsevier DOI 0405
BibRef

Kuo, B.C., Chang, K.Y.,
Feature Extractions for Small Sample Size Classification Problem,
GeoRS(45), No. 3, March 2007, pp. 756-764.
IEEE DOI 0703
BibRef

Angiulli, F.[Fabrizio],
Condensed Nearest Neighbor Data Domain Description,
PAMI(29), No. 10, October 2007, pp. 1746-1758.
IEEE DOI 0710
Distinguish between normal and abnormal data to find the minimal subset of consistent data. BibRef

Farhangfar, A.[Alireza], Kurgan, L.A.[Lukasz A.], Dy, J.G.[Jennifer G.],
Impact of imputation of missing values on classification error for discrete data,
PR(41), No. 12, December 2008, pp. 3692-3705.
Elsevier DOI 0810
Missing values; Classification; Imputation of missing values; Single imputation; Multiple imputations. For databases: studies the effect of missing-data imputation using five single-imputation methods (a mean method, a hot-deck method, a Naive-Bayes method, and the latter two methods with a recently proposed imputation framework) and one multiple-imputation method (a polytomous-regression-based method) on classification accuracy for six popular classifiers (RIPPER, C4.5, k-nearest-neighbor, support vector machines with polynomial and RBF kernels, and Naive-Bayes) on 15 datasets. BibRef
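
The mean method studied in the entry above is the simplest of these single-imputation schemes; a minimal sketch (the helper name `mean_impute` is ours, not from the paper):

```python
import numpy as np

def mean_impute(X):
    """Single imputation: replace each NaN with the observed
    mean of its column."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)      # per-column mean over observed values
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]        # fill each hole with its column mean
    return X
```

Multiple imputation instead draws several completed datasets and pools the downstream results, which is what the polytomous-regression method above does.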

Koikkalainen, J., Tolli, T., Lauerma, K., Antila, K., Mattila, E., Lilja, M., Lotjonen, J.,
Methods of Artificial Enlargement of the Training Set for Statistical Shape Models,
MedImg(27), No. 11, November 2008, pp. 1643-1654.
IEEE DOI 0811
Heart images. BibRef

Peres, R.T., Pedreira, C.E.,
Generalized Risk Zone: Selecting Observations for Classification,
PAMI(31), No. 7, July 2009, pp. 1331-1337.
IEEE DOI 0905
Select key observations in sample set. BibRef

Levi, D.[Dan], Ullman, S.[Shimon],
Learning to classify by ongoing feature selection,
IVC(28), No. 4, April 2010, pp. 715-723.
Elsevier DOI 1002
BibRef
Earlier: CRV06(1-1).
IEEE DOI 0607
Continuous updating of the clustering based on new inputs. Online learning; Object recognition BibRef

Levi, D.[Dan], Ullman, S.[Shimon],
Learning Model Complexity in an Online Environment,
CRV09(260-267).
IEEE DOI 0905
BibRef

Ambai, M.[Mitsuru], Yoshida, Y.[Yuichi],
Augmenting Training Samples with a Large Number of Rough Segmentation Datasets,
IEICE(E94-D), No. 10, October 2011, pp. 1880-1888.
WWW Link. 1110
BibRef

Rico-Juan, J.R.[Juan Ramón], Iñesta, J.M.[José Manuel],
New rank methods for reducing the size of the training set using the nearest neighbor rule,
PRL(33), No. 5, 1 April 2012, pp. 654-660.
Elsevier DOI BibRef 1204
And: Corrigendum: PRL(33), No. 10, 15 July 2012, pp. 1434.
Elsevier DOI 1205
Editing; Condensing; Rank methods; Sorted prototypes selection BibRef

Körting, T.S.[Thales Sehn], Garcia Fonseca, L.M.[Leila Maria], Castejon, E.F.[Emiliano Ferreira], Namikawa, L.M.[Laercio Massaru],
Improvements in Sample Selection Methods for Image Classification,
RS(6), No. 8, 2014, pp. 7580-7591.
DOI Link 1410
BibRef

Cheng, D.S.[Dong Seon], Setti, F.[Francesco], Zeni, N.[Nicola], Ferrario, R.[Roberta], Cristani, M.[Marco],
Semantically-driven automatic creation of training sets for object recognition,
CVIU(131), No. 1, 2015, pp. 56-71.
Elsevier DOI 1412
Object recognition BibRef

Hamidzadeh, J.[Javad], Monsefi, R.[Reza], Yazdi, H.S.[Hadi Sadoghi],
IRAHC: Instance Reduction Algorithm using Hyperrectangle Clustering,
PR(48), No. 5, 2015, pp. 1878-1889.
Elsevier DOI 1502
Instance reduction BibRef

Ma, L.[Lei], Cheng, L.[Liang], Li, M.C.[Man-Chun], Liu, Y.X.[Yong-Xue], Ma, X.X.[Xiao-Xue],
Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery,
PandRS(102), No. 1, 2015, pp. 14-27.
Elsevier DOI 1503
GEOBIA BibRef

Yang, W.[Wen], Yin, X.S.[Xiao-Shuang], Xia, G.S.[Gui-Song],
Learning High-level Features for Satellite Image Classification With Limited Labeled Samples,
GeoRS(53), No. 8, August 2015, pp. 4472-4482.
IEEE DOI 1506
Gaussian distribution BibRef

Wu, H.[Hao], Miao, Z.J.[Zhen-Jiang], Chen, J.Y.[Jing-Yue], Yang, J.[Jie], Gao, X.[Xing],
Recognition improvement through the optimisation of learning instances,
IET-CV(9), No. 3, 2015, pp. 419-427.
DOI Link 1507
computer vision BibRef

Colditz, R.R.[René Roland],
An Evaluation of Different Training Sample Allocation Schemes for Discrete and Continuous Land Cover Classification Using Decision Tree-Based Algorithms,
RS(7), No. 8, 2015, pp. 9655.
DOI Link 1509
BibRef

Huang, X.[Xin], Weng, C.L.[Chun-Lei], Lu, Q.K.[Qi-Kai], Feng, T.T.[Tian-Tian], Zhang, L.P.[Liang-Pei],
Automatic Labelling and Selection of Training Samples for High-Resolution Remote Sensing Image Classification over Urban Areas,
RS(7), No. 12, 2015, pp. 15819.
DOI Link 1601
BibRef

Ekambaram, R.[Rajmadhan], Fefilatyev, S.[Sergiy], Shreve, M.[Matthew], Kramer, K.[Kurt], Hall, L.O.[Lawrence O.], Goldgof, D.B.[Dmitry B.], Kasturi, R.[Rangachar],
Active cleaning of label noise,
PR(51), No. 1, 2016, pp. 463-480.
Elsevier DOI 1601
Support vectors BibRef

Jamjoom, M.[Mona], El Hindi, K.[Khalil],
Partial instance reduction for noise elimination,
PRL(74), No. 1, 2016, pp. 30-37.
Elsevier DOI 1604
Noise filtering. Learning with noisy data. BibRef

Akata, Z.[Zeynep], Perronnin, F.[Florent], Harchaoui, Z.[Zaid], Schmid, C.[Cordelia],
Label-Embedding for Image Classification,
PAMI(38), No. 7, July 2016, pp. 1425-1438.
IEEE DOI 1606
BibRef
Earlier:
Label-Embedding for Attribute-Based Classification,
CVPR13(819-826)
IEEE DOI 1309
Animals
See also Good Practice in Large-Scale Learning for Image Classification. BibRef

Paulin, M.[Mattis], Revaud, J.[Jerome], Harchaoui, Z.[Zaid], Perronnin, F.[Florent], Schmid, C.[Cordelia],
Transformation Pursuit for Image Classification,
CVPR14(3646-3653)
IEEE DOI 1409
Augment training with transformed images. BibRef
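
Augmenting training data with transformed copies, as in the entry above, can be sketched with simple label-preserving transforms; the flips and wrap-around shifts here are illustrative stand-ins, not the paper's learned transformation pursuit:

```python
import numpy as np

def augment(images, labels, shifts=(-2, 2)):
    """Return originals plus horizontally flipped and shifted copies,
    with labels repeated to match. images: (N, H, W) array."""
    out_x, out_y = [images], [labels]
    out_x.append(images[:, :, ::-1])           # horizontal flip
    out_y.append(labels)
    for s in shifts:
        out_x.append(np.roll(images, s, axis=2))  # horizontal shift (wraps)
        out_y.append(labels)
    return np.concatenate(out_x), np.concatenate(out_y)
```

With one flip and two shifts this quadruples the effective training-set size.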

Chatzilari, E.[Elisavet], Nikolopoulos, S.[Spiros], Kompatsiaris, I.[Ioannis], Kittler, J.V.[Josef V.],
SALIC: Social Active Learning for Image Classification,
MultMed(18), No. 8, August 2016, pp. 1488-1503.
IEEE DOI 1608
computational complexity. Select appropriate user tagged image for training set. BibRef

Yao, Y., Zhang, J., Shen, F., Hua, X., Xu, J., Tang, Z.,
Exploiting Web Images for Dataset Construction: A Domain Robust Approach,
MultMed(19), No. 8, August 2017, pp. 1771-1784.
IEEE DOI 1708
Google, Labeling, Manuals, Noise measurement, Robustness, Search engines, Visualization, Domain robust, image dataset construction, multi-instance learning (MIL), multiple, query, expansions BibRef

Rahmani, M.[Mostafa], Atia, G.K.[George K.],
Spatial Random Sampling: A Structure-Preserving Data Sketching Tool,
SPLetters(24), No. 9, September 2017, pp. 1398-1402.
IEEE DOI 1708
Approximation algorithms, Clustering algorithms, Complexity theory, Signal processing algorithms, Sociology, Statistics, Tools, Big data, clustering, column sampling, data sketching, random embedding, unit, sphere BibRef

Liu, H.F.[Hong-Fu], Tao, Z.Q.[Zhi-Qiang], Fu, Y.[Yun],
Partition Level Constrained Clustering,
PAMI(40), No. 10, October 2018, pp. 2469-2483.
IEEE DOI 1809
A small proportion of the data is given labels to guide the clustering. Clustering algorithms, Partitioning algorithms, Noise measurement, Robustness, Algorithm design and analysis, cosegmentation BibRef

Zhang, J.J.[Jia-Jia], Shao, K.[Kun], Luo, X.[Xing],
Small Sample Image Recognition Using Improved Convolutional Neural Network,
JVCIR(55), 2018, pp. 640-647.
Elsevier DOI 1809
Image recognition, Convolutional Neural Network (CNN), General Regression Neural Network (GRNN), Small sample, Real-time BibRef

Vakhitov, A.[Alexander], Kuzmin, A.[Andrey], Lempitsky, V.[Victor],
Set2Model Networks: Learning Discriminatively to Learn Generative Models,
CVIU(173), 2018, pp. 13-23.
Elsevier DOI 1901
BibRef
Earlier: WSM17(357-366)
IEEE DOI 1802
Learning-to-learn, Deep learning, Internet-based Image retrieval, ImageNet. Noisy training data, no negative examples. Computational modeling, Internet, Probabilistic logic, Search engines, Training, Visualization BibRef

Zachariah, D., Stoica, P.,
Effect Inference From Two-Group Data With Sampling Bias,
SPLetters(26), No. 8, August 2019, pp. 1103-1106.
IEEE DOI 1908
data analysis, inference mechanisms, least squares approximations, sampling methods, two-sample test BibRef

Yin, Q.B.[Qing-Bo], Adeli, E.[Ehsan], Shen, L.[Liran], Shen, D.G.[Ding-Gang],
Population-guided large margin classifier for high-dimension low-sample-size problems,
PR(97), 2020, pp. 107030.
Elsevier DOI 1910
Binary linear classifier, Data piling, High-dimension lowsample-size, Hyperplane, Local structure information BibRef

Shen, L.[Liran], Er, M.J.[Meng Joo], Yin, Q.B.[Qing-Bo],
Classification for high-dimension low-sample size data,
PR(130), 2022, pp. 108828.
Elsevier DOI 2206
Binary linear classifier, Quadratic programming, Data piling, Covariance matrix BibRef

Mishra, S., Yamasaki, T., Imaizumi, H.,
Improving image classifiers for small datasets by learning rate adaptations,
MVA19(1-6)
DOI Link 1911
image classification, learning (artificial intelligence), training time, nearing state, learning rate, datasets, 5G mobile communication BibRef

Mithun, N.C.[Niluthpol Chowdhury], Panda, R.[Rameswar], Roy-Chowdhury, A.K.[Amit K.],
Construction of Diverse Image Datasets From Web Collections With Limited Labeling,
CirSysVideo(30), No. 4, April 2020, pp. 1147-1161.
IEEE DOI 2004
Visualization, Labeling, Manuals, Image coding, Training, Google, Search engines, Image dataset, active learning, joint image-text analysis BibRef

Gong, C.[Chen], Shi, H.[Hong], Yang, J.[Jie], Yang, J.[Jian],
Multi-Manifold Positive and Unlabeled Learning for Visual Analysis,
CirSysVideo(30), No. 5, May 2020, pp. 1396-1409.
IEEE DOI 2005
Training with only positive samples. Manifolds, Training, Image recognition, Face, Hyperspectral imaging, Semisupervised learning, positive confidence training BibRef

Sellars, P.[Philip], Aviles-Rivero, A.I.[Angelica I.], Schönlieb, C.B.[Carola B.],
Superpixel Contracted Graph-Based Learning for Hyperspectral Image Classification,
GeoRS(58), No. 6, June 2020, pp. 4180-4193.
IEEE DOI 2005
From limited labelled data. Graph-based methods, hyperspectral image (HSI) classification, semi-supervised learning (SSL), superpixels BibRef

Hao, Q., Li, S., Kang, X.,
Multilabel Sample Augmentation-Based Hyperspectral Image Classification,
GeoRS(58), No. 6, June 2020, pp. 4263-4278.
IEEE DOI 2005
Classifier, hyperspectral image classification, multilabel samples, sample augmentation BibRef

Li, H.Z.[Hong-Zhu], Wang, W.Q.[Wei-Qiang],
Reinterpreting CTC training as iterative fitting,
PR(105), 2020, pp. 107392.
Elsevier DOI 2006
Connectionist temporal classification (CTC) BibRef

Besson, O.,
Adaptive Detection Using Whitened Data When Some of the Training Samples Undergo Covariance Mismatch,
SPLetters(27), 2020, pp. 795-799.
IEEE DOI 2006
Adaptive detection, covariance mismatch, generalized likelihood ratio test, Student distribution BibRef

Yalcinkaya, O.[Ozge], Golge, E.[Eren], Duygulu, P.[Pinar],
I-ME: iterative model evolution for learning from weakly labeled images and videos,
MVA(31), No. 5, July 2020, pp. Article40.
WWW Link. 2006
BibRef
Earlier: A2, A3, Only:
ConceptMap: Mining Noisy Web Data for Concept Learning,
ECCV14(VII: 439-455).
Springer DOI 1408
Use web search for training to learn concepts BibRef

Li, X., Chang, D., Ma, Z., Tan, Z., Xue, J., Cao, J., Yu, J., Guo, J.,
OSLNet: Deep Small-Sample Classification With an Orthogonal Softmax Layer,
IP(29), 2020, pp. 6482-6495.
IEEE DOI 2007
Training, Optimization, Training data, Deep learning, Decorrelation, Biological neural networks, Deep neural network, small-sample classification BibRef

Syrris, V.[Vasileios], Pesek, O.[Ondrej], Soille, P.[Pierre],
SatImNet: Structured and Harmonised Training Data for Enhanced Satellite Imagery Classification,
RS(12), No. 20, 2020, pp. xx-yy.
DOI Link 2010
BibRef

Khan, A.[Aparajita], Maji, P.[Pradipta],
Approximate Graph Laplacians for Multimodal Data Clustering,
PAMI(43), No. 3, March 2021, pp. 798-813.
IEEE DOI 2102
Laplace equations, Clustering algorithms, Cancer, Approximation algorithms, Eigenvalues and eigenfunctions, matrix perturbation theory BibRef

Mandal, A.[Ankita], Maji, P.[Pradipta],
A New Method to Address Singularity Problem in Multimodal Data Analysis,
PReMI17(43-51).
Springer DOI 1711
Small sample set, large feature set. BibRef

Gong, C.[Chen], Shi, H.[Hong], Liu, T.L.[Tong-Liang], Zhang, C.[Chuang], Yang, J.[Jian], Tao, D.C.[Da-Cheng],
Loss Decomposition and Centroid Estimation for Positive and Unlabeled Learning,
PAMI(43), No. 3, March 2021, pp. 918-932.
IEEE DOI 2102
Estimation, Training, Supervised learning, Noise measurement, Analytical models, Risk management, Kernel, PU learning, generalization bound BibRef

Ramezan, C.A.[Christopher A.], Warner, T.A.[Timothy A.], Maxwell, A.E.[Aaron E.], Price, B.S.[Bradley S.],
Effects of Training Set Size on Supervised Machine-Learning Land-Cover Classification of Large-Area High-Resolution Remotely Sensed Data,
RS(13), No. 3, 2021, pp. xx-yy.
DOI Link 2102
BibRef

Li, Y.[Yang], Zhao, Z.Q.[Zhi-Qun], Sun, H.[Hao], Cen, Y.G.[Yi-Gang], He, Z.H.[Zhi-Hai],
Snowball: Iterative Model Evolution and Confident Sample Discovery for Semi-Supervised Learning on Very Small Labeled Datasets,
MultMed(23), 2021, pp. 1354-1366.
IEEE DOI 2105
Training, Semisupervised learning, Predictive models, Error analysis, Task analysis, Entropy, Knowledge engineering, classification BibRef

Li, J.J.[Jun-Jie], Meng, L.K.[Ling-Kui], Yang, B.B.[Bei-Bei], Tao, C.X.[Chong-Xin], Li, L.Y.[Lin-Yi], Zhang, W.[Wen],
LabelRS: An Automated Toolbox to Make Deep Learning Samples from Remote Sensing Images,
RS(13), No. 11, 2021, pp. xx-yy.
DOI Link 2106
BibRef

Lartigue, T.[Thomas], Bottani, S.[Simona], Baron, S.[Stéphanie], Colliot, O.[Olivier], Durrleman, S.[Stanley], Allassonnière, S.[Stéphanie],
Gaussian Graphical Model Exploration and Selection in High Dimension Low Sample Size Setting,
PAMI(43), No. 9, September 2021, pp. 3196-3213.
IEEE DOI 2108
Correlation, Covariance matrices, Measurement, Graphical models, Gaussian distribution, Sparse representation, maximum likelihood estimation BibRef

Scmitsu, T.[Takayuki], Nakamura, M.[Mitsuki], Ishigami, S.[Shotaro], Aoki, T.[Toru], Lee, T.Y.[Teng-Yok], Isu, Y.[Yoshimi],
Estimating Contribution of Training Datasets using Shapley Values in Data-scale for Visual Recognition,
MVA21(1-5)
DOI Link 2109
Training, Measurement, Visualization, Additives, Task analysis BibRef

Ye, H.J.[Han-Jia], Zhan, D.C.[De-Chuan], Jiang, Y.[Yuan], Zhou, Z.H.[Zhi-Hua],
Heterogeneous Few-Shot Model Rectification With Semantic Mapping,
PAMI(43), No. 11, November 2021, pp. 3878-3891.
IEEE DOI 2110
Task analysis, Adaptation models, Predictive models, Data models, Training, Semantics, Robustness, Model reuse, learnware BibRef

Liu, J.X.[Jin-Xiang], Zhang, K.F.[Ke-Fei], Wu, S.Q.[Su-Qin], Shi, H.T.[Hong-Tao], Zhao, Y.D.[Yin-Di], Sun, Y.Q.[Ya-Qin], Zhuang, H.F.[Hui-Fu], Fu, E.[Erjiang],
An Investigation of a Multidimensional CNN Combined with an Attention Mechanism Model to Resolve Small-Sample Problems in Hyperspectral Image Classification,
RS(14), No. 3, 2022, pp. xx-yy.
DOI Link 2202
BibRef

Qi, G.J.[Guo-Jun], Luo, J.B.[Jie-Bo],
Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods,
PAMI(44), No. 4, April 2022, pp. 2168-2187.
IEEE DOI 2203
Data models, Task analysis, Training, Adaptation models, Big Data, Supervised learning, Unsupervised methods, instance discrimination and equivariance BibRef

Gao, K.L.[Kui-Liang], Liu, B.[Bing], Yu, X.[Xuchu], Yu, A.[Anzhu],
Unsupervised Meta Learning With Multiview Constraints for Hyperspectral Image Small Sample set Classification,
IP(31), 2022, pp. 3449-3462.
IEEE DOI 2205
Learning systems, Task analysis, Training, Feature extraction, Deep learning, Data models BibRef

Gao, K.L.[Kui-Liang], Yu, A.[Anzhu], You, X.[Xiong], Qiu, C.P.[Chun-Ping], Liu, B.[Bing], Guo, W.[Wenyue],
Learning General-Purpose Representations for Cross-Domain Hyperspectral Images Classification with Small Samples,
RS(15), No. 4, 2023, pp. xx-yy.
DOI Link 2303
BibRef

Kleinewillinghöfer, L.[Luca], Olofsson, P.[Pontus], Pebesma, E.[Edzer], Meyer, H.[Hanna], Buck, O.[Oliver], Haub, C.[Carsten], Eiselt, B.[Beatrice],
Unbiased Area Estimation Using Copernicus High Resolution Layers and Reference Data,
RS(14), No. 19, 2022, pp. xx-yy.
DOI Link 2210
Getting accurate spectral info. BibRef

Sun, C.H.[Cai-Hao], Zhang, X.H.[Xiao-Hua], Meng, H.Y.[Hong-Yun], Cao, X.H.[Xiang-Hai], Zhang, J.H.[Jin-Hua],
AC-WGAN-GP: Generating Labeled Samples for Improving Hyperspectral Image Classification with Small-Samples,
RS(14), No. 19, 2022, pp. xx-yy.
DOI Link 2210
BibRef

Ye, H.[Hang], Wang, Y.L.[Yong-Liang], Liu, W.J.[Wei-Jian], Liu, J.[Jun], Chen, H.[Hui],
Adaptive Detection in Partially Homogeneous Environment with Limited Samples Based on Geometric Barycenters,
SPLetters(29), 2022, pp. 2083-2087.
IEEE DOI 2211
Detectors, Training, Training data, Clutter, Statistical distributions, Radar, Maximum likelihood estimation, limited samples BibRef

Shifat-E-Rabbi, M.[Mohammad], Zhuang, Y.[Yan], Li, S.Y.[Shi-Ying], Rubaiyat, A.H.M.[Abu Hasnat Mohammad], Yin, X.[Xuwang], Rohde, G.K.[Gustavo K.],
Invariance encoding in sliced-Wasserstein space for image classification with limited training data,
PR(137), 2023, pp. 109268.
Elsevier DOI 2302
R-CDT, Mathematical model, Generative model, Invariance learning BibRef

Ren, Y.T.[Yi-Tao], Jin, P.Y.[Pei-Yang], Li, Y.Y.[Yi-Yang], Mao, K.M.[Ke-Ming],
An efficient hyperspectral image classification method for limited training data,
IET-IPR(17), No. 6, 2023, pp. 1709-1717.
DOI Link 2305
convolutional neural nets, hyperspectral imaging, neural nets BibRef

Zhong, W.Y.[Wen-Yuan], Li, H.X.[Hua-Xiong], Hu, Q.H.[Qing-Hua], Gao, Y.[Yang], Chen, C.L.[Chun-Lin],
Multi-Level Cascade Sparse Representation Learning for Small Data Classification,
CirSysVideo(33), No. 5, May 2023, pp. 2451-2464.
IEEE DOI 2305
Training, Feature extraction, Visual databases, Sparse matrices, Faces, Data visualization, Symbols, Deep cascade, small data BibRef

Chu, J.L.[Jie-Lei], Liu, J.[Jing], Wang, H.J.[Hong-Jun], Meng, H.[Hua], Gong, Z.G.[Zhi-Guo], Li, T.R.[Tian-Rui],
Micro-Supervised Disturbance Learning: A Perspective of Representation Probability Distribution,
PAMI(45), No. 6, June 2023, pp. 7542-7558.
IEEE DOI 2305
Representation learning, Probability distribution, Data models, Feature extraction, Semisupervised learning, Stability analysis, small-perturbation BibRef

Nápoles, G.[Gonzalo], Grau, I.[Isel], Jastrzebska, A.[Agnieszka], Salgueiro, Y.[Yamisleydi],
Presumably correct decision sets,
PR(141), 2023, pp. 109640.
Elsevier DOI 2306
Data analysis, Granular computing, Decision sets, Rough sets BibRef

Liu, W.J.[Wei-Jian], Liu, J.[Jun], Liu, T.[Tao], Chen, H.[Hui], Wang, Y.L.[Yong-Liang],
Detector Design and Performance Analysis for Target Detection in Subspace Interference,
SPLetters(30), 2023, pp. 618-622.
IEEE DOI 2306
Detectors, Interference, Training data, Covariance matrices, Training, Statistical distributions, Signal detection, subspace interference BibRef

Tsutsui, S.[Satoshi], Fu, Y.W.[Yan-Wei], Crandall, D.[David],
Reinforcing Generated Images via Meta-Learning for One-Shot Fine-Grained Visual Recognition,
PAMI(46), No. 3, March 2024, pp. 1455-1463.
IEEE DOI 2402
Training, Generators, Image recognition, Visualization, Tuning, Training data, Task analysis, Fine-grained visual recognition, meta-learning BibRef

Bai, C.[Can], Han, X.J.[Xian-Jun],
MRFormer: Multiscale retractable transformer for medical image progressive denoising via noise level estimation,
IVC(144), 2024, pp. 104974.
Elsevier DOI 2404
Medical image processing, Noise level estimation, Progressive denoising, Denoising model BibRef

Wei, J.[Jiwei], Yang, Y.[Yang], Guan, X.[Xiang], Xu, X.[Xing], Wang, G.Q.[Guo-Qing], Shen, H.T.[Heng Tao],
Runge-Kutta Guided Feature Augmentation for Few-Sample Learning,
MultMed(26), 2024, pp. 7349-7358.
IEEE DOI 2405
Feature extraction, Training, Task analysis, Numerical models, Visualization, Semantics, Training data, Runge-Kutta method, few-sample learning BibRef

Liang, K.M.[Kong-Ming], Yin, Z.J.[Zi-Jin], Min, M.[Min], Liu, Y.[Yan], Ma, Z.Y.[Zhan-Yu], Guo, J.[Jun],
Learning Dynamic Prototypes for Visual Pattern Debiasing,
IJCV(132), No. 5, May 2024, pp. 1777-1799.
Springer DOI 2405
Deal with biased datasets in learning. BibRef

Liu, C.[Chang], Mittal, G.[Gaurav], Karianakis, N.[Nikolaos], Fragoso, V.[Victor], Yu, Y.[Ye], Fu, Y.[Yun], Chen, M.[Mei],
HyperSTAR: Task-Aware Hyperparameter Recommendation for Training and Compression,
IJCV(132), No. 6, June 2024, pp. 1913-1927.
Springer DOI 2406
Task aware parameter recommendations. BibRef

Yang, Z.[Zhen], Ding, M.[Ming], Huang, T.L.[Ting-Lin], Cen, Y.[Yukuo], Song, J.S.[Jun-Shuai], Xu, B.[Bin], Dong, Y.X.[Yu-Xiao], Tang, J.[Jie],
Does Negative Sampling Matter? a Review With Insights Into its Theory and Applications,
PAMI(46), No. 8, August 2024, pp. 5692-5711.
IEEE DOI 2407
Training, Sampling methods, Vocabulary, Surveys, Computational modeling, Task analysis, Self-supervised learning, negative sampling framework BibRef

Chen, C.R.[Chang-Rui], Han, J.G.[Jun-Gong], Debattista, K.[Kurt],
Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction With Extremely Limited Labels,
PAMI(46), No. 8, August 2024, pp. 5595-5611.
IEEE DOI 2407
Task analysis, Training, Semantic segmentation, Labeling, Semisupervised learning, Object detection, Filtering. BibRef

Zhang, L.[Lin], Song, R.[Ran], Tan, W.H.[Wen-Hao], Ma, L.[Lin], Zhang, W.[Wei],
IGCN: A Provably Informative GCN Embedding for Semi-Supervised Learning With Extremely Limited Labels,
PAMI(46), No. 12, December 2024, pp. 8396-8409.
IEEE DOI 2411
Mutual information, Semisupervised learning, Symmetric matrices, Convolution, Training, Laplace equations, Task analysis, limited labels BibRef

DeAlcala, D.[Daniel], Mancera, G.[Gonzalo], Morales, A.[Aythami], Fierrez, J.[Julian], Tolosana, R.[Ruben], Ortega-Garcia, J.[Javier],
A Comprehensive Analysis of Factors Impacting Membership Inference,
SAIAD24(3585-3593)
IEEE DOI 2410
Training, Analytical models, Accuracy, Protocols, Databases, Face recognition, Biological system modeling, MIA BibRef

Wang, H.J.[Han-Jing], Ji, Q.[Qiang],
Epistemic Uncertainty Quantification for Pretrained Neural Networks,
CVPR24(11052-11061)
IEEE DOI 2410
Where models lack knowledge. Training, Analytical models, Uncertainty, Computational modeling, Perturbation methods, Neural networks, Training data BibRef

Myers, A.[Adele], Miolane, N.[Nina],
On Accuracy and Speed of Geodesic Regression: Do Geometric Priors Improve Learning on Small Datasets?,
L3D-IVU24(2714-2722)
IEEE DOI 2410
Manifolds, Training, Learning systems, Accuracy, Computational modeling, geodesic regression, linear regression, manifolds BibRef

Zhang, Z.L.[Ze-Liang], Feng, M.Q.[Ming-Qian], Li, Z.H.[Zhi-Heng], Xu, C.L.[Chen-Liang],
Discover and Mitigate Multiple Biased Subgroups in Image Classifiers,
CVPR24(10906-10915)
IEEE DOI Code:
WWW Link. 2410
Training, Dimensionality reduction, Correlation, Prevention and mitigation, Semantics, Natural languages, Training data BibRef

Shi, L.[Luyao], Vijayaraghavan, P.[Prashanth], Degan, E.[Ehsan],
Data-free Model Fusion with Generator Assistants,
ZeroShot24(7731-7739)
IEEE DOI 2410
Training, Art, Fuses, Neural networks, Training data, Dogs, Generators, Data Free, Model Fusion, Knowledge Amalgamation, Model Merging BibRef

Sudhakar, S.[Sruthi], Hanzelka, J.[Jon], Bobillot, J.[Josh], Randhavane, T.[Tanmay], Joshi, N.[Neel], Vineet, V.[Vibhav],
Exploring the Sim2Real Gap using Digital Twins,
ICCV23(20361-20370)
IEEE DOI 2401
Simulated data. BibRef

Hong, C.[Chunsan], Cha, B.[Byunghee], Kim, B.H.[Bo-Hyung], Oh, T.H.[Tae-Hyun],
Enhancing Classification Accuracy on Limited Data via Unconditional GAN,
LIMIT23(1049-1057)
IEEE DOI 2401
BibRef

Matey-Sanz, M.[Miguel], Torres-Sospedra, J.[Joaquín], González-Pérez, A.[Alberto], Casteleyn, S.[Sven], Granell, C.[Carlos],
Analysis and Impact of Training Set Size in Cross-subject Human Activity Recognition,
CIARP23(I:391-405).
Springer DOI 2312
BibRef

Feng, W.[Wei], Gao, X.T.[Xin-Ting], Dauphin, G.[Gabriel], Quan, Y.H.[Ying-Hui],
Rotation XGBoost Based Method for Hyperspectral Image Classification with Limited Training Samples,
ICIP23(900-904)
IEEE DOI 2312
BibRef

Jeong, J.[Jongheon], Yu, S.[Sihyun], Lee, H.[Hankook], Shin, J.[Jinwoo],
Enhancing Multiple Reliability Measures via Nuisance-Extended Information Bottleneck,
CVPR23(16206-16218)
IEEE DOI 2309
WWW Link. BibRef

Patel, D.[Deep], Sastry, P.S.,
Adaptive Sample Selection for Robust Learning under Label Noise,
WACV23(3921-3931)
IEEE DOI 2302
Training, Knowledge engineering, Deep learning, Heuristic algorithms, Neural networks, Benchmark testing, and algorithms (including transfer) BibRef

Vanderschueren, A.[Antoine], de Vleeschouwer, C.[Christophe],
Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training?,
WACV23(3797-3806)
IEEE DOI 2302
Training, Neural networks, Estimation, Turning, Computational complexity, Applications: Embedded sensing/real-time techniques BibRef

Deng, S.Q.[Si-Qi], Xiong, Y.J.[Yuan-Jun], Wang, M.[Meng], Xia, W.[Wei], Soatto, S.[Stefano],
Harnessing Unrecognizable Faces for Improving Face Recognition,
WACV23(3413-3422)
IEEE DOI 2302
Image recognition, Quantization (signal), Error analysis, Face recognition, Neural networks, Lighting, Detectors, body pose BibRef

Cheng, M.[Miao], You, X.G.[Xin-Ge],
Leachable Component Clustering,
ICPR22(1858-1864)
IEEE DOI 2212
Dealing with incomplete training data. Data handling, Estimation, Clustering algorithms, Information processing, Mathematical models, Data models, calculation efficiency BibRef

Wad, T.[Tan], Sun, Q.[Qianru], Pranata, S.[Sugiri], Jayashree, K.[Karlekar], Zhang, H.W.[Han-Wang],
Equivariance and Invariance Inductive Bias for Learning from Insufficient Data,
ECCV22(XI:241-258).
Springer DOI 2211
BibRef

Wang, K.[Kai], Zhao, B.[Bo], Peng, X.Y.[Xiang-Yu], Zhu, Z.[Zheng], Yang, S.[Shuo], Wang, S.[Shuo], Huang, G.[Guan], Bilen, H.[Hakan], Wang, X.C.[Xin-Chao], You, Y.[Yang],
CAFE: Learning to Condense Dataset by Aligning Features,
CVPR22(12186-12195)
IEEE DOI 2210
Training, Heart, Costs, Performance gain, Efficient learning and inferences, Image and video synthesis and generation BibRef

Lokhande, V.S.[Vishnu Suresh], Chakraborty, R.[Rudrasis], Ravi, S.N.[Sathya N.], Singh, V.[Vikas],
Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets,
CVPR22(10422-10431)
IEEE DOI 2210
Pool multiple datasets, especially disease data. Representation learning, Neuroimaging, Codes, Atmospheric measurements, Neural networks, Particle measurements, Statistical methods BibRef

Mahmood, R.[Rafid], Lucas, J.[James], Acuna, D.[David], Li, D.Q.[Dai-Qing], Philion, J.[Jonah], Alvarez, J.M.[Jose M.], Yu, Z.D.[Zhi-Ding], Fidler, S.[Sanja], Law, M.T.[Marc T.],
How Much More Data Do I Need? Estimating Requirements for Downstream Tasks,
CVPR22(275-284)
IEEE DOI 2210
Costs, Data acquisition, Training data, Estimation, Machine learning, Machine learning, Datasets and evaluation, retrieval BibRef
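
Extrapolating a learning curve from a few pilot runs is one common way to answer the "how much more data" question posed above; the power-law form `err ≈ a·n^(−b)` and the helper below are an illustrative assumption, not the authors' regression model:

```python
import numpy as np

def samples_for_target(ns, errs, target_err):
    """Fit err ~ a * n**(-b) to pilot (training-size, error) pairs by
    least squares in log-log space, then invert the fitted curve to
    estimate the training-set size that reaches target_err."""
    slope, log_a = np.polyfit(np.log(ns), np.log(errs), 1)
    a, b = np.exp(log_a), -slope           # slope is -b for a decaying curve
    return (a / target_err) ** (1.0 / b)
```

In practice such extrapolations tend to be optimistic near the irreducible-error floor, which is part of what the paper studies.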

Lemmer, S.J.[Stephan J.], Corso, J.J.[Jason J.],
Ground-Truth or DAER: Selective Re-Query of Secondary Information,
ICCV21(683-694)
IEEE DOI 2203
Training, Degradation, Crowdsourcing, Visualization, Computational modeling, Estimation, Vision + other modalities, Machine learning architectures and formulations BibRef

Kim, T.S.[Tae Soo], Shim, B.[Bohoon], Peven, M.[Michael], Qiu, W.C.[Wei-Chao], Yuille, A.L.[Alan L.], Hager, G.D.[Gregory D.],
Learning from Synthetic Vehicles,
RWSurvil22(500-508)
IEEE DOI 2202
WWW Link. For training vehicle recognition systems. Image recognition, Error analysis, Conferences, Multitasking, Task analysis BibRef

Kataoka, H.[Hirokatsu], Matsumoto, A.[Asato], Yamada, R.[Ryosuke], Satoh, Y.[Yutaka], Yamagata, E.[Eisuke], Inoue, N.[Nakamasa],
Formula-driven Supervised Learning with Recursive Tiling Patterns,
HTCV21(4081-4088)
IEEE DOI 2112
Trained without real data. Training, Visualization, Supervised learning, Image representation, Feature extraction BibRef

Kim, Y.D.[Young-Dong], Yun, J.[Juseung], Shon, H.[Hyounguk], Kim, J.[Junmo],
Joint Negative and Positive Learning for Noisy Labels,
CVPR21(9437-9446)
IEEE DOI 2111
Training, Costs, Filtering, Pipelines, Training data BibRef

Jia, R.X.[Ruo-Xi], Wu, F.[Fan], Sun, X.[Xuehui], Xu, J.C.[Jia-Cen], Dao, D.[David], Kailkhura, B.[Bhavya], Zhang, C.[Ce], Li, B.[Bo], Song, D.[Dawn],
Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?,
CVPR21(8235-8243)
IEEE DOI 2111
Training, Runtime, Scalability, Data acquisition, Watermarking, Machine learning BibRef

Liu, W.Y.[Wei-Yang], Lin, R.M.[Rong-Mei], Liu, Z.[Zhen], Rehg, J.M.[James M.], Paull, L.[Liam], Xiong, L.[Li], Song, L.[Le], Weller, A.[Adrian],
Orthogonal Over-Parameterized Training,
CVPR21(7247-7256)
IEEE DOI 2111
Training, Scalability, Neurons, Optimized production technology BibRef

Cole, E.[Elijah], Mac Aodha, O.[Oisin], Lorieul, T.[Titouan], Perona, P.[Pietro], Morris, D.[Dan], Jojic, N.[Nebojsa],
Multi-Label Learning from Single Positive Labels,
CVPR21(933-942)
IEEE DOI 2111
Training, Image resolution, Annotations, Training data, Standards BibRef

Hara, K.[Kensho], Ishikawa, Y.[Yuchi], Kataoka, H.[Hirokatsu],
Rethinking Training Data for Mitigating Representation Biases in Action Recognition,
HVU21(3344-3348)
IEEE DOI 2109
Training, Solid modeling, Computational modeling, Dynamics, Training data, Data models BibRef

Bisla, D.[Devansh], Saridena, A.N.[Apoorva Nandini], Choromanska, A.[Anna],
A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs,
TCV21(3264-3274)
IEEE DOI 2109
How error relates to sample size in deep learning. Training, Computational modeling, Measurement uncertainty, Statistical learning, Neural networks, Training data, Safety BibRef

Azuri, I.[Idan], Weinshall, D.[Daphna],
Generative Latent Implicit Conditional Optimization when Learning from Small Sample,
ICPR21(8584-8591)
IEEE DOI 2105
Training, Interpolation, Generators, Optimization, Image classification BibRef

Lokhande, V.S.[Vishnu Suresh], Akash, A.K.[Aditya Kumar], Ravi, S.N.[Sathya N.], Singh, V.[Vikas],
FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret,
ECCV20(XII: 365-381).
Springer DOI 2010
Deals with bias. BibRef

Raisi, E.[Elaheh], Bach, S.H.[Stephen H.],
Selecting Auxiliary Data Using Knowledge Graphs for Image Classification with Limited Labels,
VL3W20(4026-4031)
IEEE DOI 2008
Training with too little labeled data. Task analysis, Training, Visualization, Neural networks, Error analysis, Computational modeling, Semantics BibRef

Wang, Y.[Yang], Cao, Y.[Yang], Zha, Z.J.[Zheng-Jun], Zhang, J.[Jing], Xiong, Z.W.[Zhi-Wei],
Deep Degradation Prior for Low-Quality Image Classification,
CVPR20(11046-11055)
IEEE DOI 2008
The images are low-quality. Degradation, Frequency division multiplexing, Visualization, Feature extraction, Semantics, Training, Task analysis BibRef

Zhang, Z.Z.[Zi-Zhao], Zhang, H.[Han], Arik, S.Ö.[Sercan Ö.], Lee, H.L.[Hong-Lak], Pfister, T.[Tomas],
Distilling Effective Supervision From Severe Label Noise,
CVPR20(9291-9300)
IEEE DOI 2008
Training, Noise measurement, Noise robustness, Labeling, Neural networks, Training data, Data models BibRef

Rao, R., Rao, S., Nouri, E., Dey, D., Celikyilmaz, A., Dolan, B.,
Quality and Relevance Metrics for Selection of Multimodal Pretraining Data,
MULWS20(4109-4116)
IEEE DOI 2008
Task analysis, Measurement, Visualization, Data models, Training, Tagging, Twitter BibRef

Yang, D., Deng, J.,
Learning to Generate 3D Training Data Through Hybrid Gradient,
CVPR20(776-786)
IEEE DOI 2008
Training, Shape, Training data, Optimization, Pipelines, Task analysis BibRef

Mandal, D.[Devraj], Bharadwaj, S.[Shrisha], Biswas, S.[Soma],
A Novel Self-Supervised Re-labeling Approach for Training with Noisy Labels,
WACV20(1370-1379)
IEEE DOI 2006
mCT-S2R (modified co-teaching with self-supervision and relabeling). Training, Task analysis, Noise measurement, Data models, Training data, Computational modeling, Robustness BibRef

Li, Y.[Yi], Vasconcelos, N.M.[Nuno M.],
REPAIR: Removing Representation Bias by Dataset Resampling,
CVPR19(9564-9573).
IEEE DOI 2002
BibRef

Cui, Y.[Yin], Jia, M.L.[Meng-Lin], Lin, T.Y.[Tsung-Yi], Song, Y.[Yang], Belongie, S.[Serge],
Class-Balanced Loss Based on Effective Number of Samples,
CVPR19(9260-9269).
IEEE DOI 2002
BibRef

Tanno, R.[Ryutaro], Saeedi, A.[Ardavan], Sankaranarayanan, S.[Swami], Alexander, D.C.[Daniel C.], Silberman, N.[Nathan],
Learning From Noisy Labels by Regularized Estimation of Annotator Confusion,
CVPR19(11236-11245).
IEEE DOI 2002
BibRef

Teney, D.[Damien], van den Hengel, A.J.[Anton J.],
Actively Seeking and Learning From Live Data,
CVPR19(1940-1949).
IEEE DOI 2002
BibRef

Dovrat, O.[Oren], Lang, I.[Itai], Avidan, S.[Shai],
Learning to Sample,
CVPR19(2755-2764).
IEEE DOI 2002
BibRef

Häufel, G., Bulatov, D., Helmholz, P.,
Statistical Analysis of Airborne Imagery Combined With GIS Information For Training Data Generation,
PIA19(111-118).
DOI Link 1912
BibRef

Deshpande, P.J., Sure, A., Dikshit, O., Tripathi, S.,
A Framework for Estimating Representative Area of a Ground Sample Using Remote Sensing,
IWIDF19(687-692).
DOI Link 1912
BibRef

Unceta, I.[Irene], Nin, J.[Jordi], Pujol, O.[Oriol],
Using Copies to Remove Sensitive Data: A Case Study on Fair Superhero Alignment Prediction,
IbPRIA19(I:182-193).
Springer DOI 1910
BibRef

Unceta, I.[Irene], Nin, J.[Jordi], Pujol, O.[Oriol],
Copying machine learning classifiers,
Online2019.
WWW Link. BibRef 1900

Ghosh, P.[Pallabi], Davis, L.S.[Larry S.],
Understanding Center Loss Based Network for Image Retrieval with Few Training Data,
WiCV-E18(IV:717-722).
Springer DOI 1905
BibRef

Alvi, M.[Mohsan], Zisserman, A.[Andrew], Nellåker, C.[Christoffer],
Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings,
BEFace18(I:556-572).
Springer DOI 1905
Training data biases. BibRef

Mundhenk, T.N.[T. Nathan], Ho, D.[Daniel], Chen, B.Y.[Barry Y.],
Improvements to Context Based Self-Supervised Learning,
CVPR18(9339-9348)
IEEE DOI 1812
Image color analysis, Task analysis, Jitter, Standards, Neural networks, Semantics, Network architecture BibRef

Wang, X.D.[Xu-Dong], Liu, Z.W.[Zi-Wei], Yu, S.X.[Stella X.],
Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination,
CVPR21(12581-12590)
IEEE DOI 2111
Training, Codes, Transfer learning, Distributed databases, Benchmark testing BibRef

Wu, Z.R.[Zhi-Rong], Xiong, Y.J.[Yuan-Jun], Yu, S.X.[Stella X.], Lin, D.[Dahua],
Unsupervised Feature Learning via Non-parametric Instance Discrimination,
CVPR18(3733-3742)
IEEE DOI 1812
Use neural nets for unsupervised learning. Measurement, Task analysis, Training, Neural networks, Supervised learning, Testing, Visualization BibRef

Novotny, D.[David], Albanie, S.[Samuel], Larlus, D.[Diane], Vedaldi, A.[Andrea],
Self-Supervised Learning of Geometrically Stable Features Through Probabilistic Introspection,
CVPR18(3637-3645)
IEEE DOI 1812
Reduce training data. Task analysis, Semantics, Probabilistic logic, Feature extraction, Neural networks, Visualization BibRef

Keshari, R., Vatsa, M., Singh, R., Noore, A.,
Learning Structure and Strength of CNN Filters for Small Sample Size Training,
CVPR18(9349-9358)
IEEE DOI 1812
Dictionaries, Training, Databases, Training data, Feature extraction, Machine learning BibRef

Jiang, Z., Zhu, X., Tan, W.t., Liston, R.,
Training sample selection for deep learning of distributed data,
ICIP17(2189-2193)
IEEE DOI 1803
Bandwidth, Distributed databases, Machine learning, Measurement, Neural networks, Training, Training data, Deep neural networks, training sample selection BibRef

Rahmani, M.[Mostafa], Atia, G.K.[George K.],
Robust and Scalable Column/Row Sampling from Corrupted Big Data,
RSL-CV17(1818-1826)
IEEE DOI 1802
Algorithm design and analysis, Clustering algorithms, Data models, Robustness, Sparse matrices BibRef

Dixit, M.[Mandar], Kwitt, R.[Roland], Niethammer, M.[Marc], Vasconcelos, N.M.[Nuno M.],
AGA: Attribute-Guided Augmentation,
CVPR17(3328-3336)
IEEE DOI 1711
Generation of artificial samples for training data. Data models, Neural networks, Object recognition, Training. BibRef

Paul, S.[Sujoy], Bappy, J.H.[Jawadul H.], Roy-Chowdhury, A.K.[Amit K.],
Non-uniform Subset Selection for Active Learning in Structured Data,
CVPR17(830-839)
IEEE DOI 1711
Data models, Entropy, Feature extraction, Labeling, Manuals, Uncertainty BibRef

Bappy, J.H.[Jawadul H.], Paul, S.[Sujoy], Tuncel, E.[Ertem], Roy-Chowdhury, A.K.[Amit K.],
The Impact of Typicality for Informative Representative Selection,
CVPR17(771-780)
IEEE DOI 1711
Selection of training samples. Activity recognition, Computational modeling, Context modeling, Data models, Entropy BibRef

Tejada, J.[Javier], Alexandrov, M.[Mikhail], Skitalinskaya, G.[Gabriella], Stefanovskiy, D.[Dmitry],
Selection of Statistically Representative Subset from a Large Data Set,
CIARP16(476-483).
Springer DOI 1703
BibRef

Canévet, O.[Olivier], Fleuret, F.[François],
Large Scale Hard Sample Mining with Monte Carlo Tree Search,
CVPR16(5128-5137)
IEEE DOI 1612
Finds false positives. BibRef

Gan, C.[Chuang], Yao, T.[Ting], Yang, K.Y.[Kui-Yuan], Yang, Y.[Yi], Mei, T.[Tao],
You Lead, We Exceed: Labor-Free Video Concept Learning by Jointly Exploiting Web Videos and Images,
CVPR16(923-932)
IEEE DOI 1612
BibRef

Masi, I.[Iacopo], Hassner, T.[Tal], Tran, A.T.[Anh Tuan], Medioni, G.[Gérard],
Rapid Synthesis of Massive Face Sets for Improved Face Recognition,
FG17(604-611)
IEEE DOI 1707
Cameras, Face, Face recognition, Rendering (computer graphics), Shape, Standards, BibRef

Masi, I.[Iacopo], Tran, A.T.[Anh Tuan], Hassner, T.[Tal], Leksut, J.T.[Jatuporn Toy], Medioni, G.[Gérard],
Do We Really Need to Collect Millions of Faces for Effective Face Recognition?,
ECCV16(V: 579-596).
Springer DOI 1611
BibRef

Richter, S.R.[Stephan R.], Vineet, V.[Vibhav], Roth, S.[Stefan], Koltun, V.[Vladlen],
Playing for Data: Ground Truth from Computer Games,
ECCV16(II: 102-118).
Springer DOI 1611
Using game images for data. BibRef

Choi, M.K., Lee, H.G., Lee, S.C.,
Weighted SVM with classification uncertainty for small training samples,
ICIP16(4438-4442)
IEEE DOI 1610
Machine vision BibRef

Pourashraf, P.[Payam], Tomuro, N.[Noriko],
Use of a Large Image Repository to Enhance Domain Dataset for Flyer Classification,
ISVC15(II: 609-617).
Springer DOI 1601
Issue of adding other images to the training set. BibRef

Tommasi, T.[Tatiana], Patricia, N.[Novi], Caputo, B.[Barbara], Tuytelaars, T.[Tinne],
A Deeper Look at Dataset Bias,
GCPR15(504-516).
Springer DOI 1511
BibRef

Psutka, J.V.[Josef V.], Psutka, J.[Josef],
Sample Size for Maximum Likelihood Estimates of Gaussian Model,
CAIP15(II:462-469).
Springer DOI 1511
BibRef

Abdulhak, S.A.[Sami Abduljalil], Riviera, W.[Walter], Cristani, M.[Marco],
Crowdsearching Training Sets for Image Classification,
CIAP15(I:192-202).
Springer DOI 1511
BibRef

Xiao, T.[Tong], Xia, T.[Tian], Yang, Y.[Yi], Huang, C.[Chang], Wang, X.G.[Xiao-Gang],
Learning from massive noisy labeled data for image classification,
CVPR15(2691-2699)
IEEE DOI 1510
Obtaining large-scale labelled data. BibRef

Shapovalov, R.[Roman], Vetrov, D.[Dmitry], Osokin, A.[Anton], Kohli, P.[Pushmeet],
Multi-utility Learning: Structured-Output Learning with Multiple Annotation-Specific Loss Functions,
EMMCVPR15(406-420).
Springer DOI 1504
Difficult to get needed fully labelled datasets. Use weak annotation. BibRef

Schoeler, M.[Markus], Wörgötter, F.[Florentin], Kulvicius, T.[Tomas], Papon, J.[Jeremie],
Unsupervised Generation of Context-Relevant Training-Sets for Visual Object Recognition Employing Multilinguality,
WACV15(805-812)
IEEE DOI 1503
Clutter; Context; Fasteners; Google; Nails; Search engines; Training BibRef

Desai, C.[Chaitanya], Eledath, J.[Jayan], Sawhney, H.[Harpreet], Bansal, M.[Mayank],
De-correlating CNN Features for Generative Classification,
WACV15(428-435)
IEEE DOI 1503
Accuracy. Training with positive examples, no specific negatives. BibRef

Chatzilari, E.[Elisavet], Nikolopoulos, S.[Spiros], Kompatsiaris, Y.F.[Yi-Fannis], Kittler, J.V.[Josef V.],
How many more images do we need? Performance prediction of bootstrapping for image classification,
ICIP14(4256-4260)
IEEE DOI 1502
Computational modeling BibRef

Spehr, M.[Marcel], Grottel, S.[Sebastian], Gumhold, S.[Stefan],
Wifbs: A Web-Based Image Feature Benchmark System,
MMMod15(II: 159-170).
Springer DOI 1501
BibRef

Chellasamy, M., Ty Ferre, P.A., Humlekrog Greve, M.,
Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach,
Thematic14(63-69).
DOI Link 1404
BibRef

Plasencia-Calaña, Y.[Yenisel], Orozco-Alzate, M.[Mauricio], Méndez-Vázquez, H.[Heydi], García-Reyes, E.[Edel], Duin, R.P.W.[Robert P. W.],
Towards Scalable Prototype Selection by Genetic Algorithms with Fast Criteria,
SSSPR14(343-352).
Springer DOI 1408
BibRef

Henriques, J.F.[Joao F.], Carreira, J.[Joao], Caseiro, R.[Rui], Batista, J.P.[Jorge P.],
Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition,
ICCV13(2760-2767)
IEEE DOI 1403
Block-diagonalization; for selecting training samples. BibRef

Vahdat, A.[Arash], Mori, G.[Greg],
Handling Uncertain Tags in Visual Recognition,
ICCV13(737-744)
IEEE DOI 1403
Gathering training data. BibRef

Wu, W.N.[Wei-Ning], Liu, Y.[Yang], Zeng, W.[Wei], Guo, M.[Maozu], Wang, C.Y.[Chun-Yu], Liu, X.Y.[Xiao-Yan],
Effective constructing training sets for object detection,
ICIP13(3377-3380)
IEEE DOI 1402
active learning; labeling cost; object detection; sampling strategy BibRef

Li, W.B.[Wen-Bin], Fritz, M.[Mario],
Recognizing Materials from Virtual Examples,
ECCV12(IV: 345-358).
Springer DOI 1210
BibRef

Castrillón-Santana, M.[Modesto], Hernández-Sosa, D.[Daniel], Lorenzo-Navarro, J.[Javier],
Viola-Jones Based Detectors: How Much Affects the Training Set?,
IbPRIA11(297-304).
Springer DOI 1106
BibRef

Hong, G.X.[Guo-Xiang], Huang, C.L.[Chung-Lin], Hsu, S.C.[Shih-Chung], Tsai, C.H.[Chi-Hung],
Optimal Training Set Selection for Video Annotation,
PSIVT10(7-14).
IEEE DOI 1011
BibRef

Hong, X.P.[Xiao-Peng], Zhao, G.Y.[Guo-Ying], Ren, H.Y.[Hao-Yu], Chen, X.L.[Xi-Lin],
Efficient Boosted Weak Classifiers for Object Detection,
SCIA13(205-214).
Springer DOI 1311
BibRef

Ren, H.Y.[Hao-Yu], Hong, X.P.[Xiao-Peng], Heng, C.K.[Cher-Keng], Liang, L.H.[Lu-Hong], Chen, X.L.[Xi-Lin],
A Sample Pre-mapping Method Enhancing Boosting for Object Detection,
ICPR10(3005-3008).
IEEE DOI 1008
Improves training efficiency. BibRef

Eaton, R., Lowell, J., Snorrason, M., Irvine, J.M., Mills, J.,
Rapid training of image classifiers through adaptive, multi-frame sampling method,
AIPR08(1-7).
IEEE DOI 0810
BibRef

Jia, Y.Q.[Yang-Qing], Salzmann, M.[Mathieu], Darrell, T.J.[Trevor J.],
Learning cross-modality similarity for multinomial data,
ICCV11(2407-2414).
IEEE DOI 1201
BibRef

Christoudias, C.M.[C. Mario], Urtasun, R.[Raquel], Salzmann, M.[Mathieu], Darrell, T.J.[Trevor J.],
Learning to Recognize Objects from Unseen Modalities,
ECCV10(I: 677-691).
Springer DOI 1009
Modalities not in the training set are available. BibRef

Christoudias, C.M.[C. Mario], Urtasun, R.[Raquel], Kapoorz, A.[Ashish], Darrell, T.J.[Trevor J.],
Co-training with noisy perceptual observations,
CVPR09(2844-2851).
IEEE DOI 0906
BibRef

Lapedriza, À.[Àgata], Masip, D.[David], Vitrià, J.[Jordi],
A Hierarchical Approach for Multi-task Logistic Regression,
IbPRIA07(II: 258-265).
Springer DOI 0706
small number of samples for training. BibRef

Sugiyama, M.[Masashi], Blankertz, B.[Benjamin], Krauledat, M.[Matthias], Dornhege, G.[Guido], Müller, K.R.[Klaus-Robert],
Importance-Weighted Cross-Validation for Covariate Shift,
DAGM06(354-363).
Springer DOI 0610
Training data distribution differs from the test data. BibRef

Kim, S.W.[Sang-Woon],
On Using a Dissimilarity Representation Method to Solve the Small Sample Size Problem for Face Recognition,
ACIVS06(1174-1185).
Springer DOI 0609
BibRef

Ren, J.L.[Jun-Ling],
A Pattern Selection Algorithm Based on the Generalized Confidence,
ICPR06(II: 824-827).
IEEE DOI 0609
Selecting the patterns that matter in training. BibRef

Cazes, T.B., Feitosa, R.Q., Mota, G.L.A.,
Automatic Selection of Training Samples for Multitemporal Image Classification,
ICIAR04(II: 389-396).
Springer DOI 0409
BibRef

Yang, C.B.[Chang-Bo], Dong, M.[Ming], Fotouhi, F.[Farshad],
Learning the Semantics in Image Retrieval: A Natural Language Processing Approach,
MMDE04(137).
IEEE DOI 0406
BibRef

Yang, C.B.[Chang-Bo], Dong, M.[Ming], Fotouhi, F.[Farshad],
Image Content Annotation Using Bayesian Framework and Complement Components Analysis,
ICIP05(I: 1193-1196).
IEEE DOI 0512
BibRef

Vázquez, F.D.[Fernando D.], Salvador Sánchez, J., Pla, F.[Filiberto],
Learning and Forgetting with Local Information of New Objects,
CIARP08(261-268).
Springer DOI 0809
BibRef

Vázquez, F.D.[Fernando D.], Salvador-Sánchez, J., Pla, F.[Filiberto],
A Stochastic Approach to Wilson's Editing Algorithm,
IbPRIA05(II:35).
Springer DOI 0509

See also Asymptotic properties of nearest neighbor rules using edited data. BibRef

Angelova, A.[Anelia], Abu-Mostafa, Y.[Yaser], Perona, P.[Pietro],
Pruning Training Sets for Learning of Object Categories,
CVPR05(I: 494-501).
IEEE DOI 0507
BibRef

Franco, A., Maltoni, D., Nanni, L.,
Reward-punishment editing,
ICPR04(IV: 424-427).
IEEE DOI 0409
Editing: remove patterns in the training set that are not classified correctly.
See also Asymptotic properties of nearest neighbor rules using edited data. BibRef

Kuhl, A., Krüger, L., Wöhler, C., Kressel, U.,
Training of classifiers using virtual samples only,
ICPR04(III: 418-421).
IEEE DOI 0409
BibRef

Juszczak, P., Duin, R.P.W.,
Selective sampling based on the variation in label assignments,
ICPR04(III: 375-378).
IEEE DOI 0409
BibRef

Sprevak, D., Azuaje, F., Wang, H.,
A non-random data sampling method for classification model assessment,
ICPR04(III: 406-409).
IEEE DOI 0409
BibRef

Levin, A., Viola, P.A., Freund, Y.,
Unsupervised improvement of visual detectors using co-training,
ICCV03(626-633).
IEEE DOI 0311
Train detectors with limited data, then use them to label more data; two classifiers are trained at once. Applied to vehicle tracking. BibRef

Kim, D.S.[Dong Sik], Lee, K.Y.[Kir-Yung],
Training sequence size in clustering algorithms and averaging single-particle images,
ICIP03(II: 435-438).
IEEE DOI 0312
BibRef

Johnson, A.Y., Sun, J.[Jie], Bobick, A.F.,
Using similarity scores from a small gallery to estimate recognition performance for larger galleries,
AMFG03(100-103).
IEEE DOI 0311
BibRef

Paredes, R., Vidal, E., Keysers, D.,
An evaluation of the WPE algorithm using tangent distance,
ICPR02(IV: 48-51).
IEEE DOI 0211
Weighted Prototype Editing. BibRef

Veeramachaneni, S.[Sriharsha], Nagy, G.[George],
Classifier Adaptation with Non-representative Training Data,
DAS02(123 ff.).
Springer DOI 0303
BibRef

Maletti, G., Ersbøll, B.K., Conradsen, K., Lira, J.,
An Initial Training Set Generation Scheme,
SCIA01(P-W3B). 0206
BibRef

Fursov, V.A.,
Training in Pattern Recognition from a Small Number of Observations Using Projections Onto Null-space,
ICPR00(Vol II: 785-788).
IEEE DOI 0009
BibRef

Miyamoto, T., Mitani, Y., Hamamoto, Y.,
Use of Bootstrap Samples in Quadratic Classifier Design,
ICPR00(Vol II: 789-792).
IEEE DOI 0009
BibRef

Mayer, H.A.[Helmut A.], Huber, R.[Reinhold],
ERC: Evolutionary Resample and Combine for Adaptive Parallel Training Data Set Selection,
ICPR98(Vol I: 882-885).
IEEE DOI 9808
BibRef

Takacs, B.[Barnabas], Sadovnik, L.[Lev], Wechsler, H.[Harry],
Optimal Training Set Design for 3D Object Recognition,
ICPR98(Vol I: 558-560).
IEEE DOI 9808
BibRef

Nedeljkovic, V., Milosavljevic, M.,
On the influence of the training set data preprocessing on neural networks training,
ICPR92(II:33-36).
IEEE DOI 9208
BibRef

Ferri, F.J., Vidal, E.,
Small sample size effects in the use of editing techniques,
ICPR92(II:607-610).
IEEE DOI 9208
BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Small Sample Sizes Issues, Data analysis, Training Sets .


Last update: Nov 26, 2024 at 16:40:19