Hierarchical Mixtures of Experts and the EM Algorithm,
NeurComp(6), 1994, pp. 181-214. Combining results. BibRef 9400
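The entry above concerns mixture-of-experts models, in which a gating network blends expert outputs. A minimal sketch of a flat softmax-gated mixture with toy linear experts (illustrative parameters only; the cited paper fits hierarchical mixtures by EM, which is not reproduced here):

```python
import numpy as np

# Toy mixture-of-experts prediction: softmax gating weights blend the
# outputs of linear experts. All parameters here are random placeholders.
rng = np.random.default_rng(0)
n_experts, dim = 3, 4
W_gate = rng.normal(size=(n_experts, dim))      # gating-network parameters
W_experts = rng.normal(size=(n_experts, dim))   # one linear expert per row

def moe_predict(x):
    logits = W_gate @ x
    g = np.exp(logits - logits.max())
    g /= g.sum()                                # softmax gating weights, sum to 1
    expert_out = W_experts @ x                  # each expert's scalar output
    return float(g @ expert_out)                # gate-weighted combination

x = rng.normal(size=dim)
print(moe_predict(x))
```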
Hinton, G.E.[Geoffrey E.],
Training products of experts by minimizing contrastive divergence,
NeurComp(14), No. 8, 2002, pp. 1771-1800.
DOI Link BibRef 0200
Raudys, S.J.[Sarunas J.],
Experts' boasting in trainable fusion rules,
PAMI(25), No. 9, September 2003, pp. 1178-1182.
IEEE Abstract. 0309
Experts can bias the fusion rule when the experts and the fusion rule are trained on the same data. BibRef
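The bias noted above can be illustrated in a toy setup: a fusion rule fit on the experts' own training data yields an optimistic resubstitution error relative to fresh data. A hedged sketch (generic least-squares "experts" and fusion, not the paper's protocol):

```python
import numpy as np

# Two least-squares "experts" on overlapping feature subsets, fused by a
# linear rule trained on the SAME data as the experts. The resubstitution
# error on that data is typically an optimistic estimate of fresh-data error.
rng = np.random.default_rng(1)

def make_data(n):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

Xtr, ytr = make_data(60)       # small training set shared by experts and fusion
Xte, yte = make_data(5000)     # large fresh set for an unbiased error estimate

def fit(X, y):
    # Least-squares linear classifier (targets centered at 0).
    return np.linalg.lstsq(X, y - 0.5, rcond=None)[0]

w1 = fit(Xtr[:, :3], ytr)      # expert 1: first three features
w2 = fit(Xtr[:, 2:], ytr)      # expert 2: last three features

def scores(X):
    return np.column_stack([X[:, :3] @ w1, X[:, 2:] @ w2])

wf = fit(scores(Xtr), ytr)     # fusion rule trained on the experts' data

def err(X, y):
    return np.mean(((scores(X) @ wf) > 0) != y)

print("resubstitution error:", err(Xtr, ytr))
print("fresh-data error:", err(Xte, yte))
```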
A regularized minimum cross-entropy algorithm on mixtures of experts for time series prediction and curve detection,
PRL(27), No. 9, July 2006, pp. 947-955.
Elsevier DOI Regularization theory; Model selection; Time series prediction; Curve detection 0605
A mixture of experts committee machine to design compensators for intensity modulated radiation therapy,
PR(39), No. 9, September 2006, pp. 1704-1714.
Elsevier DOI 0606
Committee machines; Neural networks; Fuzzy C-means; Compensators; Radiation therapy BibRef
Biased Mixtures of Experts: Enabling Computer Vision Inference Under Data Transfer Limitations,
IP(29), 2020, pp. 7656-7667.
IEEE DOI 2007
Mixtures of experts, constrained data transfer, single shot object detection, single image super resolution, real-time action classification BibRef
Haredasht, F.N.[Fateme Nateghi],
Supervised fuzzy partitioning,
PR(97), 2020, pp. 107013.
Elsevier DOI 1910
Supervised k-means, Centroid-based clustering, Entropy-based regularization, Feature weighting, Mixtures of experts BibRef
Bicici, U.C.[Ufuk Can],
Conditional information gain networks as sparse mixture of experts,
PR(120), 2021, pp. 108151.
Elsevier DOI 2109
Machine learning, Deep learning, Conditional deep learning BibRef
Regularized Gradient Descent Training of Steered Mixture of Experts for Sparse Image Representation,
IEEE DOI 1809
Kernel, Training, Optimization, Logic gates, Task analysis, Gaussian mixture model, Sparse Image Representation, Denoising BibRef
Hard Mixtures of Experts for Large Scale Weakly Supervised Vision,
IEEE DOI 1711
Data models, Decoding, Logic gates, Predictive models, Standards, Training BibRef
Expert Gate: Lifelong Learning with a Network of Experts,
IEEE DOI 1711
Data models, Load modeling, Logic gates, Neural networks, Training, Training data BibRef
Combining the advice of experts with randomized boosting for robust pattern recognition,
IEEE DOI 1408
decision making BibRef
Yuksel, S.E.[Seniha Esen],
Gader, P.D.[Paul D.],
Variational Mixture of Experts for Classification with Applications to Landmine Detection,
IEEE DOI 1008
Fancourt, C.L.[Craig L.],
Principe, J.C.[Jose C.],
Soft Competitive Principal Component Analysis Using the Mixture of Experts,
DARPA97(1071-1076). BibRef 9700
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Hierarchical Combination, Multi-Stage Classifiers.