*Jordan, M.I.*,
*Jacobs, R.A.*,

**Hierarchical Mixtures of Experts and the EM Algorithm**,

*NeurComp(6)*, 1994, pp. 181-214.
Combining results.
BibRef
**9400**

*Hinton, G.E.[Geoffrey E.]*,

**Training products of experts by minimizing contrastive divergence**,

*NeurComp(14)*, No. 8, 2002, pp. 1771-1800.

DOI Link
BibRef
**0200**

*Raudys, S.J.[Sarunas J.]*,

**Experts' boasting in trainable fusion rules**,

*PAMI(25)*, No. 9, September 2003, pp. 1178-1182.

IEEE Abstract.
**0309**

Experts can bias the fusion rule when the experts and the fusion rule
are trained on the same data.
BibRef
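The data-reuse bias noted above can be illustrated with a small synthetic sketch (everything here is hypothetical, not from the paper): two single-feature threshold "experts" and an accuracy-weighted fusion rule are all fitted on the same sample, so the fusion accuracy measured on that sample is an optimistic estimate compared with held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: the label weakly drives two noisy features.
n = 400
y = rng.integers(0, 2, n)
X = np.column_stack([y + rng.normal(0, 1.5, n),   # feature seen by expert 1
                     y + rng.normal(0, 1.5, n)])  # feature seen by expert 2

train, test = slice(0, 200), slice(200, 400)

def fit_threshold(x, labels):
    # Each "expert" is a one-feature threshold rule chosen to maximize
    # accuracy on the data it is fitted to.
    candidates = np.linspace(x.min(), x.max(), 101)
    accs = [np.mean((x > t).astype(int) == labels) for t in candidates]
    return candidates[int(np.argmax(accs))]

thresholds = [fit_threshold(X[train, j], y[train]) for j in range(2)]

def expert_votes(idx):
    return np.column_stack([(X[idx, j] > thresholds[j]).astype(int)
                            for j in range(2)])

# Fusion rule: weight each expert by its accuracy on the SAME training
# sample the thresholds were fitted on -- the source of the bias.
w = np.array([np.mean(expert_votes(train)[:, j] == y[train])
              for j in range(2)])

def fused_accuracy(idx):
    scores = expert_votes(idx) @ w / w.sum()
    return np.mean((scores > 0.5).astype(int) == y[idx])

acc_train = fused_accuracy(train)  # optimistic: data reused twice
acc_test = fused_accuracy(test)    # honest estimate on held-out data
```

Because both the thresholds and the fusion weights are optimized on the training sample, `acc_train` overstates how well the fused rule generalizes; `acc_test` is the fairer figure.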

*Lu, Z.W.[Zhi-Wu]*,

**A regularized minimum cross-entropy algorithm on mixtures of experts
for time series prediction and curve detection**,

*PRL(27)*, No. 9, July 2006, pp. 947-955.

Elsevier DOI
**0605**

Regularization theory; Model selection; Time series prediction; Curve detection

BibRef

*Goodband, J.H.*,
*Haas, O.C.L.*,
*Mills, J.A.*,

**A mixture of experts committee machine to design compensators for
intensity modulated radiation therapy**,

*PR(39)*, No. 9, September 2006, pp. 1704-1714.

Elsevier DOI
**0606**

Committee machines; Neural networks; Fuzzy C-means; Compensators;
Radiation therapy
BibRef

*Abbas, A.*,
*Andreopoulos, Y.*,

**Biased Mixtures of Experts: Enabling Computer Vision Inference Under
Data Transfer Limitations**,

*IP(29)*, 2020, pp. 7656-7667.

IEEE DOI
**2007**

Mixtures of experts, constrained data transfer,
single shot object detection, single image super resolution,
real-time action classification
BibRef

*Ashtari, P.[Pooya]*,
*Haredasht, F.N.[Fateme Nateghi]*,
*Beigy, H.[Hamid]*,

**Supervised fuzzy partitioning**,

*PR(97)*, 2020, pp. 107013.

Elsevier DOI
**1910**

Supervised k-means, Centroid-based clustering,
Entropy-based regularization, Feature weighting, Mixtures of experts
BibRef

*Bicici, U.C.[Ufuk Can]*,
*Akarun, L.[Lale]*,

**Conditional information gain networks as sparse mixture of experts**,

*PR(120)*, 2021, pp. 108151.

Elsevier DOI
**2109**

Machine learning, Deep learning, Conditional deep learning
BibRef


*Bochinski, E.*,
*Jongebloed, R.*,
*Tok, M.*,
*Sikora, T.*,

**Regularized Gradient Descent Training of Steered Mixture of Experts
for Sparse Image Representation**,

*ICIP18*(3873-3877)

IEEE DOI
**1809**

Kernel, Training, Optimization, Logic gates, Task analysis,
Gaussian mixture model, Sparse Image Representation,
Denoising
BibRef

*Gross, S.*,
*Ranzato, M.[Marc'Aurelio]*,
*Szlam, A.[Arthur]*,

**Hard Mixtures of Experts for Large Scale Weakly Supervised Vision**,

*CVPR17*(5085-5093)

IEEE DOI
**1711**

Data models, Decoding, Logic gates, Predictive models, Standards, Training
BibRef

*Aljundi, R.[Rahaf]*,
*Chakravarty, P.[Punarjay]*,
*Tuytelaars, T.[Tinne]*,

**Expert Gate: Lifelong Learning with a Network of Experts**,

*CVPR17*(7120-7129)

IEEE DOI
**1711**

Data models, Load modeling, Logic gates, Neural networks, Training,
Training data
BibRef

*Peng, J.[Jing]*,
*Seetharaman, G.*,

**Combining the advice of experts with randomized boosting for robust
pattern recognition**,

*AIPR13*(1-7)

IEEE DOI
**1408**

decision making
BibRef

*Yuksel, S.E.[Seniha Esen]*,
*Gader, P.D.[Paul D.]*,

**Variational Mixture of Experts for Classification with Applications to
Landmine Detection**,

*ICPR10*(2981-2984).

IEEE DOI
**1008**

BibRef

*Fancourt, C.L.[Craig L.]*,
*Principe, J.C.[Jose C.]*,

**Soft Competitive Principal Component Analysis Using
the Mixture of Experts**,

*DARPA97*(1071-1076).
BibRef
**9700**

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in

Hierarchical Combination, Multi-Stage Classifiers.

Last update: Mar 6, 2023 at 16:04:36