21.3.6.2.8 Multi-Modal Emotion, Multimodal Emotion Recognition

Chapter Contents
Emotion Recognition. Multi-Modal Emotion. Speech Emotion.

Soleymani, M.[Mohammad], Lichtenauer, J., Pun, T.[Thierry], Pantic, M.[Maja],
A Multimodal Database for Affect Recognition and Implicit Tagging,
AffCom(3), No. 1, 2012, pp. 42-55.
IEEE DOI 1202
BibRef

Soleymani, M.[Mohammad], Pantic, M.[Maja], Pun, T.[Thierry],
Multimodal Emotion Recognition in Response to Videos,
AffCom(3), No. 2, 2012, pp. 211-223.
IEEE DOI 1208
BibRef

McKeown, G., Valstar, M.F., Cowie, R., Pantic, M., Schroder, M.,
The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent,
AffCom(3), No. 1, 2012, pp. 5-17.
IEEE DOI 1202
BibRef

Wagner, J., Andre, E., Lingenfelser, F., Kim, J.H.[Jong-Hwa],
Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data,
AffCom(2), No. 4, 2011, pp. 206-218.
IEEE DOI 1202
BibRef
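For readers skimming this entry, the missing-data problem it studies can be illustrated at the decision level: fuse whatever per-modality class posteriors are available and simply skip absent streams. A minimal sketch under that reading (the function name, weighting scheme, and example scores are illustrative, not the authors' method):

```python
import numpy as np

def late_fusion(posteriors, weights=None):
    """Decision-level fusion over the modalities that are present.

    posteriors: dict mapping modality name -> class-probability vector;
    a missing modality is simply absent from the dict.
    weights: optional dict of per-modality reliability weights.
    Returns the fused class-probability vector.
    """
    if not posteriors:
        raise ValueError("no modality available")
    acc, total = None, 0.0
    for name, p in posteriors.items():
        w = 1.0 if weights is None else weights.get(name, 1.0)
        p = np.asarray(p, dtype=float)
        acc = w * p if acc is None else acc + w * p
        total += w
    fused = acc / total
    return fused / fused.sum()  # renormalize to a distribution

# With the video stream missing, fusion degrades gracefully to audio only.
fused = late_fusion({"audio": [0.7, 0.2, 0.1]})
```

More elaborate schemes in the literature replace the fixed weights with learned, confidence-dependent ones; the skip-if-missing structure stays the same.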

Lu, K.[Kun], Zhang, X.[Xin],
Multimodal Affect Recognition Using Boltzmann Zippers,
IEICE(E96-D), No. 11, November 2013, pp. 2496-2499.
WWW Link. 1311
BibRef

Li, H.B.[Hui-Bin], Ding, H.X.[Hua-Xiong], Huang, D.[Di], Wang, Y.H.[Yun-Hong], Zhao, X.[Xi], Morvan, J.M.[Jean-Marie], Chen, L.M.[Li-Ming],
An efficient multimodal 2D + 3D feature-based approach to automatic facial expression recognition,
CVIU(140), No. 1, 2015, pp. 83-92.
Elsevier DOI 1509
Facial expression recognition BibRef

Zhen, Q.K.[Qing-Kai], Huang, D.[Di], Wang, Y.H.[Yun-Hong], Chen, L.M.[Li-Ming],
Muscular Movement Model-Based Automatic 3D/4D Facial Expression Recognition,
MultMed(18), No. 7, July 2016, pp. 1438-1450.
IEEE DOI 1608
BibRef
Earlier:
Muscular Movement Model Based Automatic 3D Facial Expression Recognition,
MMMod15(I: 522-533).
Springer DOI 1501
emotion recognition BibRef

Zhao, X.[Xi], Dellandrea, E.[Emmanuel], Chen, L.M.[Li-Ming], Kakadiaris, I.A.,
Accurate Landmarking of Three-Dimensional Facial Data in the Presence of Facial Expressions and Occlusions Using a Three-Dimensional Statistical Facial Feature Model,
SMC-B(41), No. 5, October 2011, pp. 1417-1428.
IEEE DOI 1110
BibRef
Earlier: A1, A2, A3, Only:
A 3D Statistical Facial Feature Model and Its Application on Locating Facial Landmarks,
ACIVS09(686-697).
Springer DOI 0909

See also A unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models. BibRef

Zhao, X.[Xi], Szeptycki, P.[Przemyslaw], Dellandrea, E.[Emmanuel], Chen, L.M.[Li-Ming],
Precise 2.5D facial landmarking via an analysis by synthesis approach,
WACV09(1-7).
IEEE DOI 0912
BibRef

Zhao, X.[Xi], Huang, D.[Di], Dellandrea, E.[Emmanuel], Chen, L.M.[Li-Ming],
Automatic 3D Facial Expression Recognition Based on a Bayesian Belief Net and a Statistical Facial Feature Model,
ICPR10(3724-3727).
IEEE DOI 1008
BibRef

Fu, H.Z.[Huan-Zhang], Xiao, Z.Z.[Zhong-Zhe], Dellandréa, E.[Emmanuel], Dou, W.B.[Wei-Bei], Chen, L.M.[Li-Ming],
Image Categorization Using ESFS: A New Embedded Feature Selection Method Based on SFS,
ACIVS09(288-299).
Springer DOI 0909
Feature selection. BibRef

Zhalehpour, S.[Sara], Akhtar, Z.[Zahid], Erdem, C.E.[Cigdem Eroglu],
Multimodal emotion recognition based on peak frame selection from video,
SIViP(10), No. 5, May 2016, pp. 827-834.
WWW Link. 1608
BibRef
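Peak-frame selection, as named in this title, reduces in its simplest form to an argmax over a per-frame expressiveness score. A toy sketch that uses distance from a neutral-face feature vector as that score (a stand-in for the paper's actual measure, not a reimplementation of it):

```python
import numpy as np

def select_peak_frame(frames, neutral):
    """Pick the frame farthest from a neutral reference as the 'peak'.

    frames:  array of shape (T, D), one feature vector per frame.
    neutral: reference feature vector of shape (D,).
    Returns (index, frame) of the maximally expressive frame.
    """
    frames = np.asarray(frames, dtype=float)
    dists = np.linalg.norm(frames - np.asarray(neutral, dtype=float), axis=1)
    idx = int(np.argmax(dists))
    return idx, frames[idx]
```

The classifier is then run on the selected frame only, rather than on the whole sequence.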

Wen, H.W.[Hong-Wei], Liu, Y.[Yue], Rekik, I.[Islem], Wang, S.P.[Sheng-Pei], Chen, Z.Q.[Zhi-Qiang], Zhang, J.S.[Ji-Shui], Zhang, Y.[Yue], Peng, Y.[Yun], He, H.G.[Hui-Guang],
Multi-modal multiple kernel learning for accurate identification of Tourette syndrome children,
PR(63), No. 1, 2017, pp. 601-611.
Elsevier DOI 1612
Tourette syndrome BibRef

Tsalamlal, M.Y., Amorim, M., Martin, J., Ammi, M.,
Combining Facial Expression and Touch for Perceiving Emotional Valence,
AffCom(9), No. 4, October 2018, pp. 437-449.
IEEE DOI 1812
Face recognition, Visualization, Haptic interfaces, Emotion recognition, Human computer interaction, multimodality BibRef

Poria, S.[Soujanya], Majumder, N.[Navonil], Hazarika, D.[Devamanyu], Cambria, E.[Erik], Gelbukh, A.[Alexander], Hussain, A.[Amir],
Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines,
IEEE_Int_Sys(33), No. 6, November 2018, pp. 17-25.
IEEE DOI 1902
Role of speaker models, importance of different modalities, generalizability. Sentiment analysis, Feature extraction, Visualization, Emotion recognition, Affective computing, Intelligent systems BibRef

Lee, J.Y.[Ji-Young], Kim, S.[Sunok], Kim, S.R.[Seung-Ryong], Sohn, K.H.[Kwang-Hoon],
Multi-Modal Recurrent Attention Networks for Facial Expression Recognition,
IP(29), 2020, pp. 6977-6991.
IEEE DOI 2007
Face recognition, Image color analysis, Videos, Emotion recognition, Benchmark testing, Databases, Task analysis, attention mechanism BibRef

Nguyen, D.[Dung], Nguyen, K.[Kien], Sridharan, S.[Sridha], Dean, D.[David], Fookes, C.[Clinton],
Deep spatio-temporal feature fusion with compact bilinear pooling for multimodal emotion recognition,
CVIU(174), 2018, pp. 33-42.
Elsevier DOI 1812
BibRef
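Compact bilinear pooling, as used in this entry for audio-visual fusion, approximates the (very high-dimensional) outer product of two feature vectors with the Tensor Sketch trick: count-sketch each vector, then combine the sketches by circular convolution, computed as a product in the Fourier domain. A minimal numpy sketch of the projection (random hashes and the output dimension are illustrative, not the authors' configuration):

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Count sketch: scatter-add sign-flipped entries of x into d buckets."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x, y, d, seed=0):
    """Tensor Sketch approximation of the outer product of x and y in d dims.

    The circular convolution of the two count sketches equals the count
    sketch of the outer product; the convolution is done via FFT.
    """
    rng = np.random.default_rng(seed)
    hx = rng.integers(0, d, x.size); sx = rng.choice([-1.0, 1.0], x.size)
    hy = rng.integers(0, d, y.size); sy = rng.choice([-1.0, 1.0], y.size)
    fx = np.fft.rfft(count_sketch(np.asarray(x, float), hx, sx, d))
    fy = np.fft.rfft(count_sketch(np.asarray(y, float), hy, sy, d))
    return np.fft.irfft(fx * fy, n=d)
```

Because every step is linear in each input, the pooled feature scales linearly with either modality's features, while its dimension d stays fixed instead of growing as the product of the two input dimensions.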

Nguyen, D.[Dung], Nguyen, K.[Kien], Sridharan, S.[Sridha], Ghasemi, A.[Afsane], Dean, D.[David], Fookes, C.[Clinton],
Deep Spatio-Temporal Features for Multimodal Emotion Recognition,
WACV17(1215-1223).
IEEE DOI 1609
Convolution, Emotion recognition, Face, Feature extraction, Speech, Speech recognition, Streaming, media BibRef

Ghaleb, E., Popa, M., Asteriadis, S.,
Metric Learning-Based Multimodal Audio-Visual Emotion Recognition,
MultMedMag(27), No. 1, January 2020, pp. 37-48.
IEEE DOI 2004
Measurement, Emotion recognition, Visualization, Support vector machines, Feature extraction, Task analysis, Fisher vectors BibRef

Chen, H.F.[Hai-Feng], Jiang, D.M.[Dong-Mei], Sahli, H.[Hichem],
Transformer Encoder With Multi-Modal Multi-Head Attention for Continuous Affect Recognition,
MultMed(23), 2021, pp. 4171-4183.
IEEE DOI 2112
Emotion recognition, Context modeling, Feature extraction, Correlation, Computational modeling, Visualization, Redundancy, inter-modality interaction BibRef
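The multi-modal multi-head attention in the title above builds on cross-modal attention: queries come from one modality and attend over keys and values from another. A single-head numpy sketch of that core operation (the projection matrices here are random placeholders, not a trained model):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """One head of cross-modal attention.

    q_feats:  (Tq, D) features of the querying modality (e.g. audio).
    kv_feats: (Tk, D) features of the attended modality (e.g. video).
    Returns (Tq, Dv): for each query step, a weighted mix of the other
    modality's value vectors.
    """
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])  # scaled dot-product scores
    return softmax(scores, axis=-1) @ V
```

A multi-head version runs several such maps with separate projections and concatenates the outputs; the cited work applies this across modality pairs within a Transformer encoder.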

Zhang, K.[Ke], Li, Y.Q.[Yuan-Qing], Wang, J.Y.[Jing-Yu], Wang, Z.[Zhen], Li, X.L.[Xue-Long],
Feature Fusion for Multimodal Emotion Recognition Based on Deep Canonical Correlation Analysis,
SPLetters(28), 2021, pp. 1898-1902.
IEEE DOI 2110
Feature extraction, Correlation, Emotion recognition, TV, Visualization, Analytical models, Logic gates, multimodal emotion recognition BibRef
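Deep CCA, as used for fusion in this entry, learns nonlinear maps of two modalities whose outputs are maximally correlated; the classical linear CCA it generalizes fits in a few lines via whitening plus an SVD. A sketch (the regularization constant is an arbitrary choice for numerical stability):

```python
import numpy as np

def linear_cca(X, Y, k=1, reg=1e-6):
    """Classical CCA: find projections of X (n,p) and Y (n,q) whose
    k leading components are maximally correlated.
    Returns (Wx, Wy, corr) with corr the canonical correlations."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, sv, Vt = np.linalg.svd(T)
    return inv_sqrt(Sxx) @ U[:, :k], inv_sqrt(Syy) @ Vt[:k].T, sv[:k]
```

In deep CCA the centered inputs are replaced by the outputs of two networks, and the same correlation objective is used as the training loss; the cited work then fuses the correlated projections for emotion classification.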

Tseng, S.Y.[Shao-Yen], Narayanan, S.[Shrikanth], Georgiou, P.[Panayiotis],
Multimodal Embeddings From Language Models for Emotion Recognition in the Wild,
SPLetters(28), 2021, pp. 608-612.
IEEE DOI 2104
Acoustics, Task analysis, Feature extraction, Convolution, Emotion recognition, Context modeling, Bit error rate BibRef

Huynh, V.T.[Van Thong], Yang, H.J.[Hyung-Jeong], Lee, G.S.[Guee-Sang], Kim, S.H.[Soo-Hyung],
End-to-End Learning for Multimodal Emotion Recognition in Video With Adaptive Loss,
MultMedMag(28), No. 2, April 2021, pp. 59-66.
IEEE DOI 2107
Feature extraction, Convolution, Emotion recognition, Data mining, Face recognition, Visualization, Training data, Affective Computing BibRef

Nguyen, D.[Dung], Nguyen, D.T.[Duc Thanh], Zeng, R.[Rui], Nguyen, T.T.[Thanh Thi], Tran, S.N.[Son N.], Nguyen, T.[Thin], Sridharan, S.[Sridha], Fookes, C.[Clinton],
Deep Auto-Encoders With Sequential Learning for Multimodal Dimensional Emotion Recognition,
MultMed(24), 2022, pp. 1313-1324.
IEEE DOI 2204
Emotion recognition, Feature extraction, Long short term memory, Visualization, Streaming media, Convolution, Auto-encoder, multimodal emotion recognition BibRef

Li, C.Q.[Chi-Qin], Xie, L.[Lun], Pan, H.[Hang],
Branch-Fusion-Net for Multi-Modal Continuous Dimensional Emotion Recognition,
SPLetters(29), 2022, pp. 942-946.
IEEE DOI 2205
Emotion recognition, Feature extraction, Convolution, Fuses, Convolutional neural networks, Data models, Context modeling, feature fusion BibRef

Gao, L.[Lei], Guan, L.[Ling],
A Discriminative Vectorial Framework for Multi-Modal Feature Representation,
MultMed(24), 2022, pp. 1503-1514.
IEEE DOI 2204
Semantics, Correlation, Task analysis, Emotion recognition, Visualization, Transforms, Image recognition, multi-modal hashing BibRef

Yang, D.K.[Ding-Kang], Huang, S.[Shuai], Liu, Y.[Yang], Zhang, L.H.[Li-Hua],
Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition,
SPLetters(29), 2022, pp. 2093-2097.
IEEE DOI 2211
Transformers, Emotion recognition, Convolution, Acoustics, Speech recognition, Stacking, Pipelines, Contextual interaction, speech emotion recognition BibRef
Gomaa, A.[Ahmed], Maier, A.[Andreas], Kosti, R.[Ronak],
Supervised Contrastive Learning for Robust and Efficient Multi-modal Emotion and Sentiment Analysis,
ICPR22(2423-2429).
IEEE DOI 2212
Training, Sentiment analysis, Computational modeling, Predictive models, Transformers, Robustness BibRef

Liu, H.Y.[Hai-Yang], Zhu, Z.[Zihao], Iwamoto, N.[Naoya], Peng, Y.C.[Yi-Chen], Li, Z.Q.[Zheng-Qing], Zhou, Y.[You], Bozkurt, E.[Elif], Zheng, B.[Bo],
BEAT: A Large-Scale Semantic and Emotional Multi-modal Dataset for Conversational Gestures Synthesis,
ECCV22(VII:612-630).
Springer DOI 2211
Dataset, Emotions. BibRef

Chudasama, V.[Vishal], Kar, P.[Purbayan], Gudmalwar, A.[Ashish], Shah, N.[Nirmesh], Wasnik, P.[Pankaj], Onoe, N.[Naoyuki],
M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation,
MULA22(4651-4660).
IEEE DOI 2210
Human computer interaction, Emotion recognition, Visualization, Adaptation models, Benchmark testing, Feature extraction, Robustness BibRef

Patania, S.[Sabrina], d'Amelio, A.[Alessandro], Lanzarotti, R.[Raffaella],
Exploring Fusion Strategies in Deep Multimodal Affect Prediction,
CIAP22(II:730-741).
Springer DOI 2205
BibRef

Lusquino Filho, L.A.D., Oliveira, L.F.R., Carneiro, H.C.C., Guarisa, G.P., Filho, A.L., França, F.M.G., Lima, P.M.V.,
A weightless regression system for predicting multi-modal empathy,
FG20(657-661).
IEEE DOI 2102
Training, Random access memory, Mel frequency cepstral coefficient, Predictive models, Regression WiSARD BibRef

Shao, J., Zhu, J., Wei, Y., Feng, Y., Zhao, X.,
Emotion Recognition by Edge-Weighted Hypergraph Neural Network,
ICIP19(2144-2148).
IEEE DOI 1910
Emotion recognition, edge-weighted hypergraph neural network, multi-modality BibRef

Guo, J., Zhou, S., Wu, J., Wan, J., Zhu, X., Lei, Z., Li, S.Z.,
Multi-modality Network with Visual and Geometrical Information for Micro Emotion Recognition,
FG17(814-819).
IEEE DOI 1707
Computer architecture, Emotion recognition, Face, Face recognition, Feature extraction, Geometry, Visualization BibRef

Wan, J., Escalera, S., Anbarjafari, G., Escalante, H.J., Baro, X., Guyon, I., Madadi, M., Allik, J., Gorbova, J., Lin, C., Xie, Y.,
Results and Analysis of ChaLearn LAP Multi-modal Isolated and Continuous Gesture Recognition, and Real Versus Fake Expressed Emotions Challenges,
EmotionComp17(3189-3197).
IEEE DOI 1802
Emotion recognition, Feature extraction, Gesture recognition, Skeleton, Spatiotemporal phenomena, Training BibRef

Ranganathan, H., Chakraborty, S., Panchanathan, S.,
Multimodal emotion recognition using deep learning architectures,
WACV16(1-9).
IEEE DOI 1606
Databases BibRef

Wei, H.L.[Hao-Lin], Monaghan, D.S.[David S.], O'Connor, N.E.[Noel E.], Scanlon, P.[Patricia],
A New Multi-modal Dataset for Human Affect Analysis,
HBU14(42-51).
Springer DOI 1411
Dataset, Human Affect. BibRef

Chen, S.Z.[Shi-Zhi], Tian, Y.L.[Ying-Li],
Margin-constrained multiple kernel learning based multi-modal fusion for affect recognition,
FG13(1-7).
IEEE DOI 1309
face recognition BibRef

Gajsek, R.[Rok], Štruc, V.[Vitomir], Mihelic, F.[France],
Multi-modal Emotion Recognition Using Canonical Correlations and Acoustic Features,
ICPR10(4133-4136).
IEEE DOI 1008
BibRef

Escalera, S.[Sergio], Puertas, E.[Eloi], Radeva, P.I.[Petia I.], Pujol, O.[Oriol],
Multi-modal laughter recognition in video conversations,
CVPR4HB09(110-115).
IEEE DOI 0906
BibRef

Chapter on Face Recognition, Detection, Tracking, Gesture Recognition, Fingerprints, Biometrics continues in
Emotion Recognition, from Other Than Faces.


Last update: Jan 29, 2023 at 20:54:24