Soleymani, M.[Mohammad],
Lichtenauer, J.,
Pun, T.[Thierry],
Pantic, M.[Maja],
A Multimodal Database for Affect Recognition and Implicit Tagging,
AffCom(3), No. 1, 2012, pp. 42-55.
IEEE DOI
1202
BibRef
Soleymani, M.[Mohammad],
Pantic, M.[Maja],
Pun, T.[Thierry],
Multimodal Emotion Recognition in Response to Videos,
AffCom(3), No. 2, 2012, pp. 211-223.
IEEE DOI
1208
BibRef
McKeown, G.,
Valstar, M.F.,
Cowie, R.,
Pantic, M.,
Schröder, M.,
The SEMAINE Database: Annotated Multimodal Records of Emotionally
Colored Conversations between a Person and a Limited Agent,
AffCom(3), No. 1, 2012, pp. 5-17.
IEEE DOI
1202
BibRef
Wagner, J.,
André, E.,
Lingenfelser, F.,
Kim, J.H.[Jong-Hwa],
Exploring Fusion Methods for Multimodal Emotion Recognition with
Missing Data,
AffCom(2), No. 4, 2011, pp. 206-218.
IEEE DOI
1202
BibRef
Lu, K.[Kun],
Zhang, X.[Xin],
Multimodal Affect Recognition Using Boltzmann Zippers,
IEICE(E96-D), No. 11, November 2013, pp. 2496-2499.
WWW Link.
1311
BibRef
Li, H.B.[Hui-Bin],
Ding, H.X.[Hua-Xiong],
Huang, D.[Di],
Wang, Y.H.[Yun-Hong],
Zhao, X.[Xi],
Morvan, J.M.[Jean-Marie],
Chen, L.M.[Li-Ming],
An efficient multimodal 2D + 3D feature-based approach to automatic
facial expression recognition,
CVIU(140), No. 1, 2015, pp. 83-92.
Elsevier DOI
1509
Facial expression recognition
BibRef
Zhen, Q.K.[Qing-Kai],
Huang, D.[Di],
Wang, Y.H.[Yun-Hong],
Chen, L.M.[Li-Ming],
Muscular Movement Model-Based Automatic 3D/4D Facial Expression
Recognition,
MultMed(18), No. 7, July 2016, pp. 1438-1450.
IEEE DOI
1608
BibRef
Earlier:
Muscular Movement Model Based Automatic 3D Facial Expression
Recognition,
MMMod15(I: 522-533).
Springer DOI
1501
emotion recognition
BibRef
Zhao, X.[Xi],
Dellandréa, E.[Emmanuel],
Chen, L.M.[Li-Ming],
Kakadiaris, I.A.,
Accurate Landmarking of Three-Dimensional Facial Data in the Presence
of Facial Expressions and Occlusions Using a Three-Dimensional
Statistical Facial Feature Model,
SMC-B(41), No. 5, October 2011, pp. 1417-1428.
IEEE DOI
1110
BibRef
Earlier: A1, A2, A3, Only:
A 3D Statistical Facial Feature Model and Its Application on Locating
Facial Landmarks,
ACIVS09(686-697).
Springer DOI
0909
See also unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models, A.
BibRef
Zhao, X.[Xi],
Szeptycki, P.[Przemyslaw],
Dellandréa, E.[Emmanuel],
Chen, L.M.[Li-Ming],
Precise 2.5D facial landmarking via an analysis by synthesis approach,
WACV09(1-7).
IEEE DOI
0912
BibRef
Zhao, X.[Xi],
Huang, D.[Di],
Dellandréa, E.[Emmanuel],
Chen, L.M.[Li-Ming],
Automatic 3D Facial Expression Recognition Based on a Bayesian Belief
Net and a Statistical Facial Feature Model,
ICPR10(3724-3727).
IEEE DOI
1008
BibRef
Fu, H.Z.[Huan-Zhang],
Xiao, Z.Z.[Zhong-Zhe],
Dellandréa, E.[Emmanuel],
Dou, W.B.[Wei-Bei],
Chen, L.M.[Li-Ming],
Image Categorization Using ESFS:
A New Embedded Feature Selection Method Based on SFS,
ACIVS09(288-299).
Springer DOI
0909
Feature selection.
BibRef
Zhalehpour, S.[Sara],
Akhtar, Z.[Zahid],
Erdem, C.E.[Cigdem Eroglu],
Multimodal emotion recognition based on peak frame selection from video,
SIViP(10), No. 5, May 2016, pp. 827-834.
WWW Link.
1608
BibRef
Wen, H.W.[Hong-Wei],
Liu, Y.[Yue],
Rekik, I.[Islem],
Wang, S.P.[Sheng-Pei],
Chen, Z.Q.[Zhi-Qiang],
Zhang, J.S.[Ji-Shui],
Zhang, Y.[Yue],
Peng, Y.[Yun],
He, H.G.[Hui-Guang],
Multi-modal multiple kernel learning for accurate identification of
Tourette syndrome children,
PR(63), No. 1, 2017, pp. 601-611.
Elsevier DOI
1612
Tourette syndrome
BibRef
Tsalamlal, M.Y.,
Amorim, M.,
Martin, J.,
Ammi, M.,
Combining Facial Expression and Touch for Perceiving Emotional
Valence,
AffCom(9), No. 4, October 2018, pp. 437-449.
IEEE DOI
1812
Face recognition, Visualization, Haptic interfaces,
Emotion recognition, Human computer interaction,
multimodality
BibRef
Poria, S.[Soujanya],
Majumder, N.[Navonil],
Hazarika, D.[Devamanyu],
Cambria, E.[Erik],
Gelbukh, A.[Alexander],
Hussain, A.[Amir],
Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up
the Baselines,
IEEE_Int_Sys(33), No. 6, November 2018, pp. 17-25.
IEEE DOI
1902
Role of speaker models, importance of different modalities, generalizability.
Sentiment analysis, Feature extraction, Visualization,
Emotion recognition, Affective computing,
Intelligent systems
BibRef
Lee, J.Y.[Ji-Young],
Kim, S.[Sunok],
Kim, S.R.[Seung-Ryong],
Sohn, K.H.[Kwang-Hoon],
Multi-Modal Recurrent Attention Networks for Facial Expression
Recognition,
IP(29), 2020, pp. 6977-6991.
IEEE DOI
2007
Face recognition, Image color analysis, Videos,
Emotion recognition, Benchmark testing, Databases, Task analysis,
attention mechanism
BibRef
Nguyen, D.[Dung],
Nguyen, K.[Kien],
Sridharan, S.[Sridha],
Dean, D.[David],
Fookes, C.[Clinton],
Deep spatio-temporal feature fusion with compact bilinear pooling for
multimodal emotion recognition,
CVIU(174), 2018, pp. 33-42.
Elsevier DOI
1812
BibRef
Nguyen, D.[Dung],
Nguyen, K.[Kien],
Sridharan, S.[Sridha],
Ghasemi, A.[Afsane],
Dean, D.[David],
Fookes, C.[Clinton],
Deep Spatio-Temporal Features for Multimodal Emotion Recognition,
WACV17(1215-1223).
IEEE DOI
1609
Convolution, Emotion recognition, Face, Feature extraction, Speech,
Speech recognition, Streaming, media
BibRef
Selvaraj, A.[Arivazhagan],
Russel, N.S.[Newlin Shebiah],
Bimodal recognition of affective states with the features inspired
from human visual and auditory perception system,
IJIST(29), No. 4, 2019, pp. 584-598.
DOI Link
1911
emotion recognition, biologically inspired model, wavelet transform
BibRef
Wang, X.S.[Xu-Sheng],
Chen, X.[Xing],
Cao, C.J.[Cong-Jun],
Human emotion recognition by optimally fusing facial expression and
speech feature,
SP:IC(84), 2020, pp. 115831.
Elsevier DOI
2004
Facial expression recognition, Speech emotion recognition,
Bimodal fusion, Feature fusion, RNN
BibRef
Chen, H.F.[Hai-Feng],
Jiang, D.M.[Dong-Mei],
Sahli, H.[Hichem],
Transformer Encoder With Multi-Modal Multi-Head Attention for
Continuous Affect Recognition,
MultMed(23), 2021, pp. 4171-4183.
IEEE DOI
2112
Emotion recognition, Context modeling, Feature extraction,
Correlation, Computational modeling, Visualization, Redundancy,
inter-modality interaction
BibRef
Zhang, K.[Ke],
Li, Y.Q.[Yuan-Qing],
Wang, J.Y.[Jing-Yu],
Wang, Z.[Zhen],
Li, X.L.[Xue-Long],
Feature Fusion for Multimodal Emotion Recognition Based on Deep
Canonical Correlation Analysis,
SPLetters(28), 2021, pp. 1898-1902.
IEEE DOI
2110
Feature extraction, Correlation, Emotion recognition, TV,
Visualization, Analytical models, Logic gates,
multimodal emotion recognition
BibRef
Tseng, S.Y.[Shao-Yen],
Narayanan, S.[Shrikanth],
Georgiou, P.[Panayiotis],
Multimodal Embeddings From Language Models for Emotion Recognition in
the Wild,
SPLetters(28), 2021, pp. 608-612.
IEEE DOI
2104
Acoustics, Task analysis, Feature extraction, Convolution,
Emotion recognition, Context modeling, Bit error rate
BibRef
Huynh, V.T.[Van Thong],
Yang, H.J.[Hyung-Jeong],
Lee, G.S.[Guee-Sang],
Kim, S.H.[Soo-Hyung],
End-to-End Learning for Multimodal Emotion Recognition in Video With
Adaptive Loss,
MultMedMag(28), No. 2, April 2021, pp. 59-66.
IEEE DOI
2107
Feature extraction, Convolution, Emotion recognition, Data mining,
Face recognition, Visualization, Training data, Affective Computing
BibRef
Nguyen, D.[Dung],
Nguyen, D.T.[Duc Thanh],
Zeng, R.[Rui],
Nguyen, T.T.[Thanh Thi],
Tran, S.N.[Son N.],
Nguyen, T.[Thin],
Sridharan, S.[Sridha],
Fookes, C.[Clinton],
Deep Auto-Encoders With Sequential Learning for Multimodal
Dimensional Emotion Recognition,
MultMed(24), 2022, pp. 1313-1324.
IEEE DOI
2204
Emotion recognition, Feature extraction, Long short term memory,
Visualization, Streaming media, Convolution, Auto-encoder,
multimodal emotion recognition
BibRef
Li, C.Q.[Chi-Qin],
Xie, L.[Lun],
Pan, H.[Hang],
Branch-Fusion-Net for Multi-Modal Continuous Dimensional Emotion
Recognition,
SPLetters(29), 2022, pp. 942-946.
IEEE DOI
2205
Emotion recognition, Feature extraction, Convolution, Fuses,
Convolutional neural networks, Data models, Context modeling,
feature fusion
BibRef
Gao, L.[Lei],
Guan, L.[Ling],
A Discriminative Vectorial Framework for Multi-Modal Feature
Representation,
MultMed(24), 2022, pp. 1503-1514.
IEEE DOI
2204
Semantics, Correlation, Task analysis, Emotion recognition,
Visualization, Transforms, Image recognition,
multi-modal hashing
BibRef
Yang, D.K.[Ding-Kang],
Huang, S.[Shuai],
Liu, Y.[Yang],
Zhang, L.H.[Li-Hua],
Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion
Recognition,
SPLetters(29), 2022, pp. 2093-2097.
IEEE DOI
2211
Transformers, Emotion recognition, Convolution, Acoustics,
Speech recognition, Stacking, Pipelines, Contextual interaction,
speech emotion recognition
BibRef
Shukla, A.[Abhinav],
Petridis, S.[Stavros],
Pantic, M.[Maja],
Does Visual Self-Supervision Improve Learning of Speech
Representations for Emotion Recognition?,
AffCom(14), No. 1, January 2023, pp. 406-420.
IEEE DOI
2303
Visualization, Task analysis, Speech recognition,
Emotion recognition, Training, Image reconstruction,
cross-modal self-supervision
BibRef
Hu, J.X.[Jia-Xiong],
Huang, Y.[Yun],
Hu, X.Z.[Xiao-Zhu],
Xu, Y.Q.[Ying-Qing],
The Acoustically Emotion-Aware Conversational Agent With Speech
Emotion Recognition and Empathetic Responses,
AffCom(14), No. 1, January 2023, pp. 17-30.
IEEE DOI
2303
Emotion recognition, Speech recognition, Databases,
Sentiment analysis, Games, Convolutional neural networks,
intelligent agents
BibRef
Candemir, C.[Cemre],
Gonul, A.S.[Ali Saffet],
Selver, M.A.[M. Alper],
Automatic Detection of Emotional Changes Induced by Social Support
Loss Using fMRI,
AffCom(14), No. 1, January 2023, pp. 706-717.
IEEE DOI
2303
Functional magnetic resonance imaging, Task analysis, Games,
Transient analysis, Signal to noise ratio, Shape,
emotional change (EC)
BibRef
Ping, H.Q.[Huan-Qin],
Zhang, D.[Dong],
Zhu, S.[Suyang],
Li, J.H.[Jun-Hui],
Zhou, G.D.[Guo-Dong],
A Benchmark for Hierarchical Emotion Cause Extraction in Spoken
Dialogues,
SPLetters(30), 2023, pp. 558-562.
IEEE DOI
2305
Task analysis, Feature extraction, Emotion recognition,
Bit error rate, Oral communication, Data mining, Preforms, spoken dialogues
BibRef
Bhattacharya, P.[Prasanta],
Gupta, R.K.[Raj Kumar],
Yang, Y.P.[Yin-Ping],
Exploring the Contextual Factors Affecting Multimodal Emotion
Recognition in Videos,
AffCom(14), No. 2, April 2023, pp. 1547-1557.
IEEE DOI
2306
Emotion recognition, Videos, Visualization, Feature extraction,
Physiology, High performance computing, Distance measurement,
technology & devices for affective computing
BibRef
Li, W.[Wei],
Finding Needles in a Haystack: Recognizing Emotions Just From Your
Heart,
AffCom(14), No. 2, April 2023, pp. 1488-1505.
IEEE DOI
2306
Electrocardiography, Feature extraction, Heart,
Emotion recognition, Heart rate variability, Physiology,
finding needles in a haystack
BibRef
Chang, C.M.[Chun-Min],
Chao, G.Y.[Gao-Yi],
Lee, C.C.[Chi-Chun],
Enforcing Semantic Consistency for Cross Corpus Emotion Prediction
Using Adversarial Discrepancy Learning in Emotion,
AffCom(14), No. 2, April 2023, pp. 1098-1109.
IEEE DOI
2306
Databases, Semantics, Emotion recognition, Acoustic distortion,
Training, Nonlinear distortion, Correlation, domain adaptation
BibRef
Benssassi, E.M.[Esma Mansouri],
Ye, J.[Juan],
Investigating Multisensory Integration in Emotion Recognition Through
Bio-Inspired Computational Models,
AffCom(14), No. 2, April 2023, pp. 906-918.
IEEE DOI
2306
Feature extraction, Emotion recognition, Visualization,
Brain modeling, Support vector machines,
graph neural network
BibRef
Fu, C.Z.[Chang-Zeng],
Liu, C.R.[Chao-Ran],
Ishi, C.T.[Carlos Toshinori],
Ishiguro, H.[Hiroshi],
An Adversarial Training Based Speech Emotion Classifier With Isolated
Gaussian Regularization,
AffCom(14), No. 3, July 2023, pp. 2361-2374.
IEEE DOI
2310
BibRef
Su, B.H.[Bo-Hao],
Lee, C.C.[Chi-Chun],
Unsupervised Cross-Corpus Speech Emotion Recognition Using a
Multi-Source Cycle-GAN,
AffCom(14), No. 3, July 2023, pp. 1991-2004.
IEEE DOI
2310
BibRef
Latif, S.[Siddique],
Rana, R.[Rajib],
Khalifa, S.[Sara],
Jurdak, R.[Raja],
Schuller, B.[Björn],
Self Supervised Adversarial Domain Adaptation for Cross-Corpus and
Cross-Language Speech Emotion Recognition,
AffCom(14), No. 3, July 2023, pp. 1912-1926.
IEEE DOI
2310
BibRef
Bai, L.[Lei],
Chang, R.[Rui],
Chen, G.H.[Guang-Hui],
Zhou, Y.[Yu],
Speech-Visual Emotion Recognition via Modal Decomposition Learning,
SPLetters(30), 2023, pp. 1452-1456.
IEEE DOI
2310
BibRef
Shu, Y.[Yezhi],
Yang, P.[Pei],
Liu, N.[Niqi],
Zhang, S.[Shu],
Zhao, G.Z.[Guo-Zhen],
Liu, Y.J.[Yong-Jin],
Emotion Distribution Learning Based on Peripheral Physiological
Signals,
AffCom(14), No. 3, July 2023, pp. 2470-2483.
IEEE DOI
2310
BibRef
Mao, R.[Rui],
Liu, Q.[Qian],
He, K.[Kai],
Li, W.[Wei],
Cambria, E.[Erik],
The Biases of Pre-Trained Language Models: An Empirical Study on
Prompt-Based Sentiment Analysis and Emotion Detection,
AffCom(14), No. 3, July 2023, pp. 1743-1753.
IEEE DOI
2310
BibRef
Chen, X.H.[Xin-Hong],
Li, Q.[Qing],
Li, Z.X.[Zong-Xi],
Xie, H.R.[Hao-Ran],
Wang, F.L.[Fu Lee],
Wang, J.P.[Jian-Ping],
A Reinforcement Learning Based Two-Stage Model for Emotion Cause Pair
Extraction,
AffCom(14), No. 3, July 2023, pp. 1779-1790.
IEEE DOI
2310
BibRef
Hou, M.X.[Mi-Xiao],
Zhang, Z.[Zheng],
Liu, C.[Chang],
Lu, G.M.[Guang-Ming],
Semantic Alignment Network for Multi-Modal Emotion Recognition,
CirSysVideo(33), No. 9, September 2023, pp. 5318-5329.
IEEE DOI Code:
WWW Link.
2310
BibRef
Dai, Y.J.[Yi-Jing],
Li, Y.J.[Ying-Jian],
Chen, D.P.[Dong-Peng],
Li, J.X.[Jin-Xing],
Lu, G.M.[Guang-Ming],
Multimodal Decoupled Distillation Graph Neural Network for Emotion
Recognition in Conversation,
CirSysVideo(34), No. 10, October 2024, pp. 9910-9924.
IEEE DOI Code:
WWW Link.
2411
Emotion recognition, Graph neural networks, Context modeling,
Message passing, Visualization, multimodal fusion
BibRef
Hu, G.[Guimin],
Zhao, Y.[Yi],
Lu, G.M.[Guang-Ming],
Improving Representation With Hierarchical Contrastive Learning for
Emotion-Cause Pair Extraction,
AffCom(15), No. 4, October 2024, pp. 1997-2011.
IEEE DOI
2412
Self-supervised learning, Mutual information, Task analysis,
Data mining, Semantics, Labeling, Transformers,
contrastive predictive coding
BibRef
Deng, H.[Huan],
Yang, Z.G.[Zhen-Guo],
Hao, T.Y.[Tian-Yong],
Li, Q.[Qing],
Liu, W.[Wenyin],
Multimodal Affective Computing With Dense Fusion Transformer for
Inter- and Intra-Modality Interactions,
MultMed(25), 2023, pp. 6575-6587.
IEEE DOI
2311
integrate textual, acoustic, and visual information for multimodal
affective computing
BibRef
Zhu, T.[Tong],
Li, L.[Leida],
Yang, J.F.[Ju-Feng],
Zhao, S.C.[Si-Cheng],
Xiao, X.[Xiao],
Multimodal Emotion Classification With Multi-Level Semantic Reasoning
Network,
MultMed(25), 2023, pp. 6868-6880.
IEEE DOI
2311
BibRef
Wu, Y.C.[Yi-Chiao],
Chiu, L.W.[Li-Wen],
Lai, C.C.[Chun-Chih],
Wu, B.F.[Bing-Fei],
Lin, S.S.J.[Sunny S. J.],
Recognizing, Fast and Slow: Complex Emotion Recognition With Facial
Expression Detection and Remote Physiological Measurement,
AffCom(14), No. 4, October 2023, pp. 3177-3190.
IEEE DOI
2312
BibRef
Gu, Y.[Yu],
Zhang, X.[Xiang],
Yan, H.[Huan],
Huang, J.Y.[Jing-Yang],
Liu, Z.[Zhi],
Dong, M.[Mianxiong],
Ren, F.[Fuji],
WiFE: WiFi and Vision Based Unobtrusive Emotion Recognition via
Gesture and Facial Expression,
AffCom(14), No. 4, October 2023, pp. 2567-2581.
IEEE DOI
2312
BibRef
Li, S.Z.[Shu-Zhen],
Zhang, T.[Tong],
Chen, B.[Bianna],
Chen, C.L.P.[C. L. Philip],
MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal
Affective Analysis,
AffCom(14), No. 4, October 2023, pp. 2796-2809.
IEEE DOI
2312
BibRef
Tellamekala, M.K.[Mani Kumar],
Amiriparian, S.[Shahin],
Schuller, B.W.[Björn W.],
André, E.[Elisabeth],
Giesbrecht, T.[Timo],
Valstar, M.[Michel],
COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for
Uncertainty-Aware Multimodal Emotion Recognition,
PAMI(46), No. 2, February 2024, pp. 805-822.
IEEE DOI
2401
BibRef
Li, J.[Jiang],
Wang, X.P.[Xiao-Ping],
Lv, G.Q.[Guo-Qing],
Zeng, Z.G.[Zhi-Gang],
GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation
Approach for Multimodal Conversational Emotion Recognition,
MultMed(26), 2024, pp. 77-89.
IEEE DOI
2401
BibRef
Palash, M.[Mijanur],
Bhargava, B.[Bharat],
EMERSK: Explainable Multimodal Emotion Recognition With Situational
Knowledge,
MultMed(26), 2024, pp. 2785-2794.
IEEE DOI
2402
Emotion recognition, Face recognition, Visualization,
Feature extraction, Convolutional neural networks, Reliability,
LSTM
BibRef
Mai, S.[Sijie],
Sun, Y.[Ya],
Xiong, A.[Aolin],
Zeng, Y.[Ying],
Hu, H.F.[Hai-Feng],
Multimodal Boosting: Addressing Noisy Modalities and Identifying
Modality Contribution,
MultMed(26), 2024, pp. 3018-3033.
IEEE DOI
2402
Noise measurement, Task analysis, Boosting,
Representation learning, Emotion recognition, Tensors,
multimodal emotion recognition
BibRef
Yang, K.[Kailai],
Zhang, T.[Tianlin],
Ananiadou, S.[Sophia],
Disentangled Variational Autoencoder for Emotion Recognition in
Conversations,
AffCom(15), No. 2, April 2024, pp. 508-518.
IEEE DOI
2406
Task analysis, Emotion recognition, Hidden Markov models,
Context modeling, Decoding, Oral communication,
disentangled representations
BibRef
Quiros, J.D.V.[Jose David Vargas],
Cabrera-Quiros, L.[Laura],
Oertel, C.[Catharine],
Hung, H.[Hayley],
Impact of Annotation Modality on Label Quality and Model Performance
in the Automatic Assessment of Laughter In-the-Wild,
AffCom(15), No. 2, April 2024, pp. 519-534.
IEEE DOI
2406
Annotations, Task analysis, Machine learning, Labeling,
Face recognition, Physiology, Cameras, Action recognition, mingling datasets
BibRef
Bensemann, J.[Joshua],
Cheena, H.[Hasnain],
Huang, D.T.J.[David Tse Jung],
Broadbent, E.[Elizabeth],
Williams, J.[Jonathan],
Wicker, J.[Jörg],
From What You See to What We Smell:
Linking Human Emotions to Bio-Markers in Breath,
AffCom(15), No. 2, April 2024, pp. 465-477.
IEEE DOI
2406
Motion pictures, Feature extraction, Data mining, Monitoring, Visualization,
Reliability, Machine learning, Machine learning, breath analysis
BibRef
Gao, Y.[Yuan],
Wang, L.B.[Long-Biao],
Liu, J.X.[Jia-Xing],
Dang, J.W.[Jian-Wu],
Okada, S.[Shogo],
Adversarial Domain Generalized Transformer for Cross-Corpus Speech
Emotion Recognition,
AffCom(15), No. 2, April 2024, pp. 697-708.
IEEE DOI
2406
Feature extraction, Task analysis, Transformers, Training,
Emotion recognition, Data models, Data mining,
domain generalization
BibRef
Chawla, K.[Kushal],
Clever, R.[Rene],
Ramirez, J.[Jaysa],
Lucas, G.M.[Gale M.],
Gratch, J.[Jonathan],
Towards Emotion-Aware Agents for Improved User Satisfaction and
Partner Perception in Negotiation Dialogues,
AffCom(15), No. 2, April 2024, pp. 433-444.
IEEE DOI
2406
Emotion recognition, Task analysis, Particle measurements,
Atmospheric measurements, Training, Oral communication, Metadata,
user satisfaction
BibRef
Sun, T.[Teng],
Wei, Y.W.[Yin-Wei],
Ni, J.T.[Jun-Tong],
Liu, Z.X.[Zi-Xin],
Song, X.M.[Xue-Meng],
Wang, Y.W.[Yao-Wei],
Nie, L.Q.[Li-Qiang],
Multi-Modal Emotion Recognition via Hierarchical Knowledge
Distillation,
MultMed(26), 2024, pp. 9036-9046.
IEEE DOI
2408
Feature extraction, Emotion recognition, Optimization,
Predictive models, Acoustics, Visualization, Contrastive learning,
multi-modal representation learning
BibRef
Qi, X.Q.[Xing-Qun],
Liu, C.[Chen],
Li, L.[Lincheng],
Hou, J.[Jie],
Xin, H.R.[Hao-Ran],
Yu, X.[Xin],
EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture
Generation,
MultMed(26), 2024, pp. 10420-10430.
IEEE DOI
2411
Feature extraction, Correlation, Avatars, Task analysis,
Solid modeling, Computational modeling, Emotion extraction, temporal smooth
BibRef
Chen, H.J.[Hai-Jiao],
Zhao, H.[Huan],
Zhang, Z.X.[Zi-Xing],
Gradient-Level Differential Privacy Against Attribute Inference
Attack for Speech Emotion Recognition,
SPLetters(31), 2024, pp. 3124-3128.
IEEE DOI
2411
Training, Privacy, Differential privacy, Protection,
Hidden Markov models, Feature extraction, Predictive models,
speech emotion recognition
BibRef
Liu, K.[Ke],
Wei, J.[Jiwei],
Zou, J.[Jie],
Wang, P.[Peng],
Yang, Y.[Yang],
Shen, H.T.[Heng Tao],
Improving Pre-Trained Model-Based Speech Emotion Recognition From a
Low-Level Speech Feature Perspective,
MultMed(26), 2024, pp. 10623-10636.
IEEE DOI
2411
Feature extraction, Mel frequency cepstral coefficient,
Task analysis, Speech recognition, Emotion recognition,
speech emotion recognition
BibRef
Li, J.[Jiang],
Wang, X.P.[Xiao-Ping],
Liu, Y.J.[Ying-Jian],
Zeng, Z.G.[Zhi-Gang],
CFN-ESA: A Cross-Modal Fusion Network With Emotion-Shift Awareness
for Dialogue Emotion Recognition,
AffCom(15), No. 4, October 2024, pp. 1919-1933.
IEEE DOI
2412
Emotion recognition, Context modeling, Task analysis, Data mining,
Visualization, Acoustics, Data models,
emotion shift
BibRef
Liu, R.[Rui],
Zuo, H.L.[Hao-Lin],
Lian, Z.[Zheng],
Schuller, B.W.[Björn W.],
Li, H.Z.[Hai-Zhou],
Contrastive Learning Based Modality-Invariant Feature Acquisition for
Robust Multimodal Emotion Recognition With Missing Modalities,
AffCom(15), No. 4, October 2024, pp. 1856-1873.
IEEE DOI
2412
Emotion recognition, Feature extraction, Training, Image reconstruction,
Self-supervised learning, missing modality imagination
BibRef
Chou, H.C.[Huang-Cheng],
Goncalves, L.[Lucas],
Leem, S.G.[Seong-Gyun],
Salman, A.N.[Ali N.],
Lee, C.C.[Chi-Chun],
Busso, C.[Carlos],
Minority Views Matter: Evaluating Speech Emotion Classifiers With
Human Subjective Annotations by an All-Inclusive Aggregation Rule,
AffCom(16), No. 1, January 2025, pp. 41-55.
IEEE DOI
2503
Task analysis, Training, Annotations, Affective computing,
Emotion recognition, Data models, Vectors,
subjective perception
BibRef
Provost, E.M.[Emily Mower],
Sperry, S.H.[Sarah H.],
Tavernor, J.[James],
Anderau, S.[Steve],
Yocum, A.[Anastasia],
McInnis, M.G.[Melvin G.],
Emotion Recognition in the Real World: Passively Collecting and
Estimating Emotions From Natural Speech Data of Individuals With
Bipolar Disorder,
AffCom(16), No. 1, January 2025, pp. 28-40.
IEEE DOI
2503
Mood, Emotion recognition, Pipelines, Feature extraction,
Speech recognition, Cryptography, Depression,
bipolar disorder
BibRef
Lu, N.N.[Nan-Nan],
Han, Z.Y.[Zhi-Yuan],
Tan, Z.[Zhen],
A Hypergraph Based Contextual Relationship Modeling Method for
Multimodal Emotion Recognition in Conversation,
MultMed(27), 2025, pp. 2243-2255.
IEEE DOI
2505
Emotion recognition, Context modeling, Oral communication, Data models,
Long short term memory, Feature extraction, Semantics, hypergraph convolution
BibRef
Chien, W.S.[Woan-Shiuan],
Upadhyay, S.G.[Shreya G.],
Lin, W.C.[Wei-Cheng],
Busso, C.[Carlos],
Lee, C.C.[Chi-Chun],
Differential Impacts of Monologue and Conversation on Speech Emotion
Recognition,
AffCom(16), No. 2, April 2025, pp. 485-498.
IEEE DOI
2506
Emotion recognition, Acoustics, Databases, Oral communication,
Training, Speech recognition, Affective computing, Data collection,
acoustic variability
BibRef
Liu, Y.[Yang],
Chen, X.[Xin],
Li, Y.W.[Yong-Wei],
Wang, L.B.[Long-Biao],
Zhao, Z.[Zhen],
Multi-Stage Confidence-Guided Diffusion and Emotional Bidirectional
Mamba for Robust Speech Emotion Recognition,
SPLetters(32), 2025, pp. 2184-2188.
IEEE DOI
2506
Noise measurement, Feature extraction, Speech recognition,
Mel frequency cepstral coefficient, Noise, Emotion recognition,
diffusion model
BibRef
Chang, Y.[Yi],
Ren, Z.[Zhao],
Zhang, Z.X.[Zi-Xing],
Jing, X.[Xin],
Qian, K.[Kun],
Shao, X.[Xi],
Hu, B.[Bin],
Schultz, T.[Tanja],
Schuller, B.W.[Björn W.],
STAA-Net: A Sparse and Transferable Adversarial Attack for Speech
Emotion Recognition,
AffCom(16), No. 2, April 2025, pp. 861-874.
IEEE DOI
2506
Perturbation methods, Emotion recognition, Speech processing,
Robustness, Iterative methods, Feature extraction, end-to-end
BibRef
Wang, Y.[Ye],
Zhang, W.[Wei],
Liu, K.[Ke],
Wu, W.[Wei],
Hu, F.[Feng],
Yu, H.[Hong],
Wang, G.Y.[Guo-Yin],
Dynamic Emotion-Dependent Network with Relational Subgraph
Interaction for Multimodal Emotion Recognition,
AffCom(16), No. 2, April 2025, pp. 712-725.
IEEE DOI
2506
Emotion recognition, Context modeling, Computational modeling,
Oral communication, Affective computing, Visualization, relational subgraph
BibRef
Yan, T.H.[Tian-Hao],
Meng, H.[Hao],
Parada-Cabaleiro, E.[Emilia],
Tao, J.H.[Jian-Hua],
Li, T.[Taihao],
Schuller, B.W.[Björn W.],
A Residual Multi-Scale Convolutional Neural Network With Transformers
for Speech Emotion Recognition,
AffCom(16), No. 2, April 2025, pp. 915-932.
IEEE DOI
2506
Feature extraction, Transformers, Emotion recognition,
Speech recognition, Spectrogram, Encoding, Data mining, attention mechanism
BibRef
Shou, Y.T.[Yun-Tao],
Liu, H.[Huan],
Cao, X.[Xiangyong],
Meng, D.Y.[De-Yu],
Dong, B.[Bo],
A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method
for Conversational Emotion Recognition,
AffCom(16), No. 2, April 2025, pp. 1177-1189.
IEEE DOI
2506
Feature extraction, Emotion recognition, Transformers, Vectors,
Semantics, Tensors, Fuses, Computational complexity, Overfitting,
multimodal emotion recognition
BibRef
Fang, Y.B.[Yuan-Bo],
Xing, X.F.[Xiao-Fen],
Chu, Z.J.[Zhao-Jie],
Du, Y.F.[Yi-Feng],
Xu, X.M.[Xiang-Min],
Individual-Aware Attention Modulation for Unseen Speaker Emotion
Recognition,
AffCom(16), No. 2, April 2025, pp. 1205-1218.
IEEE DOI
2506
Modulation, Emotion recognition, Adaptation models,
Feature extraction, Transformers, Long short term memory, Training,
attention modulation
BibRef
Li, H.R.[Heng-Rui],
Zhang, Y.B.[Yong-Bing],
Liu, S.H.[Shao-Hui],
AMH-Net: Adaptive Multi-Band Hybrid-Aware Network for Emotion
Recognition in Speech,
SPLetters(32), 2025, pp. 2344-2348.
IEEE DOI
2507
Feature extraction, Emotion recognition, Convolution,
Speech recognition, Data mining, Attention mechanisms,
depth regulator
BibRef
Fu, Y.K.[Yuan-Kang],
Yang, K.X.[Kai-Xiang],
Sun, S.[Song],
Gong, X.R.[Xin-Rong],
Zeng, H.Q.[Huan-Qiang],
HIA-Net: Hierarchical Interactive Alignment Network for Multimodal
Few-Shot Emotion Recognition,
SPLetters(32), 2025, pp. 2679-2683.
IEEE DOI
2507
Electroencephalography, Feature extraction, Emotion recognition,
Training, Physiology, Brain modeling, Few shot learning, Data mining,
domain adaptation
BibRef
Yin, W.[Wen],
Wang, Y.[Yong],
Duan, G.[Guiduo],
Zhang, D.Y.[Dong-Yang],
Hu, X.[Xin],
Li, Y.F.[Yuan-Fang],
He, T.[Tao],
Knowledge-Aligned Counterfactual-Enhancement Diffusion Perception for
Unsupervised Cross-Domain Visual Emotion Recognition,
CVPR25(3888-3898).
IEEE DOI Code:
WWW Link.
2508
Visualization, Emotion recognition, Adaptation models, Limiting,
Benchmark testing, Diffusion models, visual emotion recognition,
diffusion model
BibRef
Wang, J.[Jing],
Feng, Z.Y.[Zhi-Yang],
Ning, X.J.[Xiao-Jun],
Lin, Y.[Youfang],
Chen, B.D.[Ba-Dong],
Jia, Z.Y.[Zi-Yu],
Two-Stream Dynamic Heterogeneous Graph Recurrent Neural Network for
Multi-Label Multi-Modal Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 2396-2409.
IEEE DOI
2509
Feature extraction, Physiology, Correlation, Emotion recognition,
Brain modeling, Electroencephalography, Robustness,
graph recurrent neural network
BibRef
Cheng, C.[Cheng],
Liu, W.Z.[Wen-Zhe],
Wang, X.[Xinying],
Feng, L.[Lin],
Jia, Z.Y.[Zi-Yu],
DISD-Net: A Dynamic Interactive Network With Self-Distillation for
Cross-Subject Multi-Modal Emotion Recognition,
MultMed(27), 2025, pp. 4643-4655.
IEEE DOI
2509
Brain modeling, Emotion recognition, Feature extraction,
Electroencephalography, Adaptation models, Training,
self-distillation
BibRef
Zorenböhmer, C.[Christina],
Gandhi, S.[Shaily],
Schmidt, S.[Sebastian],
Resch, B.[Bernd],
An Aspect-Based Emotion Analysis Approach on Wildfire-Related
Geo-Social Media Data: A Case Study of the 2020 California Wildfires,
IJGI(14), No. 8, 2025, pp. 301.
DOI Link
2509
BibRef
Ahn, C.S.[Chung-Soo],
Rana, R.[Rajib],
Busso, C.[Carlos],
Rajapakse, J.C.[Jagath C.],
Multitask Transformer for Cross-Corpus Speech Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 1581-1591.
IEEE DOI
2509
Transformers, Emotion recognition, Contrastive learning, Data models,
Speech recognition, Training, Spectrogram, transformers
BibRef
Zhao, Z.[Ziping],
Liu, J.X.[Ji-Xin],
Wang, H.[Haishuai],
Bandara, D.[Danushka],
Tao, J.H.[Jian-Hua],
A Knowledge Distillation-Based Approach to Speech Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 1307-1317.
IEEE DOI
2509
Computational modeling, Training, Speech recognition, Knowledge transfer,
Adaptation models, Computer architecture, transformer
BibRef
Castorena, C.[Carlos],
Cobos, M.[Maximo],
Ferri, F.J.[Francesc J.],
An Incremental Selection Method for Semi-Supervised Speaker
Adaptation in Speech Emotion Recognition,
SPLetters(32), 2025, pp. 2873-2877.
IEEE DOI
2509
Adaptation models, Training, Data models, Speech recognition,
Emotion recognition, Speech processing,
speech emotion recognition (SER)
BibRef
Derington, A.[Anna],
Wierstorf, H.[Hagen],
Özkil, A.[Ali],
Eyben, F.[Florian],
Burkhardt, F.[Felix],
Schuller, B.W.[Björn W.],
Testing Correctness, Fairness, and Robustness of Speech Emotion
Recognition Models,
AffCom(16), No. 3, July 2025, pp. 1929-1941.
IEEE DOI
2509
Robustness, Testing, Predictive models, Data models,
Emotion recognition, Databases, Correlation, Training, Security,
speech emotion recognition
BibRef
Li, Q.F.[Qi-Fei],
Gao, Y.M.[Ying-Ming],
Wen, Y.H.[Yu-Hua],
Zhao, Z.[Ziping],
Li, Y.[Ya],
Schuller, B.W.[Björn W.],
SeeNet: A Soft Emotion Expert and Data Augmentation Method to Enhance
Speech Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 2142-2156.
IEEE DOI
2509
Emotion recognition, Speech recognition, Robustness, Transfer learning,
Data models, Data augmentation, Training, data augmentation
BibRef
Upadhyay, S.G.[Shreya G.],
Martinez-Lucas, L.[Luz],
Katz, W.[William],
Busso, C.[Carlos],
Lee, C.C.[Chi-Chun],
Phonetically-Anchored Domain Adaptation for Cross-Lingual Speech
Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 1631-1645.
IEEE DOI
2509
Phonetics, Linguistics, Emotion recognition, Adaptation models,
Acoustics, Training, Affective computing, Few shot learning,
transfer learning
BibRef
Oh, H.S.[Hyung-Seok],
Lee, S.H.[Sang-Hoon],
Cho, D.H.[Deok-Hyeon],
Lee, S.W.[Seong-Whan],
DurFlex-EVC: Duration-Flexible Emotional Voice Conversion Leveraging
Discrete Representations Without Text Alignment,
AffCom(16), No. 3, July 2025, pp. 1660-1674.
IEEE DOI
2509
Feature extraction, Autoencoders, Context modeling, Transformers,
Acoustics, Speech recognition, Computational modeling, Vocoders,
style disentanglement
BibRef
Shen, Y.L.[Yih-Liang],
Hsieh, P.C.[Pei-Chin],
Chi, T.S.[Tai-Shih],
Spectro-Temporal Modulations Incorporated Two-Stream Robust Speech
Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 1693-1704.
IEEE DOI
2509
Modulation, Filters, Feature extraction, Noise,
Time-frequency analysis, Emotion recognition, Speech recognition,
spectral-temporal modulation
BibRef
Nfissi, A.[Alaa],
Bouachir, W.[Wassim],
Bouguila, N.[Nizar],
Mishara, B.[Brian],
SigWavNet: Learning Multiresolution Signal Wavelet Network for Speech
Emotion Recognition,
AffCom(16), No. 3, July 2025, pp. 1839-1854.
IEEE DOI
2509
Feature extraction, Transforms, Wavelet transforms,
Discrete wavelet transforms, Speech recognition, Bi-GRU
BibRef
Palmero, C.[Cristina],
deVelasco, M.[Mikel],
Hmani, M.A.[Mohamed Amine],
Mtibaa, A.[Aymen],
Ben Letaifa, L.[Leila],
Buch-Cardona, P.[Pau],
Justo, R.[Raquel],
Amorese, T.[Terry],
González-Fraile, E.[Eduardo],
Fernández-Ruanova, B.[Begoña],
Tenorio-Laranga, J.[Jofre],
Johansen, A.T.[Anna Torp],
da Silva, M.R.[Micaela Rodrigues],
Martinussen, L.J.[Liva Jenny],
Korsnes, M.S.[Maria Stylianou],
Cordasco, G.[Gennaro],
Esposito, A.[Anna],
El-Yacoubi, M.A.[Mounim A.],
Petrovska-Delacrétaz, D.[Dijana],
Torres, M.I.[M. Inés],
Escalera, S.[Sergio],
Exploring Emotion Expression Recognition in Older Adults Interacting
With a Virtual Coach,
AffCom(16), No. 3, July 2025, pp. 2303-2320.
IEEE DOI
2509
Emotion recognition, Older adults, Feature extraction,
Speech recognition, Annotations, Aging, Affective computing,
virtual assistants
BibRef
Cho, D.H.[Deok-Hyeon],
Oh, H.S.[Hyung-Seok],
Kim, S.B.[Seung-Bin],
Lee, S.W.[Seong-Whan],
EmoSphere++: Emotion-Controllable Zero-Shot Text-to-Speech Via
Emotion-Adaptive Spherical Vector,
AffCom(16), No. 3, July 2025, pp. 2365-2380.
IEEE DOI
2509
Vectors, Text to speech, Psychology, Complexity theory,
Interpolation, Emotion recognition, Annotations, Wheels, Training,
zero-shot text-to-speech
BibRef
Cao, Y.[YuKun],
Huang, L.[Luobin],
Tang, Y.J.[Yi-Jia],
PeTracker: Poincaré-Based Dual-Strategy Emotion Tracker for Emotion
Recognition in Conversation,
AffCom(16), No. 3, July 2025, pp. 2020-2032.
IEEE DOI
2509
Emotion recognition, Contrastive learning, Semantics,
Context modeling, Oral communication, Feature extraction, Training,
emotion recognition in conversation
BibRef
Shen, S.Y.[Si-Yuan],
Liu, F.[Feng],
Wang, H.Y.[Han-Yang],
Zhou, A.[Aimin],
Towards Speaker-Unknown Emotion Recognition in Conversation via
Progressive Contrastive Deep Supervision,
AffCom(16), No. 3, July 2025, pp. 2261-2273.
IEEE DOI
2509
Emotion recognition, Training, Feature extraction,
Oral communication, Speaker recognition, Affective computing,
deep supervision
BibRef
Yang, Z.Y.[Zhen-Yu],
Zhang, Z.B.[Zhi-Bo],
Cheng, Y.[Yuhu],
Zhang, T.[Tong],
Wang, X.S.[Xue-Song],
Semantic and Emotional Dual Channel for Emotion Recognition in
Conversation,
AffCom(16), No. 3, July 2025, pp. 1885-1902.
IEEE DOI
2509
Emotion recognition, Semantics, Context modeling, Accuracy,
Knowledge engineering, Data mining, Analytical models,
dialogue emotion propagation graph
BibRef
Li, J.[Jiang],
Wang, X.P.[Xiao-Ping],
Zeng, Z.G.[Zhi-Gang],
Tracing Intricate Cues in Dialogue: Joint Graph Structure and
Sentiment Dynamics for Multimodal Emotion Recognition,
PAMI(47), No. 10, October 2025, pp. 8786-8803.
IEEE DOI
2510
Emotion recognition, Oral communication, Data mining, Sentiment analysis,
Context modeling, Affective computing, graph neural networks
BibRef
Shi, Q.[QingHongYa],
Ye, M.[Mang],
Huang, W.K.[Wen-Ke],
Du, B.[Bo],
Zong, X.F.[Xiao-Fen],
Gradient and Structure Consistency in Multimodal Emotion Recognition,
IP(34), 2025, pp. 6180-6191.
IEEE DOI Code:
WWW Link.
2510
Emotion recognition, Noise, Optimization, Visualization, Training,
Correlation, Learning systems, Feature extraction, Data mining,
multimodal learning
BibRef
Li, Z.M.[Zi-Ming],
Liu, Y.X.[Ya-Xin],
Yang, C.P.[Chuan-Peng],
Zhou, Y.[Yan],
Hu, S.L.[Song-Lin],
ROSA: A Robust Self-Adaptive Model for Multimodal Emotion Recognition
With Uncertain Missing Modalities,
MultMed(27), 2025, pp. 6766-6779.
IEEE DOI
2510
Translation, Visualization, Feature extraction, Emotion recognition,
Training, Adaptation models, Vectors, vision-language large model
BibRef
de Mattos, F.L.[Flavia Letícia],
Pellenz, M.E.[Marcelo E.],
de Souza Britto, A.[Alceu],
Time Distributed Multiview Representation for Speech Emotion
Recognition,
CIARP23(I:148-162).
Springer DOI
2312
BibRef
Moroto, Y.[Yuya],
Maeda, K.[Keisuke],
Ogawa, T.[Takahiro],
Haseyama, M.[Miki],
Multi-View Variational Recurrent Neural Network for Human Emotion
Recognition Using Multi-Modal Biological Signals,
ICIP23(2925-2929).
IEEE DOI
2312
BibRef
Low, Y.Y.[Yin-Yin],
Phan, R.C.W.[Raphaël C.W.],
Pal, A.[Arghya],
Chang, X.J.[Xiao-Jun],
USURP: Universal Single-Source Adversarial Perturbations on
Multimodal Emotion Recognition,
ICIP23(2150-2154).
IEEE DOI
2312
BibRef
Srivastava, D.[Dhruv],
Singh, A.K.[Aditya Kumar],
Tapaswi, M.[Makarand],
How You Feelin'? Learning Emotions and Mental States in Movie Scenes,
CVPR23(2517-2528).
IEEE DOI
2309
BibRef
Zhang, S.[Sitao],
Pan, Y.[Yimu],
Wang, J.Z.[James Z.],
Learning Emotion Representations from Verbal and Nonverbal
Communication,
CVPR23(18993-19004).
IEEE DOI
2309
BibRef
Zhang, Z.C.[Zhi-Cheng],
Wang, L.J.[Li-Juan],
Yang, J.F.[Ju-Feng],
Weakly Supervised Video Emotion Detection and Prediction via
Cross-Modal Temporal Erasing Network,
CVPR23(18888-18897).
IEEE DOI
2309
BibRef
Li, Y.[Yong],
Wang, Y.Z.[Yuan-Zhi],
Cui, Z.[Zhen],
Decoupled Multimodal Distilling for Emotion Recognition,
CVPR23(6631-6640).
IEEE DOI
2309
BibRef
Xu, C.[Chao],
Zhu, J.W.[Jun-Wei],
Zhang, J.N.[Jiang-Ning],
Han, Y.[Yue],
Chu, W.Q.[Wen-Qing],
Tai, Y.[Ying],
Wang, C.J.[Cheng-Jie],
Xie, Z.F.[Zhi-Feng],
Liu, Y.[Yong],
High-Fidelity Generalized Emotional Talking Face Generation with
Multi-Modal Emotion Space Learning,
CVPR23(6609-6619).
IEEE DOI
2309
BibRef
Wang, S.[Sen],
Zhang, J.N.[Jiang-Ning],
Tan, X.[Xin],
Xie, Z.F.[Zhi-Feng],
Wang, C.J.[Cheng-Jie],
Ma, L.Z.[Li-Zhuang],
MMoFusion: Multi-modal co-speech motion generation with diffusion
model,
PR(169), 2026, pp. 111774.
Elsevier DOI
2509
Multi-model learning, Human motion synthesis, Diffusion model
BibRef
Palotti, J.[Joao],
Narula, G.[Gagan],
Raheem, L.[Lekan],
Bay, H.[Herbert],
Analysis of Emotion Annotation Strength Improves Generalization in
Speech Emotion Recognition Models,
ABAW23(5829-5837).
IEEE DOI
2309
BibRef
Hayat, H.[Hassan],
Ventura, C.[Carles],
Lapedriza, A.[Agata],
Predicting the Subjective Responses' Emotion in Dialogues with
Multi-task Learning,
IbPRIA23(693-704).
Springer DOI
2307
BibRef
Dong, K.[Ke],
Peng, H.[Hao],
Che, J.[Jie],
Dynamic-static Cross Attentional Feature Fusion Method for Speech
Emotion Recognition,
MMMod23(II: 350-361).
Springer DOI
2304
BibRef
Parameshwara, R.[Ravikiran],
Radwan, I.[Ibrahim],
Subramanian, R.[Ramanathan],
Goecke, R.[Roland],
Examining Subject-Dependent and Subject-Independent Human Affect
Inference from Limited Video Data,
FG23(1-6).
IEEE DOI
2303
Training, Databases, Annotations, Face recognition,
Memory architecture, Estimation, Gesture recognition
BibRef
Gomaa, A.[Ahmed],
Maier, A.[Andreas],
Kosti, R.[Ronak],
Supervised Contrastive Learning for Robust and Efficient Multi-modal
Emotion and Sentiment Analysis,
ICPR22(2423-2429).
IEEE DOI
2212
Training, Sentiment analysis, Computational modeling,
Predictive models, Transformers, Robustness
BibRef
Liu, H.Y.[Hai-Yang],
Zhu, Z.H.[Zi-Hao],
Iwamoto, N.[Naoya],
Peng, Y.C.[Yi-Chen],
Li, Z.Q.[Zheng-Qing],
Zhou, Y.[You],
Bozkurt, E.[Elif],
Zheng, B.[Bo],
BEAT: A Large-Scale Semantic and Emotional Multi-modal Dataset for
Conversational Gestures Synthesis,
ECCV22(VII:612-630).
Springer DOI
2211
Dataset, Emotions.
BibRef
Chudasama, V.[Vishal],
Kar, P.[Purbayan],
Gudmalwar, A.[Ashish],
Shah, N.[Nirmesh],
Wasnik, P.[Pankaj],
Onoe, N.[Naoyuki],
M2FNet: Multi-modal Fusion Network for Emotion Recognition in
Conversation,
MULA22(4651-4660).
IEEE DOI
2210
Human computer interaction, Emotion recognition, Visualization,
Adaptation models, Benchmark testing, Feature extraction, Robustness
BibRef
Patania, S.[Sabrina],
d'Amelio, A.[Alessandro],
Lanzarotti, R.[Raffaella],
Exploring Fusion Strategies in Deep Multimodal Affect Prediction,
CIAP22(II:730-741).
Springer DOI
2205
BibRef
Wei, G.,
Jian, L.,
Mo, S.,
Multimodal (Audio, Facial and Gesture) based Emotion Recognition
challenge,
FG20(908-911).
IEEE DOI
2102
Emotion recognition, Feature extraction, Face recognition,
Data models, Hidden Markov models, Image recognition
BibRef
Lusquino Filho, L.A.D.,
Oliveira, L.F.R.,
Carneiro, H.C.C.,
Guarisa, G.P.,
Filho, A.L.,
França, F.M.G.,
Lima, P.M.V.,
A weightless regression system for predicting multi-modal empathy,
FG20(657-661).
IEEE DOI
2102
Training, Random access memory,
Mel frequency cepstral coefficient, Predictive models,
regression wisard
BibRef
Shao, J.,
Zhu, J.,
Wei, Y.,
Feng, Y.,
Zhao, X.,
Emotion Recognition by Edge-Weighted Hypergraph Neural Network,
ICIP19(2144-2148).
IEEE DOI
1910
Emotion recognition, edge-weighted hypergraph neural network, multi-modality
BibRef
Guo, J.,
Zhou, S.,
Wu, J.,
Wan, J.,
Zhu, X.,
Lei, Z.,
Li, S.Z.,
Multi-modality Network with Visual and Geometrical Information for
Micro Emotion Recognition,
FG17(814-819).
IEEE DOI
1707
Emotion recognition, Face, Face recognition,
Feature extraction, Geometry, Visualization
BibRef
Wan, J.,
Escalera, S.,
Anbarjafari, G.,
Escalante, H.J.,
Baro, X.,
Guyon, I.,
Madadi, M.,
Allik, J.,
Gorbova, J.,
Lin, C.,
Xie, Y.,
Results and Analysis of ChaLearn LAP Multi-modal Isolated and
Continuous Gesture Recognition, and Real Versus Fake Expressed
Emotions Challenges,
EmotionComp17(3189-3197).
IEEE DOI
1802
Emotion recognition, Feature extraction, Gesture recognition,
Skeleton, Spatiotemporal phenomena, Training
BibRef
Ranganathan, H.,
Chakraborty, S.,
Panchanathan, S.,
Multimodal emotion recognition using deep learning architectures,
WACV16(1-9).
IEEE DOI
1606
Databases
BibRef
Wei, H.L.[Hao-Lin],
Monaghan, D.S.[David S.],
O'Connor, N.E.[Noel E.],
Scanlon, P.[Patricia],
A New Multi-modal Dataset for Human Affect Analysis,
HBU14(42-51).
Springer DOI
1411
Dataset, Human Affect.
BibRef
Chen, S.Z.[Shi-Zhi],
Tian, Y.L.[Ying-Li],
Margin-constrained multiple kernel learning based multi-modal fusion
for affect recognition,
FG13(1-7).
IEEE DOI
1309
face recognition
BibRef
Gajsek, R.[Rok],
Štruc, V.[Vitomir],
Mihelic, F.[France],
Multi-modal Emotion Recognition Using Canonical Correlations and
Acoustic Features,
ICPR10(4133-4136).
IEEE DOI
1008
BibRef
Escalera, S.[Sergio],
Puertas, E.[Eloi],
Radeva, P.I.[Petia I.],
Pujol, O.[Oriol],
Multi-modal laughter recognition in video conversations,
CVPR4HB09(110-115).
IEEE DOI
0906
BibRef
Meghjani, M.[Malika],
Ferrie, F.P.[Frank P.],
Dudek, G.[Gregory],
Bimodal information analysis for emotion recognition,
WACV09(1-6).
IEEE DOI
0912
BibRef
Cohn, J.F., and
Katz, G.S.,
Bimodal Expression of Emotion by Face and Voice,
MMC98(xx-yy),
Workshop on Face/Gesture Recognition and Their Applications.
PDF File.
9800
BibRef
Chen, L.S.,
Huang, T.S.,
Miyasato, T.,
Nakatsu, R.,
Multimodal Human Emotion/Expression Recognition,
AFGR98(366-371).
IEEE DOI
9800
BibRef
Chapter on Face Recognition, Human Pose, Detection, Tracking, Gesture Recognition, Fingerprints, Biometrics continues in
Emotion Recognition, from Other Than Faces.