Zadeh, A.,
Zellers, R.,
Pincus, E.,
Morency, L.P.[Louis-Philippe],
Multimodal Sentiment Intensity Analysis in Videos:
Facial Gestures and Verbal Messages,
IEEE_Int_Sys(31), No. 6, November 2016, pp. 82-88.
IEEE DOI
1612
Feature extraction
BibRef
Soleymani, M.[Mohammad],
Garcia, D.[David],
Jou, B.[Brendan],
Schuller, B.[Björn],
Chang, S.F.[Shih-Fu],
Pantic, M.[Maja],
A survey of multimodal sentiment analysis,
IVC(65), No. 1, 2017, pp. 3-14.
Elsevier DOI
1709
Sentiment
BibRef
Zhang, X.,
Gao, X.,
Lu, W.,
He, L.,
A Gated Peripheral-Foveal Convolutional Neural Network for Unified
Image Aesthetic Prediction,
MultMed(21), No. 11, November 2019, pp. 2815-2826.
IEEE DOI
1911
Feature extraction, Logic gates, Visualization,
Convolutional neural networks, Task analysis, Deep learning
BibRef
Zhang, X.,
Gao, X.,
Lu, W.,
He, L.,
Li, J.,
Beyond Vision: A Multimodal Recurrent Attention Convolutional Neural
Network for Unified Image Aesthetic Prediction Tasks,
MultMed(23), 2021, pp. 611-623.
IEEE DOI
2102
convolutional neural nets, feature extraction,
image classification, image enhancement, image fusion,
deep learning
BibRef
Huang, S.,
Cornelis, B.,
Devolder, B.,
Martens, M.,
Pizurica, A.,
Multimodal Target Detection by Sparse Coding:
Application to Paint Loss Detection in Paintings,
IP(29), 2020, pp. 7681-7696.
IEEE DOI
2007
Sparse representation, target detection, paint loss, kernel,
multiple imaging modalities
BibRef
Kuang, Q.,
Jin, X.,
Zhao, Q.,
Zhou, B.,
Deep Multimodality Learning for UAV Video Aesthetic Quality
Assessment,
MultMed(22), No. 10, October 2020, pp. 2623-2634.
IEEE DOI
2009
Quality assessment, Cameras, Feature extraction, Photography, Drones,
Streaming media, Aesthetic quality assessment,
deep multimodality learning
BibRef
Wen, H.L.[Huang-Lu],
You, S.D.[Shao-Di],
Fu, Y.[Ying],
Cross-modal context-gated convolution for multi-modal sentiment
analysis,
PRL(146), 2021, pp. 252-259.
Elsevier DOI
2105
Artificial neural networks,
Affective behavior, Multi-modal temporal sequences
BibRef
Wen, H.L.[Huang-Lu],
You, S.D.[Shao-Di],
Fu, Y.[Ying],
Cross-modal dynamic convolution for multi-modal emotion recognition,
JVCIR(78), 2021, pp. 103178.
Elsevier DOI
2107
Artificial neural networks,
Affective behavior, Multi-modal temporal sequences
BibRef
He, J.X.[Jia-Xuan],
Mai, S.[Sijie],
Hu, H.F.[Hai-Feng],
A Unimodal Reinforced Transformer With Time Squeeze Fusion for
Multimodal Sentiment Analysis,
SPLetters(28), 2021, pp. 992-996.
IEEE DOI
2106
Sparse matrices, Sentiment analysis, Fuses, Convolution, Kernel,
Analytical models, Visualization, Time squeeze fusion,
multimodal sentiment analysis
BibRef
Peng, W.[Wei],
Hong, X.P.[Xiao-Peng],
Zhao, G.Y.[Guo-Ying],
Adaptive Modality Distillation for Separable Multimodal Sentiment
Analysis,
IEEE_Int_Sys(36), No. 3, May 2021, pp. 82-89.
IEEE DOI
2107
Tensors, Sentiment analysis, Task analysis, Intelligent systems,
Computational modeling, Affective computing, Training data
BibRef
Wang, L.J.[Li-Juan],
Guo, W.[Wenya],
Yao, X.X.[Xing-Xu],
Zhang, Y.X.[Yu-Xiang],
Yang, J.F.[Ju-Feng],
Multimodal Event-Aware Network for Sentiment Analysis in Tourism,
MultMedMag(28), No. 2, April 2021, pp. 49-58.
IEEE DOI
2107
Feature extraction, Blogs, Sentiment analysis, Visualization,
Task analysis, Semantics, Delays
BibRef
Xu, N.[Nan],
Mao, W.J.[Wen-Ji],
Wei, P.H.[Peng-Hui],
Zeng, D.[Daniel],
MDA: Multimodal Data Augmentation Framework for Boosting Performance
on Sentiment/Emotion Classification Tasks,
IEEE_Int_Sys(36), No. 6, November 2021, pp. 3-12.
IEEE DOI
2112
Task analysis, Data analysis, Boosting, Social networking (online),
Annotations, Sentiment analysis, Automation, Data augmentation,
multimodal classification
BibRef
He, J.X.[Jia-Xuan],
Hu, H.F.[Hai-Feng],
MF-BERT: Multimodal Fusion in Pre-Trained BERT for Sentiment Analysis,
SPLetters(29), 2022, pp. 454-458.
IEEE DOI
2202
Bit error rate, Visualization, Acoustics, Sentiment analysis,
Analytical models, Fuses, Convolution, Internal updating,
multimodal fusion BERT
BibRef
Chen, R.[Rongfei],
Zhou, W.J.[Wen-Ju],
Li, Y.[Yang],
Zhou, H.Y.[Hui-Yu],
Video-Based Cross-Modal Auxiliary Network for Multimodal Sentiment
Analysis,
CirSysVideo(32), No. 12, December 2022, pp. 8703-8716.
IEEE DOI
2212
Feature extraction, Acoustics, Emotion recognition,
Sentiment analysis, Spectrogram, Visualization, Speech recognition
BibRef
Wang, D.[Di],
Guo, X.T.[Xu-Tong],
Tian, Y.M.[Yu-Min],
Liu, J.H.[Jin-Hui],
He, L.H.[Li-Huo],
Luo, X.M.[Xue-Mei],
TETFN: A text enhanced transformer fusion network for multimodal
sentiment analysis,
PR(136), 2023, pp. 109259.
Elsevier DOI
2301
Multimodal sentiment analysis, Transformer,
Text-oriented pairwise cross-modal mappings
BibRef
Tang, J.J.[Jia-Jia],
Liu, D.J.[Dong-Jun],
Jin, X.Y.[Xuan-Yu],
Peng, Y.[Yong],
Zhao, Q.B.[Qi-Bin],
Ding, Y.[Yu],
Kong, W.Z.[Wan-Zeng],
BAFN: Bi-Direction Attention Based Fusion Network for Multimodal
Sentiment Analysis,
CirSysVideo(33), No. 4, April 2023, pp. 1966-1978.
IEEE DOI
2304
Bidirectional control, Sentiment analysis,
Termination of employment, Task analysis, Routing, Redundancy,
attention mechanism
BibRef
Dudzik, B.[Bernd],
Hung, H.[Hayley],
Neerincx, M.[Mark],
Broekens, J.[Joost],
Collecting Mementos: A Multimodal Dataset for Context-Sensitive
Modeling of Affect and Memory Processing in Responses to Videos,
AffCom(14), No. 2, April 2023, pp. 1249-1266.
IEEE DOI
2306
Videos, Media, Computational modeling, Films, Particle measurements,
Mood, Atmospheric measurements, Multimodal dataset,
personalization
BibRef
Das, R.K.[Ring-Ki],
Singh, T.D.[Thoudam Doren],
Multimodal Sentiment Analysis: A Survey of Methods, Trends, and
Challenges,
Surveys(55), No. 13s, July 2023, pp. xx-yy.
DOI Link
2309
Survey, Sentiment. audio sentiment analysis, image sentiment analysis,
text sentiment analysis, Multimodal sentiment analysis, transfer learning
BibRef
Zhu, T.[Tong],
Li, L.[Leida],
Yang, J.F.[Ju-Feng],
Zhao, S.C.[Si-Cheng],
Liu, H.T.[Han-Tao],
Qian, J.S.[Jian-Sheng],
Multimodal Sentiment Analysis With Image-Text Interaction Network,
MultMed(25), 2023, pp. 3375-3385.
IEEE DOI
2309
BibRef
Yu, J.F.[Jian-Fei],
Chen, K.[Kai],
Xia, R.[Rui],
Hierarchical Interactive Multimodal Transformer for Aspect-Based
Multimodal Sentiment Analysis,
AffCom(14), No. 3, July 2023, pp. 1966-1978.
IEEE DOI
2310
BibRef
Mai, S.[Sijie],
Zeng, Y.[Ying],
Zheng, S.J.[Shuang-Jia],
Hu, H.F.[Hai-Feng],
Hybrid Contrastive Learning of Tri-Modal Representation for
Multimodal Sentiment Analysis,
AffCom(14), No. 3, July 2023, pp. 2276-2289.
IEEE DOI
2310
BibRef
Lin, R.H.[Rong-Hao],
Hu, H.F.[Hai-Feng],
Dynamically Shifting Multimodal Representations via Hybrid-Modal
Attention for Multimodal Sentiment Analysis,
MultMed(26), 2024, pp. 2740-2755.
IEEE DOI
2402
Transformers, Acoustics, Visualization, Feature extraction,
Task analysis, Logic gates, Sentiment analysis,
hybrid-modal attention
BibRef
Katada, S.[Shun],
Okada, S.[Shogo],
Komatani, K.[Kazunori],
Effects of Physiological Signals in Different Types of Multimodal
Sentiment Estimation,
AffCom(14), No. 3, July 2023, pp. 2443-2457.
IEEE DOI
2310
BibRef
Zeng, J.D.[Jian-Dian],
Zhou, J.T.[Jian-Tao],
Liu, T.Y.[Tian-Yi],
Robust Multimodal Sentiment Analysis via Tag Encoding of Uncertain
Missing Modalities,
MultMed(25), 2023, pp. 6301-6314.
IEEE DOI
2311
BibRef
Wang, D.[Di],
Liu, S.[Shuai],
Wang, Q.[Quan],
Tian, Y.M.[Yu-Min],
He, L.[Lihuo],
Gao, X.B.[Xin-Bo],
Cross-Modal Enhancement Network for Multimodal Sentiment Analysis,
MultMed(25), 2023, pp. 4909-4921.
IEEE DOI
2311
BibRef
Ye, M.[Mang],
Shi, Q.H.Y.[Qing-Hong-Ya],
Su, K.[Kehua],
Du, B.[Bo],
Cross-Modality Pyramid Alignment for Visual Intention Understanding,
IP(32), 2023, pp. 2190-2201.
IEEE DOI
2305
Exploring the potential and underlying meaning expressed in images.
Visualization, Task analysis, Semantics, Feature extraction,
Training, Image segmentation, Image color analysis,
hierarchical relation
BibRef
Liu, H.[Huan],
Li, K.[Ke],
Fan, J.P.[Jian-Ping],
Yan, C.X.[Cai-Xia],
Qin, T.[Tao],
Zheng, Q.H.[Qing-Hua],
Social Image-Text Sentiment Classification With Cross-Modal
Consistency and Knowledge Distillation,
AffCom(14), No. 4, October 2023, pp. 3332-3344.
IEEE DOI
2312
BibRef
Li, M.C.[Ming-Cheng],
Yang, D.K.[Ding-Kang],
Zhang, L.H.[Li-Hua],
Towards Robust Multimodal Sentiment Analysis Under Uncertain Signal
Missing,
SPLetters(30), 2023, pp. 1497-1501.
IEEE DOI
2311
BibRef
Zhao, X.B.[Xian-Bing],
Chen, Y.X.[Yin-Xin],
Liu, S.[Sicen],
Tang, B.[Buzhou],
Shared-Private Memory Networks For Multimodal Sentiment Analysis,
AffCom(14), No. 4, October 2023, pp. 2889-2900.
IEEE DOI Code:
WWW Link.
2312
BibRef
Cheng, H.J.[Hong-Ju],
Yang, Z.Z.[Zi-Zhen],
Zhang, X.Q.[Xiao-Qi],
Yang, Y.[Yang],
Multimodal Sentiment Analysis Based on Attentional Temporal
Convolutional Network and Multi-Layer Feature Fusion,
AffCom(14), No. 4, October 2023, pp. 3149-3163.
IEEE DOI
2312
BibRef
He, L.J.[Li-Jun],
Wang, Z.Q.[Zi-Qing],
Wang, L.[Liejun],
Li, F.[Fan],
Multimodal Mutual Attention-Based Sentiment Analysis Framework
Adapted to Complicated Contexts,
CirSysVideo(33), No. 12, December 2023, pp. 7131-7143.
IEEE DOI Code:
WWW Link.
2312
BibRef
Yuan, Z.Q.[Zi-Qi],
Liu, Y.[Yihe],
Xu, H.[Hua],
Gao, K.[Kai],
Noise Imitation Based Adversarial Training for Robust Multimodal
Sentiment Analysis,
MultMed(26), 2024, pp. 529-539.
IEEE DOI
2402
Training, Noise measurement, Visualization, Sentiment analysis,
Robustness, Feature extraction, Data models,
semantic reconstruction
BibRef
Wang, D.[Di],
Tian, C.[Changning],
Liang, X.[Xiao],
Zhao, L.[Lin],
He, L.[Lihuo],
Wang, Q.[Quan],
Dual-Perspective Fusion Network for Aspect-Based Multimodal Sentiment
Analysis,
MultMed(26), 2024, pp. 4028-4038.
IEEE DOI
2402
Sentiment analysis, Task analysis, Data mining, Semantics,
Syntactics, Feature extraction, Visualization, graph neural network
BibRef
Qian, F.[Fan],
Han, J.Q.[Ji-Qing],
Guan, Y.D.[Ya-Dong],
Song, W.J.[Wen-Jie],
He, Y.J.[Yong-Jun],
Capturing High-Level Semantic Correlations via Graph for Multimodal
Sentiment Analysis,
SPLetters(31), 2024, pp. 561-565.
IEEE DOI
2402
Semantics, Routing, Correlation, Feature extraction, Visualization,
Self-supervised learning, Videos, Multimodal sentiment analysis,
high-level semantic correlations
BibRef
Sun, L.[Licai],
Lian, Z.[Zheng],
Liu, B.[Bin],
Tao, J.H.[Jian-Hua],
Efficient Multimodal Transformer With Dual-Level Feature Restoration
for Robust Multimodal Sentiment Analysis,
AffCom(15), No. 1, January 2024, pp. 309-325.
IEEE DOI
2403
Transformers, Robustness, Semantics, Data models,
Computational modeling, Videos, Training
BibRef
Huan, R.H.[Ruo-Hong],
Zhong, G.W.[Guo-Wei],
Chen, P.[Peng],
Liang, R.H.[Rong-Hua],
UniMF: A Unified Multimodal Framework for Multimodal Sentiment
Analysis in Missing Modalities and Unaligned Multimodal Sequences,
MultMed(26), 2024, pp. 5753-5768.
IEEE DOI
2404
Transformers, Sentiment analysis, Fuses, Training, Task analysis,
Transformer cores, Semantics, Attention mechanism,
unaligned multimodal sequences
BibRef
Song, L.Y.[Ling-Yun],
Chen, S.[Siyu],
Meng, Z.Y.[Zi-Yang],
Sun, M.X.[Ming-Xuan],
Shang, X.[Xuequn],
FMSA-SC: A Fine-Grained Multimodal Sentiment Analysis Dataset Based
on Stock Comment Videos,
MultMed(26), 2024, pp. 7294-7306.
IEEE DOI
2405
Videos, Stock markets, Annotations, Task analysis, Acoustics,
Visualization, Web sites, Multimedia databases, neural networks,
video signal processing
BibRef
Yuan, Z.Q.[Zi-Qi],
Zhang, B.Z.[Bao-Zheng],
Xu, H.[Hua],
Gao, K.[Kai],
Meta Noise Adaption Framework for Multimodal Sentiment Analysis With
Feature Noise,
MultMed(26), 2024, pp. 7265-7277.
IEEE DOI
2405
Noise measurement, Task analysis, Training, Metalearning,
Sentiment analysis, Adaptation models, Visualization,
robust multimodal sentiment analysis
BibRef
Singh, U.[Upendra],
Abhishek, K.[Kumar],
Azad, H.K.[Hiteshwar Kumar],
A Survey of Cutting-edge Multimodal Sentiment Analysis,
Surveys(56), No. 9, April 2024, pp. 227.
DOI Link
2405
Survey, Sentiment. Multimodal sentiment analysis, sentiment classifier,
machine learning, emotion detection, modelling techniques
BibRef
Lin, R.H.[Rong-Hao],
Hu, H.F.[Hai-Feng],
Multi-Task Momentum Distillation for Multimodal Sentiment Analysis,
AffCom(15), No. 2, April 2024, pp. 549-565.
IEEE DOI
2406
Task analysis, Multitasking, Knowledge engineering,
Sentiment analysis, Feature extraction, Visualization, Acoustics,
multimodal sentiment analysis
BibRef
Ji, X.Y.[Xiao-Yue],
Dong, Z.[Zhekang],
Zhou, G.[Guangdong],
Lai, C.S.[Chun Sing],
Qi, D.L.[Dong-Lian],
MLG-NCS: Multimodal Local-Global Neuromorphic Computing System for
Affective Video Content Analysis,
SMCS(54), No. 8, August 2024, pp. 5137-5149.
IEEE DOI
2408
Memristors, Iron, Electrodes, Training, Sputtering,
Neuromorphic engineering, Low latency communication,
neuromorphic computing system (NCS)
BibRef
Huang, J.[Jian],
Ji, Y.L.[Yan-Li],
Qin, Z.[Zhen],
Yang, Y.[Yang],
Shen, H.T.[Heng Tao],
Dominant SIngle-Modal SUpplementary Fusion (SIMSUF) for Multimodal
Sentiment Analysis,
MultMed(26), 2024, pp. 8383-8394.
IEEE DOI
2408
Transformers, Sentiment analysis, Semantics, Task analysis, Fuses,
Feature extraction, Representation learning, Multimodal fusion, transformer
BibRef
Xie, Z.Y.[Zhu-Yang],
Yang, Y.[Yan],
Wang, J.[Jie],
Liu, X.R.[Xiao-Rong],
Li, X.F.[Xiao-Fan],
Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal
Sentiment Space,
CirSysVideo(34), No. 8, August 2024, pp. 7657-7670.
IEEE DOI
2408
Uncertainty, Sentiment analysis, Estimation, Feature extraction,
Reliability, Task analysis, Data models, ordinal regression
BibRef
Li, M.[Meng],
Zhu, Z.F.[Zhen-Fang],
Li, K.[Kefeng],
Zhou, L.H.[Li-Hua],
Zhao, Z.[Zhen],
Pei, H.L.[Hong-Li],
Joint training strategy of unimodal and multimodal for multimodal
sentiment analysis,
IVC(149), 2024, pp. 105172.
Elsevier DOI
2408
Multimodal sentiment analysis, Multimodal fusion, Multimodal interaction
BibRef
Wang, Z.J.[Zi-Jun],
Jiang, N.C.[Nai-Cheng],
Chao, X.Y.[Xin-Yue],
Sun, B.[Bin],
Multi-task disagreement-reducing multimodal sentiment fusion network,
IVC(149), 2024, pp. 105158.
Elsevier DOI
2408
Multimodal sentiment analysis, Multimodal fusion,
Sentiment disagreement, Multi-task learning
BibRef
Liu, Z.J.[Zi-Jun],
Cai, L.[Li],
Yang, W.J.[Wen-Jie],
Liu, J.H.[Jun-Hui],
Sentiment analysis based on text information enhancement and
multimodal feature fusion,
PR(156), 2024, pp. 110847.
Elsevier DOI
2408
Sentiment analysis, Text information enhancement,
Multimodal data fusion, Cross-modal attention mechanism, Sentiment lexicons
BibRef
Wang, Q.L.[Qian-Long],
Xu, H.L.[Hong-Ling],
Wen, Z.Y.[Zhi-Yuan],
Liang, B.[Bin],
Yang, M.[Min],
Qin, B.[Bing],
Xu, R.F.[Rui-Feng],
Image-to-Text Conversion and Aspect-Oriented Filtration for
Multimodal Aspect-Based Sentiment Analysis,
AffCom(15), No. 3, July 2024, pp. 1264-1278.
IEEE DOI
2409
Sentiment analysis, Visualization, Task analysis,
Social networking (online), Filtration, Analytical models,
pre-trained language model
BibRef
Sharma, S.[Shivam],
Ramaneswaran, S.,
Akhtar, M.S.[Md. Shad],
Chakraborty, T.[Tanmoy],
Emotion-Aware Multimodal Fusion for Meme Emotion Detection,
AffCom(15), No. 3, July 2024, pp. 1800-1811.
IEEE DOI
2409
Task analysis, Emotion recognition, Social networking (online),
Visualization, Mood, Affective computing, Internet, Emotion analysis,
social media
BibRef
Zhang, B.Z.[Bao-Zheng],
Yuan, Z.Q.[Zi-Qi],
Xu, H.[Hua],
Gao, K.[Kai],
Crossmodal Translation Based Meta Weight Adaption for Robust
Image-Text Sentiment Analysis,
MultMed(26), 2024, pp. 9949-9961.
IEEE DOI
2410
Robustness, Task analysis, Sentiment analysis, Semantics,
Metalearning, Representation learning,
robustness and reliability
BibRef
Xie, S.F.[Shu-Fan],
Chen, Q.H.[Qiao-Hong],
Fang, X.[Xian],
Sun, Q.[Qi],
Global information regulation network for multimodal sentiment
analysis,
IVC(151), 2024, pp. 105297.
Elsevier DOI
2411
Multimodal sentiment analysis, Gate mechanism,
Unsupervised learning, Contrastive learning
BibRef
Liu, W.C.[Wu-Chao],
Li, W.G.[Wen-Gen],
Ruan, Y.P.[Yu-Ping],
Shu, Y.[Yulou],
Chen, J.T.[Jun-Tao],
Li, Y.[Yina],
Yu, C.[Caili],
Zhang, Y.C.[Yi-Chao],
Guan, J.H.[Ji-Hong],
Zhou, S.[Shuigeng],
Weakly Correlated Multimodal Sentiment Analysis:
New Dataset and Topic-Oriented Model,
AffCom(15), No. 4, October 2024, pp. 2070-2082.
IEEE DOI
2412
Sentiment analysis, Social networking (online), Reviews,
Analytical models, Correlation, Visualization, Blogs,
weak correlation
BibRef
Zhang, T.[Ting],
Song, B.[Bin],
Zhang, Z.Y.[Zhi-Yong],
Zhang, Y.J.[Ya-Juan],
Multimodal sentiment analysis based on multi-stage graph fusion
networks under random missing modality conditions,
IET-IPR(19), No. 1, 2025, pp. e13310.
DOI Link
2501
missing modality, multimodal fusion,
multimodal sentiment analysis, transformer
BibRef
Zou, W.[Wang],
Sun, X.[Xia],
Lu, Q.[Qiang],
Wang, X.[Xuxin],
Feng, J.[Jun],
A vision and language hierarchical alignment for multimodal
aspect-based sentiment analysis,
PR(162), 2025, pp. 111369.
Elsevier DOI
2503
Multimodal aspect-based sentiment analysis,
Visual scene graph, Text dependency graph, Dynamic alignment matrix
BibRef
Fan, C.[Cunhang],
Zhu, K.[Kang],
Tao, J.H.[Jian-Hua],
Yi, G.F.[Guo-Feng],
Xue, J.[Jun],
Lv, Z.[Zhao],
Multi-Level Contrastive Learning: Hierarchical Alleviation of
Heterogeneity in Multimodal Sentiment Analysis,
AffCom(16), No. 1, January 2025, pp. 207-222.
IEEE DOI
2503
Feature extraction, Contrastive learning, Semantics, Vectors,
Convolution, TV, Sentiment analysis, Multimodal sentiment analysis,
heterogeneity
BibRef
Li, M.[Meng],
Zhu, Z.F.[Zhen-Fang],
Li, K.[Kefeng],
Pei, H.L.[Hong-Li],
Diversity and Balance: Multimodal Sentiment Analysis Using
Multimodal-Prefixed and Cross-Modal Attention,
AffCom(16), No. 1, January 2025, pp. 250-263.
IEEE DOI
2503
Data models, Sentiment analysis, Visualization, Task analysis,
Analytical models, Acoustics, Transformers,
cross-modal attention
BibRef
Wang, Q.L.[Qian-Long],
Wen, Z.Y.[Zhi-Yuan],
Ding, K.Y.[Ke-Yang],
Liang, B.[Bin],
Xu, R.F.[Rui-Feng],
Cross-Domain Sentiment Analysis via Disentangled Representation and
Prototypical Learning,
AffCom(16), No. 1, January 2025, pp. 264-276.
IEEE DOI
2503
Sentiment analysis, Reviews, Training, Task analysis,
Feature extraction, Affective computing, Semantics,
prototypical learning
BibRef
Zhang, Q.G.[Qion-Gan],
Shi, L.[Lei],
Liu, P.[Peiyu],
Zhu, Z.F.[Zhen-Fang],
Xu, L.C.[Lian-Cheng],
IMCN: Identifying Modal Contribution Network for Multimodal Sentiment
Analysis,
ICPR22(4729-4735)
IEEE DOI
2212
Sentiment analysis, Visualization, Analytical models,
Noise reduction, Benchmark testing, Acoustics, modality contribution
BibRef
Zhong, Q.[Qi],
Wang, Q.[Qian],
Liu, J.[Ji],
Combining Knowledge and Multi-modal Fusion for Meme Classification,
MMMod22(I:599-611).
Springer DOI
2203
Sentiment and offensive.
BibRef
Wang, B.Q.[Bin-Qiang],
Dong, G.[Gang],
Zhao, Y.Q.[Ya-Qian],
Li, R.G.[Ren-Gang],
Cao, Q.C.[Qi-Chun],
Chao, Y.Y.[Yin-Yin],
Non-Uniform Attention Network for Multi-modal Sentiment Analysis,
MMMod22(I:612-623).
Springer DOI
2203
BibRef
Patro, B.N.[Badri N.],
Lunayach, M.[Mayank],
Srivastava, D.[Deepankar],
Sarvesh, S.[Sarvesh],
Singh, H.[Hunar],
Namboodiri, V.P.[Vinay P.],
Multimodal Humor Dataset: Predicting Laughter tracks for Sitcoms,
WACV21(576-585)
IEEE DOI
WWW Link.
2106
Dataset, Humor. Annotations, Semantics, Bit error rate,
Manuals, Task analysis
BibRef
Tashu, T.M.[Tsegaye Misikir],
Horváth, T.[Tomáš],
Attention-based Multi-Modal Emotion Recognition from Art,
FAPER20(604-612).
Springer DOI
2103
BibRef
Garcia, N.[Noa],
Vogiatzis, G.[George],
How to Read Paintings: Semantic Art Understanding with Multi-modal
Retrieval,
CVAA18(II:676-691).
Springer DOI
1905
BibRef
Ullah, M.A.,
Islam, M.M.,
Azman, N.B.,
Zaki, Z.M.,
An overview of Multimodal Sentiment Analysis research:
Opportunities and Difficulties,
IVPR17(1-6)
IEEE DOI
1704
Face
BibRef
Nemati, S.,
Naghsh-Nilchi, A.R.,
Exploiting evidential theory in the fusion of textual, audio, and
visual modalities for affective music video retrieval,
IPRIA17(222-228)
IEEE DOI
1712
emotion recognition, image fusion, inference mechanisms,
sentiment analysis, social networking (online),
Lexicon-based sentiment analysis
BibRef