14.5.7 Multi-Modal Learning

Learning. Multi-Modal Learning. Multimodal Learning. Retrieval:
See also Cross-Modal Indexing, Cross-Modal Retrieval.

Zahalka, J., Rudinac, S.[Stevan], Worring, M.[Marcel],
Interactive Multimodal Learning for Venue Recommendation,
MultMed(17), No. 12, December 2015, pp. 2235-2244.
IEEE DOI 1512
Cities and towns BibRef

Zhang, Q.[Qian], Tian, Y.[Yuan], Yang, Y.P.[Yi-Ping], Pan, C.H.[Chun-Hong],
Automatic Spatial-Spectral Feature Selection for Hyperspectral Image via Discriminative Sparse Multimodal Learning,
GeoRS(53), No. 1, January 2015, pp. 261-279.
IEEE DOI 1410
feature selection BibRef

Kaya, S.[Semih], Vural, E.[Elif],
Learning Multi-Modal Nonlinear Embeddings: Performance Bounds and an Algorithm,
IP(30), 2021, pp. 4384-4394.
IEEE DOI 2104
BibRef
Earlier:
Multi-Modal Learning With Generalizable Nonlinear Dimensionality Reduction,
ICIP19(2139-2143)
IEEE DOI 1910
Training, Kernel, Interpolation, Data models, Geometry, Learning systems, Deep learning, Multi-modal learning, RBF interpolators. Cross-modal learning, multi-view learning, cross-modal retrieval, nonlinear embeddings. BibRef

Liu, C.X.[Chun-Xiao], Mao, Z.D.[Zhen-Dong], Zhang, T.Z.[Tian-Zhu], Liu, A.A.[An-An], Wang, B.[Bin], Zhang, Y.D.[Yong-Dong],
Focus Your Attention: A Focal Attention for Multimodal Learning,
MultMed(24), 2022, pp. 103-115.
IEEE DOI 2202
Semantics, Task analysis, Visualization, Interference, Stacking, Neural networks, Feature extraction, Focal attention, multimodal learning BibRef

Xu, P.[Peng], Zhu, X.T.[Xia-Tian], Clifton, D.A.[David A.],
Multimodal Learning With Transformers: A Survey,
PAMI(45), No. 10, October 2023, pp. 12113-12132.
IEEE DOI 2310
BibRef

Sun, Y.[Ya], Mai, S.[Sijie], Hu, H.F.[Hai-Feng],
Learning to Learn Better Unimodal Representations via Adaptive Multimodal Meta-Learning,
AffCom(14), No. 3, July 2023, pp. 2209-2223.
IEEE DOI 2310
BibRef

Mai, S.[Sijie], Zeng, Y.[Ying], Hu, H.F.[Hai-Feng],
Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations,
MultMed(25), 2023, pp. 4121-4134.
IEEE DOI 2310
BibRef

Zeng, Y.[Ying], Mai, S.[Sijie], Yan, W.J.[Wen-Jun], Hu, H.F.[Hai-Feng],
Multimodal Reaction: Information Modulation for Cross-Modal Representation Learning,
MultMed(26), 2024, pp. 2178-2191.
IEEE DOI 2402
Catalysts, Impurities, Representation learning, Purification, Bit error rate, Noise measurement, Information filters, multimodal learning BibRef

Wei, Y.[Yake], Hu, D.[Di], Du, H.H.[Heng-Hui], Wen, J.R.[Ji-Rong],
On-the-Fly Modulation for Balanced Multimodal Learning,
PAMI(47), No. 1, January 2025, pp. 469-485.
IEEE DOI 2412
Training, Visualization, Modulation, Optimization, Data models, Analytical models, Stochastic processes, Multimodal learning, on-the-fly prediction modulation BibRef

Peng, X.K.[Xiao-Kang], Wei, Y.[Yake], Deng, A.D.[An-Dong], Wang, D.[Dong], Hu, D.[Di],
Balanced Multimodal Learning via On-the-fly Gradient Modulation,
CVPR22(8228-8237)
IEEE DOI 2210
Adaptation models, Codes, Gaussian noise, Modulation, Task analysis, Vision+X, Video analysis and understanding BibRef

Li, Y.X.[Yong-Xiang], Qin, Y.[Yang], Sun, Y.[Yuan], Peng, D.Z.[De-Zhong], Peng, X.[Xi], Hu, P.[Peng],
RoMo: Robust Unsupervised Multimodal Learning With Noisy Pseudo Labels,
IP(33), 2024, pp. 5086-5097.
IEEE DOI 2410
Noise measurement, Semantics, Noise, Predictive models, Feature extraction, Annotations, robust discriminant learning BibRef

Zeng, P.X.[Peng-Xin], Li, Y.F.[Yun-Fan], Hu, P.[Peng], Peng, D.Z.[De-Zhong], Lv, J.C.[Jian-Cheng], Peng, X.[Xi],
Deep Fair Clustering via Maximizing and Minimizing Mutual Information: Theory, Algorithm and Metric,
CVPR23(23986-23995)
IEEE DOI 2309
WWW Link. BibRef

Reza, M.K.[Md Kaykobad], Prater-Bennette, A.[Ashley], Asif, M.S.[M. Salman],
Robust Multimodal Learning With Missing Modalities via Parameter-Efficient Adaptation,
PAMI(47), No. 2, February 2025, pp. 742-754.
IEEE DOI 2501
Adaptation models, Training, Computational modeling, Robustness, Modulation, Transforms, Sentiment analysis, Data models, robust multimodal learning BibRef

Lou, Z.Z.[Zheng-Zheng], Xue, H.[Hang], Wang, Y.Z.[Yan-Zheng], Zhang, C.Y.[Chao-Yang], Yang, X.[Xin], Hu, S.Z.[Shi-Zhe],
Parameter-Free Deep Multi-Modal Clustering With Reliable Contrastive Learning,
IP(34), 2025, pp. 2628-2640.
IEEE DOI 2505
Code:
WWW Link.
Contrastive learning, Reliability, Feature extraction, Noise, Representation learning, Training, Data mining, Semantics, contrastive learning BibRef

Zong, Y.[Yongshuo], Aodha, O.M.[Oisin Mac], Hospedales, T.M.[Timothy M.],
Self-Supervised Multimodal Learning: A Survey,
PAMI(47), No. 7, July 2025, pp. 5299-5318.
IEEE DOI 2506
Survey, Multi-Modal Learning. Annotations, Task analysis, Data models, Training, Surveys, Reviews, Magnetic heads, Alignment, deep learning, multimodal learning, self-supervised learning BibRef

Ma, M.R.[Meng-Ru], Ma, W.P.[Wen-Ping], Jiao, L.C.[Li-Cheng], Li, L.L.[Ling-Ling], Liu, X.[Xu], Liu, F.[Fang], Yang, S.Y.[Shu-Yuan], Guo, Y.W.[Yu-Wei],
A 3D Self-Awareness Diffusion Network for Multimodal Classification,
MultMed(27), 2025, pp. 3462-3475.
IEEE DOI 2506
Noise reduction, Feature extraction, Diffusion models, Solid modeling, Remote sensing, Brain modeling, Image fusion, remote sensing images BibRef
Li, M.S.[Ming-Sheng], Chen, X.[Xin], Zhang, C.[Chi], Chen, S.J.[Si-Jin], Zhu, H.Y.[Hong-Yuan], Yin, F.[Fukun], Li, Z.Y.[Zhuo-Yuan], Yu, G.[Gang], Chen, T.[Tao],
M3DBench: Towards Omni 3D Assistant with Interleaved Multi-modal Instructions,
ECCV24(LVIII: 41-59).
Springer DOI 2412
BibRef

Gao, J.[Jin], Gan, L.[Lei], Li, Y.[Yuankai], Ye, Y.X.[Yi-Xin], Wang, D.[Dequan],
Dissecting Dissonance: Benchmarking Large Multimodal Models Against Self-contradictory Instructions,
ECCV24(LVII: 404-420).
Springer DOI 2412
BibRef

Kapoor, R.[Raghav], Butala, Y.P.[Yash Parag], Russak, M.[Melisa], Koh, J.Y.[Jing Yu], Kamble, K.[Kiran], Al Shikh, W.[Waseem], Salakhutdinov, R.[Ruslan],
OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web,
ECCV24(LXVIII: 161-178).
Springer DOI 2412
BibRef

Shao, C.Z.[Cong-Zhang], Luo, G.Y.[Gui-Yang], Yuan, Q.[Quan], Chen, Y.[Yifu], Liu, Y.L.[Yi-Lin], Gong, K.[Kexin], Li, J.L.[Jing-Lin],
Hetecooper: Feature Collaboration Graph for Heterogeneous Collaborative Perception,
ECCV24(LIV: 162-178).
Springer DOI 2412
BibRef

Nedungadi, V.[Vishal], Kariryaa, A.[Ankit], Oehmcke, S.[Stefan], Belongie, S.[Serge], Igel, C.[Christian], Lang, N.[Nico],
MMEARTH: Exploring Multi-modal Pretext Tasks for Geospatial Representation Learning,
ECCV24(LXIV: 164-182).
Springer DOI 2412
BibRef

Xue, L.[Le], Yu, N.[Ning], Zhang, S.[Shu], Panagopoulou, A.[Artemis], Li, J.[Junnan], Martín-Martín, R.[Roberto], Wu, J.J.[Jia-Jun], Xiong, C.M.[Cai-Ming], Xu, R.[Ran], Niebles, J.C.[Juan Carlos], Savarese, S.[Silvio],
ULIP-2: Towards Scalable Multimodal Pre-Training for 3D Understanding,
CVPR24(27081-27091)
IEEE DOI 2410
Code:
WWW Link.
Representation learning, Training, Point cloud compression, Solid modeling, Shape, Annotations, 3D vision, Multimodal learning BibRef

Wei, S.[Shicai], Luo, Y.[Yang], Wang, Y.[Yuji], Luo, C.[Chunbo],
Robust Multimodal Learning via Representation Decoupling,
ECCV24(XLII: 38-54).
Springer DOI 2412
Code:
WWW Link. BibRef

Wei, Y.[Yake], Li, S.W.[Si-Wei], Feng, R.X.[Ruo-Xuan], Hu, D.[Di],
Diagnosing and Re-learning for Balanced Multimodal Learning,
ECCV24(LXIV: 71-86).
Springer DOI 2412
BibRef

Kim, D.G.[Dong-Geun], Kim, T.[Taesup],
Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models,
ECCV24(LXXXVI: 171-187).
Springer DOI 2412
BibRef

Swetha, S.[Sirnam], Rizve, M.N.[Mamshad Nayeem], Shvetsova, N.[Nina], Kuehne, H.[Hilde], Shah, M.[Mubarak],
Preserving Modality Structure Improves Multi-Modal Learning,
ICCV23(21936-21946)
IEEE DOI 2401
Code:
WWW Link.
BibRef

Wang, H.[Hu], Chen, Y.H.[Yuan-Hong], Ma, C.[Congbo], Avery, J.[Jodie], Hull, L.[Louise], Carneiro, G.[Gustavo],
Multi-Modal Learning with Missing Modality via Shared-Specific Feature Modelling,
CVPR23(15878-15887)
IEEE DOI 2309
BibRef

Kim, E., Kang, W.Y., On, K., Heo, Y., Zhang, B.,
Hypergraph Attention Networks for Multimodal Learning,
CVPR20(14569-14578)
IEEE DOI 2008
Semantics, Visualization, Task analysis, Knowledge discovery, Message passing, Computational modeling, Biological neural networks BibRef

Tian, L., Hong, X., Fan, C., Ming, Y., Pietikäinen, M., Zhao, G.,
Sparse Tikhonov-Regularized Hashing for Multi-Modal Learning,
ICIP18(3793-3797)
IEEE DOI 1809
Feature extraction, Linear programming, Encoding, Testing, Task analysis, Stability analysis, Optimization, L0norm Sparsity Constraint BibRef

Kim, J.[Jaekyum], Koh, J.[Junho], Kim, Y.[Yecheol], Choi, J.[Jaehyung], Hwang, Y.[Youngbae], Choi, J.W.[Jun Won],
Robust Deep Multi-modal Learning Based on Gated Information Fusion Network,
ACCV18(IV:90-106).
Springer DOI 1906
BibRef

Huang, Y.[Yan], Wang, W.[Wei], Wang, L.[Liang],
Instance-Aware Image and Sentence Matching with Selective Multimodal LSTM,
CVPR17(7254-7262)
IEEE DOI 1711
Aggregates, Detectors, Image color analysis, Roads BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Learning Model Descriptions.


Last update: Jul 7, 2025 at 14:35:55