Journals starting with abaw

ABAW21 * *Affective Behavior Analysis In-the-Wild
* Analysing Affective Behavior in the second ABAW2 Competition
* audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild, An
* Causal affect prediction model using a past facial image sequence
* Continuous Emotion Recognition with Audio-visual Leader-follower Attentive Fusion
* Emotion Recognition Based on Body and Context Fusion in the Wild
* Emotion Recognition With Sequential Multi-task Learning Technique
* Evaluating the Performance of Ensemble Methods and Voting Strategies for Dense 2D Pedestrian Detection in the Wild
* FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition
* Iterative Distillation for Better Uncertainty Estimates in Multitask Emotion Recognition
* MTMSN: Multi-Task and Multi-Modal Sequence Network for Facial Action Unit and Expression Recognition
* Multi-task Mean Teacher for Semi-supervised Facial Affective Behavior Analysis, A
* Multitask Multi-database Emotion Recognition
* Noisy Annotations Robust Consensual Collaborative Affect Expression Recognition
* Prior Aided Streaming Network for Multi-task Affective Analysis
* Public Life in Public Space (PLPS): A multi-task, multi-group video dataset for public life research
* Student Engagement Dataset
17 for ABAW21

ABAW22 * *Affective Behavior Analysis In-the-Wild
* ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges
* Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation
* Action unit detection by exploiting spatial-temporal and label-wise attention with transformer
* Attention-based Method for Multi-label Facial Action Unit Detection, An
* Best of Both Worlds: Combining Model-based and Nonparametric Approaches for 3D Human Body Estimation, The
* Bridging the Gap Between Automated and Human Facial Emotion Perception
* Classification of Facial Expression In-the-Wild based on Ensemble of Multi-head Cross Attention Networks
* Coarse-to-Fine Cascaded Networks with Smooth Predicting for Video Facial Expression Recognition
* Continuous Emotion Recognition using Visual-audio-linguistic Information: A Technical Report for ABAW3
* Cross Transferring Activity Recognition to Word Level Sign Language Detection
* Ensemble Approach for Facial Behavior Analysis in-the-wild Video, An
* Estimating Multiple Emotion Descriptors by Separating Description and Inference
* Facial Expression Classification using Fusion of Deep Neural Network in Video
* Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition, A
* Long-term Action Forecasting Using Multi-headed Attention-based Variational Recurrent Neural Networks
* MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition
* Model Level Ensemble for Facial Action Unit Recognition at the 3rd ABAW Challenge
* Multi-task Learning for Human Affect Prediction with Auditory-Visual Synchronized Representation
* NeuralAnnot: Neural Annotator for 3D Human Mesh Training Sets
* Three Stream Graph Attention Network using Dynamic Patch Selection for the classification of micro-expressions
* TikTok for good: Creating a diverse emotion expression database
* Time-Continuous Audiovisual Fusion with Recurrence vs Attention for In-The-Wild Affect Recognition
* Transformer-based Multimodal Information Fusion for Facial Expression Analysis
* Valence and Arousal Estimation based on Multimodal Temporal-Aware Features for Videos in the Wild
* Video-based Frame-level Facial Analysis of Affective Behavior on Mobile Devices using EfficientNets
* Video-based multimodal spontaneous emotion recognition using facial expressions and physiological signals
27 for ABAW22

ABAW23 * *Affective Behavior Analysis In-the-Wild
* ABAW5 Challenge: A Facial Affect Recognition Approach Utilizing Transformer Encoder and Audiovisual Fusion
* ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection and Emotional Reaction Intensity Estimation Challenges
* Analysis of Emotion Annotation Strength Improves Generalization in Speech Emotion Recognition Models
* Compound Expression Recognition In-the-wild with AU-assisted Meta Multi-task Learning
* Dual Branch Network for Emotional Reaction Intensity Estimation, A
* Dynamic Noise Injection for Facial Expression Recognition In-the-Wild
* EmotiEffNets for Facial Processing in Video-based Valence-Arousal Prediction, Expression Classification and Action Unit Detection
* Ensemble Spatial and Temporal Vision Transformer for Action Units Detection
* EVAEF: Ensemble Valence-Arousal Estimation Framework in the Wild
* Exploring Expression-related Self-supervised Learning and Spatial Reserve Pooling for Affective Behaviour Analysis
* Exploring Large-scale Unlabeled Faces to Enhance Facial Expression Recognition
* Facial Expression Recognition Based on Multi-modal Features for Videos in the Wild
* Frame Level Emotion Guided Dynamic Facial Expression Recognition with Emotion Grouping
* Inferring Affective Experience from the Big Picture Metaphor: A Two-dimensional Visual Breadth Model
* Integrating Holistic and Local Information to Estimate Emotional Reaction Intensity
* Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels
* Leveraging TCN and Transformer for effective visual-audio fusion in continuous emotion recognition
* Local Region Perception and Relationship Learning Combined with Feature Fusion for Facial Action Unit Detection
* Multi-modal Emotion Reaction Intensity Estimation with Temporal Augmentation
* Multi-modal Facial Affective Analysis based on Masked Autoencoder
* Multi-modal Information Fusion for Action Unit Detection in the Wild
* Multimodal Continuous Emotion Recognition: A Technical Report for ABAW5
* Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
* Relational Edge-Node Graph Attention Network for Classification of Micro-Expressions
* Spatial-Temporal Graph-Based AU Relationship Learning for Facial Action Unit Detection
* SPECTRE: Visual Speech-Informed Perceptual 3D Facial Expression Reconstruction from Videos
* t-RAIN: Robust generalization under weather-aliasing label shift attacks
* TempT: Temporal consistency for Test-time adaptation
* Unified Approach to Facial Affect Analysis: the MAE-Face Visual Representation, A
* Unmasking Your Expression: Expression-Conditioned GAN for Masked Face Inpainting
31 for ABAW23

ABAWE22 * *Affective Behavior Analysis In-the-Wild
* ABAW: Learning from Synthetic Data & Multi-task Learning Challenges
* Affective Behavior Analysis Using Action Unit Relation Graph and Multi-task Cross Attention
* Affective Behaviour Analysis Using Pretrained Model with Facial Prior
* Byel: Bootstrap Your Emotion Latent
* Deep Semantic Manipulation of Facial Videos
* Ensemble of Multi-task Learning Networks for Facial Expression Recognition In-the-wild with Learning from Synthetic Data
* Facial Affect Recognition Using Semi-supervised Learning with Adaptive Threshold
* Facial Expression Recognition In-the-wild with Deep Pre-trained Models
* Facial Expression Recognition with Mid-level Representation Enhancement and Graph Embedded Uncertainty Suppressing
* Geometric Pose Affordance: Monocular 3d Human Pose Estimation with Scene Constraints
* MT-emotieffnet for Multi-task Human Affective Behavior Analysis and Learning from Synthetic Data
* Multi-task Learning Framework for Emotion Recognition In-the-wild
* Peri: Part Aware Emotion Recognition in the Wild
* Two-aspect Information Interaction Model for ABAW4 Multi-task Challenge
15 for ABAWE22

Last update: 10-Apr-24 10:46:22
Use price@usc.edu for comments.