Co-Author Listing * AffectON: Incorporating Affect Into Dialog Generation
* Analysis of Head Gesture and Prosody Patterns for Prosody-Driven Head-Gesture Animation
* Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations
* Comparison of Phoneme and Viseme Based Acoustic Units for Speech Driven Realistic Lip Animation
* Discriminative Analysis of Lip Motion Features for Speaker Identification and Speech-Reading
* Discriminative lip-motion features for biometric speaker identification
* Domain Adaptation for Food Intake Classification With Teacher/Student Learning
* Emotion Dependent Domain Adaptation for Speech Driven Affective Facial Feature Synthesis
* Estimation and Analysis of Facial Animation Parameter Patterns
* Learn2Dance: Learning Statistical Music-to-Dance Mappings for Choreography Synthesis
* Multifaceted Engagement in Social Interaction with a Machine: The JOKER Project
* Multimodal speaker identification with audio-video processing
* Speech-Driven Automatic Facial Expression Synthesis
* Training Socially Engaging Robots: Modeling Backchannel Behaviors with Batch Reinforcement Learning
* Unsupervised dance figure analysis from video for dancing Avatar animation
* Use of Line Spectral Frequencies for Emotion Recognition from Speech
Includes: Erzin, E.[Engin] Erzin, E.
16 for Erzin, E.