Journals starting with vism

Vismod (Massachusetts Institute of Technology, Media Lab)
* 3D Modeling of Human Lip Motions
* 3D Structure from 2D Motion
* Action Reaction Learning: Automatic Visual Analysis and Synthesis of Interactive Behaviour
* Action Recognition Using Probabilistic Parsing
* Action-Reaction Learning: Analysis and Synthesis of Human Behaviour
* Affective Computing for HCI
* Affective Intelligence: The Missing Link
* Affective Objects
* Affective Pattern Classification
* Affective Wearables
* Algebraic Descriptions of Relative Affine Structure: Connections to Euclidean, Affine and Projective Structure
* Analysis, Interpretation and Synthesis of Facial Expressions
* Analyzing and Recognizing Walking Figures in XYT
* Analyzing Gait with Spatiotemporal Surfaces
* Appearance-Based Motion Recognition of Human Actions
* Application of Stochastic Grammars to Understanding Action
* Apply Mid-Level Vision Techniques for Video Data Compression and Manipulation
* Approximate World Models: Incorporating Qualitative and Linguistic Information into Vision Systems
* Automated Posture Analysis for detecting Learner's Interest Level
* Automatic Facial Action Analysis
* Automatic System for Model-Based Coding of Faces, An
* Bayesian Computer Vision System for Modeling Human Interactions, A
* Beyond Eigenfaces: Probabilistic Matching for Face Recognition
* Building HAL: Computers that sense, recognize, and respond to human emotion
* Cluster-Based Probability Model Applied to Image Restoration and Compression
* Computers that Recognize and Respond to User Emotion: Theoretical and Practical Implications
* Computing Optical Flow Distributions Using Spatio-Temporal Filters
* Conjoint Probabilistic Subband Modeling
* Content Access for Image/Video Coding: The Fourth Criterion
* Cooperative Robust Estimation Using Layers of Support
* Coupled Hidden Markov Models for Complex Action Recognition
* Detecting and Segmenting Periodic Motion
* Digital Processing of Affective Signals
* Disparity-Space Images and Large Occlusion Stereo
* Distributed Analysis and Representation of Visual Motion
* Distributed Representations of Image Velocity
* Divide and Conquer: Using Approximate World Models to Control View-Based Algorithms
* Dynaman: Recursive Modeling of Human Motion
* Dynamic Models of Human Motion
* DyPERS: Dynamic Personal Enhanced Reality System
* Eigenbehaviors: Identifying Structure in Routine
* Eigenheads for Reconstruction
* Exaggerated Consensus in Lossless Image Compression
* Expression Glasses: A Wearable Device for Facial Expression Recognition
* Face Recognition for Smart Environments
* Face Recognition Using View-Based and Modular Eigenspaces
* Facial Expression Recognition Using a Dynamic Model and Motion Energy
* Fast Constraint Propagation on Specialized Allen Networks and its Application to Action Recognition and Control
* Fast Lighting Independent Background Subtraction
* Finding Periodicity in Space and Time
* Finding Similar Patterns in Large Image Databases
* Finite-Element Framework for Correspondence and Shape Description, A
* Framework for Recognizing Multi-Agent Action from Visual Evidence, A
* Frustrating the User on Purpose: A Step Toward Building an Affective Computer
* Fully Automatic Upper Facial Action Recognition
* Gibbs Random Fields, Cooccurrences, and Texture Modeling
* Gibbs Random Fields: Temperature and Parameter Analysis
* Human Action Detection Using PNF Propagation of Temporal Constraints
* Incorporating Intensity Edges in the Recovery of Occlusion Regions
* Indoor-Outdoor Image Classification
* Intelligent Studios: Modeling Space and Action to Control TV Cameras
* Interactive Learning with a Society of Models
* Inverse Hollywood Problem: From video to scripts and storyboards via causal analysis, The
* KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment, The
* Large Occlusion Stereo
* Layered Representation for Motion Analysis
* Learning Visual Behavior for Gesture Analysis
* Luxomatic: Computer Vision for Puppeteering
* M-Lattice: A Novel Non-Linear Dynamical System and Its Application to Halftoning
* Markov/Gibbs Image Modeling: Temperature and Texture
* Markov/Gibbs Texture Modeling: Aura Matrices and Temperature Effects
* Mixtures of Eigen Features for Real-Time Structure from Texture
* Modal Matching for Correspondence and Recognition
* Modal Matching: A Method for Describing, Comparing, and Manipulating Digital Signals
* Model of Visual Motion Sensing
* Modeling and Prediction of Human Behavior
* Modeling Spatial and Temporal Textures
* Motion Field Histograms for Robust Modeling of Facial Expressions
* Movement, Activity, and Action: The Role of Knowledge in the Perception of Motion
* Multi-Scale Image Transforms
* Multi-sensor Data Fusion Using the Influence Model
* Multimodal Person Recognition Using Unconstrained Audio and Video
* New Miscibility Measure Explains the Behavior of Grayscale Texture Synthesized By Gibbs Random Fields
* Non-Separable Extensions of Quadrature Mirror Filters to Multiple Dimensions
* Nonlinear PHMMs for the Interpretation of Parameterized Gesture
* Novel Cluster-Based Probability Model for Texture Synthesis, Classification, and Compression
* Object Recognition and Categorization Using Modal Matching
* OBVIUS: Object-Based Vision and Image Understanding System
* On Modal Modeling for Medical Images: Underconstrained Shape Description and Data Compression
* On the Structure of Aura and Co-Occurrence Matrices for the Gibbs Texture Model
* On the Use of Nulling Filters to Separate Transparent Motions
* On Training Gaussian Radial Basis Functions for Image Coding
* Orientation-Sensitive Image Processing with M-Lattice: A Novel Non-Linear Dynamical System
* Orthogonal Pyramid Transforms For Image Coding
* Orthogonal Sub-Band Image Transforms
* Parameterized Structure from Motion for 3D Adaptive Feedback Tracking of Faces
* Parametric Hidden Markov Models for Gesture Recognition
* Parsing Multi-Agent Interactions
* Perceptually Organized EM: A Framework for Motion Segmentation That Combines Information about Form and Motion
* Periodicity, Directionality, and Randomness: Wold Features for Perceptual Pattern Recognition
* Pfinder: Real-Time Tracking of the Human Body
* Photobook: Tools for Content-Based Manipulation of Image Databases
* Physically-Based Combinations of Views: Representing Rigid and Nonrigid Motion
* PNF Calculus and the Detection of Actions Described by Temporal Intervals
* Probabilistic Object Recognition and Localization
* Probabilistic Parsing in Action Recognition
* Random Field Texture Coding
* Real Time Closed World Tracking
* Real Time Tracking and Modeling of Faces: An EKF-based Analysis by Synthesis Approach
* Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video
* Real-Time Head Nod and Shake Detector, A
* Real-Time Recognition of Activity Using Temporal Templates
* Real-Time Recognition with the Entire Brodatz Texture Database
* Real-time, fully automatic upper facial feature tracking
* Realtime Online Adaptive Gesture Recognition
* Recognition and Interpretation of Parametric Gesture
* Recognition of Human Body Motion Using Phase Space Constraints
* Recognition without Correspondence using Multidimensional Receptive Field Histograms
* Recognizing User's Context from Wearable Sensors: Baseline System
* Recovering the Temporal Structure of Natural Gesture
* Recursive Estimation of Motion, Structure, and Focal Length
* Representation and Recognition of Action in Interactive Spaces
* Representation and Recognition of Action Using Temporal Templates, The
* Representation and Visual Recognition of Complex, Multi-Agent Actions Using Belief Networks
* Representing Moving Images with Layers
* Representing Moving Images with Layers
* Restoration and Enhancement of Fingerprint Images Using M-Lattice: A Novel Non-Linear Dynamical System
* Restoration and Enhancement of Fingerprint Images Using M-Lattice: A Novel Non-Linear Dynamical System
* Robust Estimation of Multiple Models in the Structure from Motion Domain
* Sensei: A Real-time Recognition, Feedback, and Training System for T'ai Chi Gestures
* Sensing and Measurement of Frustration with Computers, The
* Sensing and Modeling Human Networks
* Separation of Transparent Motion into Layers Using Velocity-Tuned Mechanisms
* Shape Analysis of Brain Structures Using Physical and Experimental Modes
* Shiftable Multi-Scale Transforms: or What's Wrong with Orthonormal Wavelets
* Space-Time Gestures
* Spatio-Temporal Segmentation of Video Data
* State Based Technique for the Summarization and Recognition of Gesture, A
* Structured Patterns From Random Fields
* Subband Transforms
* Texture Orientation for Sorting Photos at a Glance
* ThingWorld Modeling System, The
* Three-Dimensional Model of Human Lip Motion, A
* Three-Dimensional Model of Human Lip Motions Trained from Video, A
* Toward computers that recognize and respond to user emotion
* Tracking Facial Motion
* Tracking Using a Local Closed-World Assumption: Tracking in the Football Domain
* Understanding Expressive Action
* Understanding Manipulation in Video
* Understanding Purposeful Human Motion
* Using Configuration States for the Representation and Recognition of Gesture
* View-Based and Modular Eigenspaces for Face Recognition
* Virtual Bellows: Constructing High Quality Stills from Video
* Vision System for Observing and Extracting Facial Action Parameters, A
* Vision Texture for Annotation
* Visual Tracking Using Closed-Worlds
* Visually Controlled Graphics
* Wearable Computing Based American Sign Language Recognizer, A
159 entries for Vismod



Last update: 16-Aug-18 18:56:26
Use price@usc.edu for comments.