Index for bowd

Bowd, C.[Christopher] * 2023: One-Vote Veto: Semi-Supervised Learning for Low-Shot Glaucoma Diagnosis

Bowden, A.K. * 2016: Towards biometric identification using 3D epidermal and dermal fingerprints
* 2019: Automatically Determining the Confocal Parameters From OCT B-Scans for Quantification of the Attenuation Coefficients

Bowden, D. * 2016: Predictive ability of anthropomorphic metrics in determining age and sex of children

Bowden, D.C.[Daniel C.] * 2021: Linear Inversion Approach to Measuring the Composition and Directionality of the Seismic Noise Field, A

Bowden, J.B.[Jared B.] * 2010: Boundary Learning by Optimization with Topological Constraints

Bowden, R. * 1997: Real-time Dynamic Deformable Meshes for Volumetric Segmentation and Visualisation
* 1998: Reconstructing 3d Pose and Motion from a Single Camera View
* 2000: Building Temporal Models for Gesture Recognition
* 2000: Non-linear statistical models for the 3D reconstruction of human pose and motion from monocular image sequences
* 2001: Adaptive Visual System for Tracking Low Resolution Colour Targets
* 2002: non-linear model of shape and motion for tracking finger spelt American sign language, A
* 2003: real time adaptive visual surveillance system for tracking low-resolution colour targets in dynamically changing scenes, A
* 2003: Special Issue Introduction, Bayesian Analysis
* 2004: boosted classifier tree for hand shape detection, A
* 2004: Linguistic Feature Vector for the Visual Interpretation of Sign Language, A
* 2004: Metric mixtures for mutual information (M3I) tracking
* 2004: Minimal Training, Large Lexicon, Unconstrained Sign Language Recognition
* 2004: Progress in Sign and Gesture Recognition
* 2004: View-based Location and Tracking of Body Parts for Visual Interaction
* 2005: Detection and Tracking of Humans by Probabilistic Body Part Assembly
* 2005: Incremental Modelling of the Posterior Distribution of Objects for Inter and Intra Camera Tracking
* 2005: Learning multi-kernel distance functions using relative comparisons
* 2005: Simultaneous Modeling and Tracking (SMAT) of Feature Sets
* 2005: Towards automated wide area visual surveillance: tracking objects between spatially-separated, uncalibrated views
* 2005: Unsupervised Symbol Grounding and Cognitive Bootstrapping in Cognitive Vision
* 2006: Image template matching using Mutual Information and NP-Windows
* 2006: Learning Distances for Arbitrary Visual Features
* 2006: Learning Wormholes for Sparsely Labelled Clustering
* 2006: N-tier Simultaneous Modelling and Tracking for Arbitrary Warps
* 2006: Real-Time Upper Body Detection and 3D Pose Estimation in Monoscopic Images
* 2006: Tracking Objects Across Cameras by Incrementally Learning Inter-camera Colour Calibration and Patterns of Activity
* 2006: Unifying Framework for Mutual Information Methods for Use in Non-linear Optimisation, A
* 2006: Viewpoint invariant exemplar-based 3D human tracking
* 2007: Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers
* 2007: Large Lexicon Detection of Sign Language
* 2007: Linear Predictors for Fast Simultaneous Modeling and Tracking
* 2007: Multi Person Tracking Within Crowded Scenes
* 2008: Estimating the Joint Statistics of Images Using Nonparametric Windows with Application to Registration Using Mutual Information
* 2008: Generative Model for Motion Synthesis and Blending Using Probability Density Estimation, A
* 2008: Incremental, scalable tracking of objects inter camera
* 2008: Mutual Information for Lucas-Kanade Tracking (MILK): An Inverse Compositional Formulation
* 2008: Online Learning and Partitioning of Linear Displacement Predictors for Tracking
* 2008: Scale Invariant Action Recognition Using Compound Features Mined from Dense Spatio-temporal Corners
* 2008: Spatio-temporal feature recognition using randomised Ferns
* 2009: Accurate fusion of robot, camera and wireless sensors for surveillance applications
* 2009: Action recognition using Randomised Ferns
* 2009: effect of pose on Facial Expression Recognition, The
* 2009: Fast realistic multi-action recognition using mined dense spatio-temporal features
* 2009: Feature selection of facial displays for detection of non verbal communication in natural conversation
* 2009: Learning signs from subtitles: A weakly supervised approach to sign language recognition
* 2009: Online learning of robust facial feature trackers
* 2009: Problem solving through imitation
* 2009: Real-time motion control using pose space probability density estimation
* 2009: Robust Facial Feature Tracking Using Selected Multi-resolution Linear Predictors
* 2010: Affordance Mining: Forming Perception through Action
* 2010: Facial Expression Recognition Using Spatiotemporal Boosted Discriminatory Classifiers
* 2010: Learning Pre-attentive Driving Behaviour from Holistic Visual Features
* 2010: Multi-view Pose and Facial Expression Recognition
* 2010: Social Interactive Human Video Synthesis
* 2011: Action Recognition Using Mined Hierarchical Compound Features
* 2011: Capturing the relative distribution of features for action recognition
* 2011: Cultural factors in the regression of non-verbal communication perception
* 2011: Driving me around the bend: Learning to drive from visual gist
* 2011: iGroup: Weakly supervised image and video grouping
* 2011: Kinecting the dots: Particle based scene flow from depth sensors
* 2011: Learning Sequential Patterns for Lipreading
* 2011: Learning temporal signatures for Lip Reading
* 2011: Linear Regression and Adaptive Appearance Models for Fast Simultaneous Modelling and Tracking
* 2011: Local binary patterns for multi-view facial expression recognition
* 2011: MIMiC: Multimodal Interactive Motion Controller
* 2011: Push and Pull: Iterative grouping of media
* 2011: Putting the pieces together: Connected Poselets for human pose estimation
* 2011: Reading the signs: A video based sign dictionary
* 2011: Robust Facial Feature Tracking Using Shape-Constrained Multiresolution-Selected Linear Predictors
* 2011: Spelling it out: Real-time ASL fingerspelling recognition
* 2011: There Is More Than One Way to Get Out of a Car: Automatic Mode Finding for Action Recognition in the Wild
* 2011: Visualisation and prediction of conversation interest through mined social signals
* 2012: Fuzzy encoding for image classification using Gustafson-Kessel algorithm
* 2012: Go with the Flow: Hand Trajectories in 3D via Clustered Scene Flow
* 2012: Meeting in the Middle: A top-down and bottom-up approach to detect pedestrians
* 2012: Picture Is Worth a Thousand Tags: Automatic Web Based Image Tag Expansion, A
* 2012: Sign Language Recognition using Sequential Pattern Trees
* 2012: Tracking the Untrackable: How to Track When Your Object Is Featureless
* 2013: Accurate static pose estimation combining direct regression and geodesic extrema
* 2013: Autonomous navigation and sign detector learning
* 2013: Hollywood 3D: Recognizing Actions in 3D Natural Scenes
* 2013: Improving recognition and identification of facial areas involved in Non-Verbal Communication by feature selection
* 2013: Long-Term Tracking through Failure Cases
* 2013: May the force be with you: Force-aligned signwriting for automatic subunit annotation of corpora
* 2013: Multi-touchless: Real-time fingertip detection and tracking using geodesic maxima
* 2013: Non-linear predictors for facial feature tracking across pose and expression
* 2013: Visual Object Tracking VOT2013 Challenge Results, The
* 2014: 2D or Not 2D: Bridging the Gap Between Tracking and Structure from Motion
* 2014: Capturing relative motion and finding modes for action recognition in the wild
* 2014: Data Mining for Action Recognition
* 2014: Guest Editorial: Tracking, Detection and Segmentation
* 2014: Natural Action Recognition Using Invariant 3D Motion Encoding
* 2014: Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition
* 2014: Scene Flow Estimation using Intelligent Cost Functions
* 2014: Scene Particles: Unregularized Particle-Based Scene Flow Estimation
* 2014: Sign Spotting Using Hierarchical Sequential Patterns with Temporal Intervals
* 2014: Visual Object Tracking VOT2014 Challenge Results, The
* 2015: Combining discriminative and model based approaches for hand pose estimation
* 2015: Deep Learning of Mouth Shapes for Sign Language
* 2015: Dense Rigid Reconstruction from Unstructured Discontinuous Video
* 2015: Exploiting High Level Scene Cues in Stereo Reconstruction
* 2015: Exploring Causal Relationships in Visual Object Tracking
* 2015: Geometric Mining: Scaling Geometric Hashing to Large Datasets
* 2016: Deep Hand: How to Train a CNN on 1 Million Hand Images When Your Data is Continuous and Weakly Labelled
* 2016: Deep Sign: Hybrid CNN-HMM for Continuous Sign Language Recognition
* 2016: Direct-from-Video: Unsupervised NRSfM
* 2016: Next-Best Stereo: Extending Next-Best View Optimisation For Collaborative Sensors
* 2016: Texture-Independent Long-Term Tracking Using Virtual Corners
* 2016: Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results, The
* 2016: Using Convolutional 3D Neural Networks for User-independent continuous gesture recognition
* 2016: Visual Object Tracking VOT2016 Challenge Results, The
* 2017: Guided optimisation through classification and regression for hand pose estimation
* 2017: Hollywood 3D: What are the Best 3D Features for Action Recognition?
* 2017: Image and video mining through online learning
* 2017: Particle Filter Based Probabilistic Forced Alignment for Continuous Gesture Recognition
* 2017: Stereo reconstruction using top-down cues
* 2017: SubUNets: End-to-End Hand Shape and Continuous Sign Language Recognition
* 2017: Taking the Scenic Route to 3D: Optimising Reconstruction from Moving Cameras
* 2017: TMAGIC: A Model-Free 3D Tracker
* 2017: Visual Object Tracking VOT2017 Challenge Results, The
* 2018: Deep Sign: Enabling Robust Statistical Continuous Sign Language Recognition via Hybrid CNN-HMMs
* 2018: Localisation via Deep Imagination: Learn the Features Not the Map
* 2018: Neural Sign Language Translation
* 2018: Sixth Visual Object Tracking VOT2018 Challenge Results, The
* 2019: HARD-PnP: PnP Optimization Using a Hybrid Approximate Representation
* 2019: Scale-Adaptive Neural Dense Features: Learning via Hierarchical Context Aggregation
* 2020: DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning
* 2020: Gated Variational AutoEncoders: Incorporating Weak Supervision to Encourage Disentanglement
* 2020: NestedVAE: Isolating Common Factors via Weak Supervision
* 2020: Progressive Transformers for End-to-end Sign Language Production
* 2020: Same Features, Different Day: Weakly Supervised Feature Learning for Seasonal Invariance
* 2020: SeDAR: Reading Floorplans Like a Human: Using Deep Learning to Enable Human-Inspired Localisation
* 2020: Sign Language Transformers: Joint End-to-End Sign Language Recognition and Translation
* 2020: Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks
* 2020: Weakly Supervised Learning with Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos
* 2021: Anonysign: Novel Human Appearance Synthesis for Sign Language Video Anonymisation
* 2021: Content4All Open Research Sign Language Translation Datasets
* 2021: Continuous 3D Multi-Channel Sign Language Production via Progressive Transformers and Mixture Density Networks
* 2021: Evaluating the Immediate Applicability of Pose Estimation for Sign Language Recognition
* 2021: Human Pose Manipulation and Novel View Synthesis using Differentiable Rendering
* 2021: Looking for the Signs: Identifying Isolated Sign Instances in Continuous Video Footage
* 2021: Mixed SIGNals: Sign Language Production via a Mixture of Motion Primitives
* 2021: Shadow-Mapping for Unsupervised Neural Causal Discovery
* 2021: Skeletor: Skeletal Transformers for Robust Body-Pose Estimation
* 2021: Survey of Deep Learning Applications to Autonomous Vehicle Control, A
* 2021: VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts
* 2022: Hierarchical I3D for Sign Spotting
* 2022: Medusa: Universal Feature Learning via Attentional Multitasking
* 2022: Pedestrian next to the Lamppost Adaptive Object Graphs for Better Instantaneous Mapping, The
* 2022: Signing at Scale: Learning to Co-Articulate Signs for Large-Scale Photo-Realistic Sign Language Production
* 2023: Denoising Diffusion for 3D Hand Pose Estimation from Images
* 2023: Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse
* 2023: Kick Back & Relax: Learning to Reconstruct the World by Watching SlowTV
* 2023: Learning Adaptive Neighborhoods for Graph Neural Networks
* 2023: Learnt Contrastive Concept Embeddings for Sign Recognition
* 2023: Monocular Depth Estimation Challenge, The
* 2023: Second Monocular Depth Estimation Challenge, The
Includes: Bowden, R. Bowden, R.[Richard]
157 for Bowden, R.

Bowdish, J.[Joshua] * 2012: Bayesian geometric modeling of indoor scenes
* 2013: Understanding Bayesian Rooms Using Composite 3D Object Models

Index for "b"


Last update: 27-Apr-24 12:11:53
Use price@usc.edu for comments.