Conferences and workshops starting with acvr

ACVR13 * *International Workshop on Assistive Computer Vision and Robotics
* Fast and Precise HOG-Adaboost Based Visual Support System Capable to Recognize Pedestrian and Estimate Their Distance, A
* Mobile Visual Assistive Apps: Benchmarks of Vision Algorithm Performance
* Natural User Interfaces in Volume Visualisation Using Microsoft Kinect
* Robust Hand Pose Estimation Algorithm for Hand Rehabilitation, A
* Scene Perception and Recognition for Human-Robot Co-operation
* Tracking Posture and Head Movements of Impaired People During Interactions with Robots
7 for ACVR13

ACVR14 * *International Workshop on Assistive Computer Vision and Robotics
* 3D Glasses as Mobility Aid for Visually Impaired People
* 3D Layout Propagation to Improve Object Recognition in Egocentric Videos
* Associating Locations Between Indoor Journeys from Wearable Cameras
* Benchmark Dataset to Study the Representation of Food Images, A
* Calculating Reachable Workspace Volume for Use in Quantitative Medicine
* Combining Semi-autonomous Navigation with Manned Behaviour in a Cooperative Driving System for Mobile Robotic Telepresence
* Descending Stairs Detection with Low-Power Sensors
* Design and Preliminary Evaluation of a Finger-Mounted Camera and Feedback System to Enable Reading of Printed Text for the Blind, The
* Detection and Modelling of Staircases Using a Wearable Depth Sensor
* Egocentric Object Recognition Leveraging the 3D Shape of the Grasping Hand
* Experimental Analysis of Saliency Detection with Respect to Three Saliency Levels, An
* Eye Blink Detection Using Variance of Motion Vectors
* Face Recognition by 3D Registration for the Visually Impaired Using a RGB-D Sensor
* Fast and Flexible Computer Vision System for Implanted Visual Prostheses, A
* High Dynamic Range Imaging System for the Visually Impaired
* Intelligent Wheelchair to Enable Safe Mobility of the Disabled People with Motor and Cognitive Impairments, An
* Learning Pain from Emotion: Transferred HoT Data Representation for Pain Intensity Estimation
* Mobile Panoramic Vision for Assisting the Blind via Indexing and Localization
* Model-Based Motion Tracking of Infants
* Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage
* Neural Network Fusion of Color, Depth and Location for Object Instance Recognition on a Mobile Robot
* New Application of Smart Walker for Quantitative Analysis of Human Walking, A
* Personal Shopping Assistance and Navigator System for Visually Impaired People
* Polly: Telepresence from a Guide's Shoulder
* Real-Time Emotion Recognition from Natural Bodily Expressions in Child-Robot Interaction
* Recognizing Daily Activities in Realistic Environments Through Depth-Based User Tracking and Hidden Conditional Random Fields for MCI/AD Support
* Road-Crossing Assistance by Traffic Flow Analysis
* Robust Vision-Based Framework for Screen Readers, A
* Scene-Dependent Intention Recognition for Task Communication with Reduced Human-Robot Interaction
* Smart Camera Reconfiguration in Assisted Home Environments for Elderly Care
* Snippet Based Trajectory Statistics Histograms for Assistive Technologies
* System for Assisting the Visually Impaired in Localization and Grasp of Desired Objects, A
* Vision Correcting Displays Based on Inverse Blurring and Aberration Compensation
* Vision-Based SLAM and Moving Objects Tracking for the Perceptual Support of a Smart Walker Platform
* Visual Interaction Including Biometrics Information for a Socially Assistive Robotic Platform
* Visual SLAM System on Mobile Robot Supporting Localization Services to Visually Impaired People, A
* Way to Go! Detecting Open Areas Ahead of a Walking Person
* Wearable RGBD Indoor Navigation System for the Blind
39 for ACVR14

ACVR15 * *International Workshop on Assistive Computer Vision and Robotics
* Accurate Human-Limb Segmentation in RGB-D Images for Intelligent Mobility Assistance Robots
* Automatic Emotion Recognition in Robot-Children Interaction for ASD Treatment
* Deep Learning of Mouth Shapes for Sign Language
* Estimating Body Pose of Infants in Depth Images Using Random Ferns
* Evaluating Real-Time Mirroring of Head Gestures Using Smart Glasses
* Evaluation of Supervised, Novelty-Based and Hybrid Approaches to Fall Detection Using Silmee Accelerometer Data, An
* Fast and Accurate Eye Tracker Using Stroboscopic Differential Lighting, A
* Fine-Grained Product Class Recognition for Assisted Shopping
* Head Nod Detection from a Full 3D Model
* Improving Indoor Mobility of the Visually Impaired with Depth-Based Spatial Sound
* Intuitive Mobility Aid for Visually Impaired People Based on Stereo Vision, An
* Pedestrian Detection via Mixture of CNN Experts and Thresholded Aggregated Channel Features
* Quantifying Levodopa-Induced Dyskinesia Using Depth Camera
* Recognizing Personal Contexts from Egocentric Images
* Saliency Detection Using Quaternion Sparse Reconstruction
* Single-Frame Indexing for 3D Hand Pose Estimation
* Stereo Vision Approach for Cooperative Robotic Movement Therapy, A
* Structured Committee for Food Recognition, A
* Summarizing While Recording: Context-Based Highlight Detection for Egocentric Videos
* Visual Attention-Guided Approach to Monitoring of Medication Dispensing Using Multi-location Feature Saliency Patterns
21 for ACVR15

ACVR16 * *International Workshop on Assistive Computer Vision and Robotics
* 3D Human Posture Approach for Activity Recognition Based on Depth Camera, A
* Automatic Video Captioning via Multi-channel Sequential Encoding
* Brazilian Sign Language Recognition Using Kinect
* Combining Human Body Shape and Pose Estimation for Robust Upper Body Tracking Using a Depth Sensor
* Deep Eye-CU (DECU): Summarization of Patient Motion in the ICU
* Evaluation of Infants with Spinal Muscular Atrophy Type-I Using Convolutional Neural Networks
* Fall Detection Based on Depth-Data in Practice
* Feasibility Analysis of Eye Typing with a Standard Webcam
* Human Interaction Prediction Using Deep Temporal Features
* Human Joint Angle Estimation and Gesture Recognition for Assistive Robotic Vision
* Human-Drone-Interaction: A Case Study to Investigate the Relation Between Autonomy and User Experience
* Integrated Framework for 24-hours Fire Detection, An
* Interactive Multimedia System for Treating Autism Spectrum Disorder, An
* ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind
* Learning and Detecting Objects with a Mobile Robot to Assist Older Adults in Their Homes
* Mobile Mapping and Visualization of Indoor Structures to Simplify Scene Understanding and Location Awareness
* Multi-level Net: A Visual Saliency Prediction Model
* Perfect Accuracy with Human-in-the-Loop Object Detection
* Real-Time Vehicular Vision System to Seamlessly See-Through Cars, A
* Smart Toothbrushes: Inertial Measurement Sensors Fusion with Visual Tracking
* Solving Rendering Issues in Realistic 3D Immersion for Visual Rehabilitation
* Technological Framework to Support Standardized Protocols for the Diagnosis and Assessment of ASD, A
* Using Computer Vision to See
* Validation of Automated Mobility Assessment Using a Single 3D Sensor
* Vision-Based SLAM Navigation for Vibro-Tactile Human-Centered Indoor Guidance
* Visual and Human-Interpretable Feedback for Assisting Physical Activity
27 for ACVR16

ACVR17 * *International Workshop on Assistive Computer Vision and Robotics
* Adaptive Binarization for Weakly Supervised Affordance Segmentation
* BEHAVE: Behavioral Analysis of Visual Events for Assisted Living Scenarios
* Computer Vision Based Approach for Understanding Emotional Involvements in Children with Autism Spectrum Disorders, A
* Computer Vision for the Visually Impaired: the Sound of Vision System
* Depth and Motion Cues with Phosphene Patterns for Prosthetic Vision
* Diabetes60: Inferring Bread Units From Food Images Using Fully Convolutional Neural Networks
* DSD: Depth Structural Descriptor for Edge-Based Assistive Navigation
* Estimating Position Velocity in 3D Space from Monocular Video Sequences Using a Deep Neural Network
* Improved Strategies for HPE Employing Learning-by-Synthesis Approaches
* Inertial-Vision: Cross-Domain Knowledge Transfer for Wearable Sensors
* Innovative Salient Object Detection Using Center-Dark Channel Prior, An
* Long Short-Term Memory Convolutional Neural Network for First-Person Vision Activity Recognition, A
* Mind the Gap: Virtual Shorelines for Blind and Partially Sighted People
* Postural Assessment in Dentistry Based on Multiple Markers Tracking
* Recurrent Assistance: Cross-Dataset Training of LSTMs on Kitchen Tasks
* Robust Human Pose Tracking For Realistic Service Robot Applications
* Seeing Without Sight: An Automatic Cognition System Dedicated to Blind and Visually Impaired People
* Shared Autonomy Approach for Wheelchair Navigation Based on Learned User Preferences, A
* To Veer or Not to Veer: Learning from Experts How to Stay Within the Crosswalk
* Use of Thermal Point Cloud for Thermal Comfort Measurement and Human Pose Estimation in Robotic Monitoring
* Using Technology Developed for Autonomous Cars to Help Navigate Blind People
* Vision-Based Fallen Person Detection for the Elderly
* Vision-Based System for In-Bed Posture Tracking, A
* Wearable Assistive Technology for the Visually Impaired with Door Knob Detection and Real-Time Feedback for Hand-to-Handle Manipulation, A
25 for ACVR17



Last update: 15-Oct-18 10:07:48
Use price@usc.edu for comments.