Journals and Conferences starting with acvr

ACVR13 * *International Workshop on Assistive Computer Vision and Robotics
* Fast and Precise HOG-Adaboost Based Visual Support System Capable to Recognize Pedestrian and Estimate Their Distance, A
* Mobile Visual Assistive Apps: Benchmarks of Vision Algorithm Performance
* Natural User Interfaces in Volume Visualisation Using Microsoft Kinect
* Robust Hand Pose Estimation Algorithm for Hand Rehabilitation, A
* Scene Perception and Recognition for Human-Robot Co-operation
* Tracking Posture and Head Movements of Impaired People During Interactions with Robots
7 for ACVR13

ACVR14 * *International Workshop on Assistive Computer Vision and Robotics
* 3D Glasses as Mobility Aid for Visually Impaired People
* 3D Layout Propagation to Improve Object Recognition in Egocentric Videos
* Associating Locations Between Indoor Journeys from Wearable Cameras
* Benchmark Dataset to Study the Representation of Food Images, A
* Calculating Reachable Workspace Volume for Use in Quantitative Medicine
* Combining Semi-autonomous Navigation with Manned Behaviour in a Cooperative Driving System for Mobile Robotic Telepresence
* Descending Stairs Detection with Low-Power Sensors
* Design and Preliminary Evaluation of a Finger-Mounted Camera and Feedback System to Enable Reading of Printed Text for the Blind, The
* Detection and Modelling of Staircases Using a Wearable Depth Sensor
* Egocentric Object Recognition Leveraging the 3D Shape of the Grasping Hand
* Experimental Analysis of Saliency Detection with Respect to Three Saliency Levels, An
* Eye Blink Detection Using Variance of Motion Vectors
* Face Recognition by 3D Registration for the Visually Impaired Using a RGB-D Sensor
* Fast and Flexible Computer Vision System for Implanted Visual Prostheses, A
* High Dynamic Range Imaging System for the Visually Impaired
* Intelligent Wheelchair to Enable Safe Mobility of the Disabled People with Motor and Cognitive Impairments, An
* Learning Pain from Emotion: Transferred HoT Data Representation for Pain Intensity Estimation
* Mobile Panoramic Vision for Assisting the Blind via Indexing and Localization
* Model-Based Motion Tracking of Infants
* Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage
* Neural Network Fusion of Color, Depth and Location for Object Instance Recognition on a Mobile Robot
* New Application of Smart Walker for Quantitative Analysis of Human Walking, A
* Personal Shopping Assistance and Navigator System for Visually Impaired People
* Polly: Telepresence from a Guide's Shoulder
* Real-Time Emotion Recognition from Natural Bodily Expressions in Child-Robot Interaction
* Recognizing Daily Activities in Realistic Environments Through Depth-Based User Tracking and Hidden Conditional Random Fields for MCI/AD Support
* Road-Crossing Assistance by Traffic Flow Analysis
* Robust Vision-Based Framework for Screen Readers, A
* Scene-Dependent Intention Recognition for Task Communication with Reduced Human-Robot Interaction
* Smart Camera Reconfiguration in Assisted Home Environments for Elderly Care
* Snippet Based Trajectory Statistics Histograms for Assistive Technologies
* System for Assisting the Visually Impaired in Localization and Grasp of Desired Objects, A
* Vision Correcting Displays Based on Inverse Blurring and Aberration Compensation
* Vision-Based SLAM and Moving Objects Tracking for the Perceptual Support of a Smart Walker Platform
* Visual Interaction Including Biometrics Information for a Socially Assistive Robotic Platform
* Visual SLAM System on Mobile Robot Supporting Localization Services to Visually Impaired People, A
* Way to Go! Detecting Open Areas Ahead of a Walking Person
* Wearable RGBD Indoor Navigation System for the Blind
39 for ACVR14

ACVR15 * *International Workshop on Assistive Computer Vision and Robotics
* Accurate Human-Limb Segmentation in RGB-D Images for Intelligent Mobility Assistance Robots
* Automatic Emotion Recognition in Robot-Children Interaction for ASD Treatment
* Deep Learning of Mouth Shapes for Sign Language
* Estimating Body Pose of Infants in Depth Images Using Random Ferns
* Evaluating Real-Time Mirroring of Head Gestures Using Smart Glasses
* Evaluation of Supervised, Novelty-Based and Hybrid Approaches to Fall Detection Using Silmee Accelerometer Data, An
* Fast and Accurate Eye Tracker Using Stroboscopic Differential Lighting, A
* Fine-Grained Product Class Recognition for Assisted Shopping
* Head Nod Detection from a Full 3D Model
* Improving Indoor Mobility of the Visually Impaired with Depth-Based Spatial Sound
* Intuitive Mobility Aid for Visually Impaired People Based on Stereo Vision, An
* Pedestrian Detection via Mixture of CNN Experts and Thresholded Aggregated Channel Features
* Quantifying Levodopa-Induced Dyskinesia Using Depth Camera
* Recognizing Personal Contexts from Egocentric Images
* Saliency Detection Using Quaternion Sparse Reconstruction
* Single-Frame Indexing for 3D Hand Pose Estimation
* Stereo Vision Approach for Cooperative Robotic Movement Therapy, A
* Structured Committee for Food Recognition, A
* Summarizing While Recording: Context-Based Highlight Detection for Egocentric Videos
* Visual Attention-Guided Approach to Monitoring of Medication Dispensing Using Multi-location Feature Saliency Patterns
21 for ACVR15

ACVR16 * *International Workshop on Assistive Computer Vision and Robotics
* 3D Human Posture Approach for Activity Recognition Based on Depth Camera, A
* Automatic Video Captioning via Multi-channel Sequential Encoding
* Brazilian Sign Language Recognition Using Kinect
* Combining Human Body Shape and Pose Estimation for Robust Upper Body Tracking Using a Depth Sensor
* Deep Eye-CU (DECU): Summarization of Patient Motion in the ICU
* Evaluation of Infants with Spinal Muscular Atrophy Type-I Using Convolutional Neural Networks
* Fall Detection Based on Depth-Data in Practice
* Feasibility Analysis of Eye Typing with a Standard Webcam
* Human Interaction Prediction Using Deep Temporal Features
* Human Joint Angle Estimation and Gesture Recognition for Assistive Robotic Vision
* Human-Drone-Interaction: A Case Study to Investigate the Relation Between Autonomy and User Experience
* Integrated Framework for 24-hours Fire Detection, An
* Interactive Multimedia System for Treating Autism Spectrum Disorder, An
* ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind
* Learning and Detecting Objects with a Mobile Robot to Assist Older Adults in Their Homes
* Mobile Mapping and Visualization of Indoor Structures to Simplify Scene Understanding and Location Awareness
* Multi-level Net: A Visual Saliency Prediction Model
* Perfect Accuracy with Human-in-the-Loop Object Detection
* Real-Time Vehicular Vision System to Seamlessly See-Through Cars, A
* Smart Toothbrushes: Inertial Measurement Sensors Fusion with Visual Tracking
* Solving Rendering Issues in Realistic 3D Immersion for Visual Rehabilitation
* Technological Framework to Support Standardized Protocols for the Diagnosis and Assessment of ASD, A
* Using Computer Vision to See
* Validation of Automated Mobility Assessment Using a Single 3D Sensor
* Vision-Based SLAM Navigation for Vibro-Tactile Human-Centered Indoor Guidance
* Visual and Human-Interpretable Feedback for Assisting Physical Activity
27 for ACVR16

ACVR17 * *International Workshop on Assistive Computer Vision and Robotics
* Adaptive Binarization for Weakly Supervised Affordance Segmentation
* BEHAVE: Behavioral Analysis of Visual Events for Assisted Living Scenarios
* Computer Vision Based Approach for Understanding Emotional Involvements in Children with Autism Spectrum Disorders, A
* Computer Vision for the Visually Impaired: the Sound of Vision System
* Depth and Motion Cues with Phosphene Patterns for Prosthetic Vision
* Diabetes60: Inferring Bread Units From Food Images Using Fully Convolutional Neural Networks
* DSD: Depth Structural Descriptor for Edge-Based Assistive Navigation
* Estimating Position & Velocity in 3D Space from Monocular Video Sequences Using a Deep Neural Network
* Improved Strategies for HPE Employing Learning-by-Synthesis Approaches
* Inertial-Vision: Cross-Domain Knowledge Transfer for Wearable Sensors
* Innovative Salient Object Detection Using Center-Dark Channel Prior, An
* Long Short-Term Memory Convolutional Neural Network for First-Person Vision Activity Recognition, A
* Mind the Gap: Virtual Shorelines for Blind and Partially Sighted People
* Postural Assessment in Dentistry Based on Multiple Markers Tracking
* Recurrent Assistance: Cross-Dataset Training of LSTMs on Kitchen Tasks
* Robust Human Pose Tracking For Realistic Service Robot Applications
* Seeing Without Sight: An Automatic Cognition System Dedicated to Blind and Visually Impaired People
* Shared Autonomy Approach for Wheelchair Navigation Based on Learned User Preferences, A
* To Veer or Not to Veer: Learning from Experts How to Stay Within the Crosswalk
* Use of Thermal Point Cloud for Thermal Comfort Measurement and Human Pose Estimation in Robotic Monitoring
* Using Technology Developed for Autonomous Cars to Help Navigate Blind People
* Vision-Based Fallen Person Detection for the Elderly
* Vision-Based System for In-Bed Posture Tracking, A
* Wearable Assistive Technology for the Visually Impaired with Door Knob Detection and Real-Time Feedback for Hand-to-Handle Manipulation, A
25 for ACVR17

ACVR18 * *International Workshop on Assistive Computer Vision and Robotics
* Analysis of the Effect of Sensors for End-to-End Machine Learning Odometry
* ASSIST: Personalized Indoor Navigation via Multimodal Sensors and High-Level Semantic Information
* Chasing Feet in the Wild: A Proposed Egocentric Motion-Aware Gait Assessment Tool
* Comparing Methods for Assessment of Facial Dynamics in Patients with Major Neurocognitive Disorders
* Computer Vision for Medical Infant Motion Analysis: State of the Art and RGB-D Data Set
* Deep Execution Monitor for Robot Assistive Tasks
* Deep Learning for Assistive Computer Vision
* Empirical Study Towards Understanding How Deep Convolutional Nets Recognize Falls, An
* Human-Computer Interaction Approaches for the Assessment and the Practice of the Cognitive Capabilities of Elderly People
* Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments
* RAMCIP Robot: A Personal Robotic Assistant: Demonstration of a Complete Framework
* Recovering 6D Object Pose: A Review and Multi-Modal Analysis
* Vision Augmented Robot Feeding
14 for ACVR18

ACVR19 * *International Workshop on Assistive Computer Vision and Robotics
* Active 3D Classification of Multiple Objects in Cluttered Scenes
* Deep Learning Based Wearable Assistive System for Visually Impaired People
* Deep Learning Performance in the Presence of Significant Occlusions: An Intelligent Household Refrigerator Case
* Dynamic Subtitles: A Multimodal Video Accessibility Enhancement Dedicated to Deaf and Hearing Impaired Users
* Forced Spatial Attention for Driver Foot Activity Classification
* Home-Based Physical Therapy with an Interactive Computer Vision System
* Joint Trajectory and Fatigue Analysis in Wheelchair Users
* Learning to Navigate Robotic Wheelchairs from Demonstration: Is Training in Simulation Viable?
* Object Captioning and Retrieval with Natural Language
* Realistic Face-to-Face Conversation System Based on Deep Neural Networks, A
* Salient Contour-Aware Based Twice Learning Strategy for Saliency Detection
* Social and Scene-Aware Trajectory Prediction in Crowded Spaces
* Street Crossing Aid Using Light-Weight CNNs for the Visually Impaired
* Video Indexing Using Face Appearance and Shot Transition Detection
15 for ACVR19

ACVR21 * *International Workshop on Assistive Computer Vision and Robotics
* Audi-Exchange: AI-Guided Hand-based Actions to Assist Human-Human Interactions for the Blind and the Visually Impaired
* Deep Embeddings-based Place Recognition Robust to Motion Blur
* Efficient Search in a Panoramic Image Database for Long-term Visual Localization
* Exploiting Egocentric Vision on Shopping Cart for Out-Of-Stock Detection in Retail Environments
* FrankMocap: A Monocular 3D Whole-Body Pose Estimation System via Regression and Integration
* HIDA: Towards Holistic Indoor Understanding for the Visually Impaired via Semantic Instance Segmentation with a Wearable Solid-State LiDAR Sensor
* Optical Braille Recognition Using Object Detection Neural Network
* ORB-SLAM with Near-infrared images and Optical Flow data
* ToFNest: Efficient normal estimation for time-of-flight depth cameras
* Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World
* Virtual Touch: Computer Vision Augmented Touch-Free Scene Exploration for the Blind or Visually Impaired
12 for ACVR21

ACVR22 * *International Workshop on Assistive Computer Vision and Robotics
* Augmenting Simulation Data with Sensor Effects for Improved Domain Transfer
* Cross-domain Representation Learning for Clothes Unfolding in Robot-assisted Dressing
* Depth-based In-bed Human Pose Estimation with Synthetic Dataset Generation and Deep Keypoint Estimation
* Detect and Approach: Close-range Navigation Support for People with Blindness and Low Vision
* Fused Multilayer Layer-CAM Fine-grained Spatial Feature Supervision for Surgical Phase Classification Using CNNs
* Interactive Multimodal Robot Dialog Using Pointing Gesture Recognition
* LocaliseBot: Multi-view 3D Object Localisation with Differentiable Rendering for Robot Grasping
* Matching Multiple Perspectives for Efficient Representation Learning
* Multi-modal Depression Estimation Based on Sub-attentional Fusion
* Multi-Scale Motion-Aware Module for Video Action Recognition
* Representation Learning for Point Clouds with Variational Autoencoders
* Tele-EvalNet: A Low-cost, Teleconsultation System for Home-Based Rehabilitation of Stroke Survivors Using Multiscale CNN-ConvLSTM Architecture
* Towards the Computational Assessment of the Conservation Status of a Habitat
14 for ACVR22

ACVR23 * *International Workshop on Assistive Computer Vision and Robotics
* Affordance segmentation of hand-occluded containers from exocentric images
* Autonomous mobile robot for automatic out of stock detection in a supermarket
* Continuous Hand Gesture Recognition for Human-Robot Collaborative Assembly
* Enhancing Human-Robot Collaborative Object Search through Human Behavior Observation and Dialog
* FewFaceNet: A Lightweight Few-Shot Learning-based Incremental Face Authentication for Edge Cameras
* From Scarcity to Understanding: Transfer Learning for the Extremely Low Resource Irish Sign Language
* IFPNet: Integrated Feature Pyramid Network with Fusion Factor for Lane Detection
* Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse
* Learnt Contrastive Concept Embeddings for Sign Recognition
* Modeling Visual Impairments with Artificial Neural Networks: a Review
* Multi-Camera 3D Position Estimation using Conditional Random Field
* Multimodal Error Correction with Natural Language and Pointing Gestures
* New Dataset for End-to-End Sign Language Translation: The Greek Elementary School Dataset, A
* Open Scene Understanding: Grounded Situation Recognition Meets Segment Anything for Helping People with Visual Impairments
* Personalized Monitoring in Home Healthcare: An Assistive System for Post Hip Replacement Rehabilitation
* Real-Time Optimisation-Based Path Planning for Visually Impaired People in Dynamic Environments
* Repetition-aware Image Sequence Sampling for Recognizing Repetitive Human Actions
* SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction
* Towards estimation of human intent in assistive robotic teleoperation using kinaesthetic and visual feedback
* Vision-Based Treatment Localization with Limited Data: Automated Documentation of Military Emergency Medical Procedures
* VLMAH: Visual-Linguistic Modeling of Action History for Effective Action Anticipation
22 for ACVR23
