Stikic, M.[Maja],
Larlus, D.[Diane],
Ebert, S.[Sandra],
Schiele, B.[Bernt],
Weakly Supervised Recognition of Daily Life Activities with Wearable
Sensors,
PAMI(33), No. 12, December 2011, pp. 2521-2537.
IEEE DOI
1110
Learning to reduce annotation effort.
BibRef
Jiang, Y.F.[Yi-Fei],
Li, D.[Du],
Lv, Q.[Qin],
Thinking Fast and Slow: An Approach to Energy-Efficient Human Activity
Recognition on Mobile Devices,
AIMag(34), No. 2, Summer 2013, pp. 48-66.
WWW Link.
1309
BibRef
Guo, D.Y.[Dong-Yan],
Tang, J.H.[Jin-Hui],
Cui, Y.[Ying],
Ding, J.[Jundi],
Zhao, C.X.[Chun-Xia],
Saliency-based content-aware lifestyle image mosaics,
JVCIR(26), No. 1, 2015, pp. 192-199.
Elsevier DOI
1502
Image mosaic
BibRef
Betancourt, A.[Alejandro],
Morerio, P.,
Regazzoni, C.S.,
Rauterberg, M.,
The Evolution of First Person Vision Methods: A Survey,
CirSysVideo(25), No. 5, May 2015, pp. 744-760.
IEEE DOI
1505
Survey, Egocentric. Cameras
BibRef
Yan, Y.[Yan],
Ricci, E.[Elisa],
Liu, G.[Gaowen],
Sebe, N.[Nicu],
Egocentric Daily Activity Recognition via Multitask Clustering,
IP(24), No. 10, October 2015, pp. 2984-2995.
IEEE DOI
1507
BibRef
Earlier:
Recognizing Daily Activities from First-Person Videos with Multi-task
Clustering,
ACCV14(IV: 522-537).
Springer DOI
1504
Algorithm design and analysis
See also Multitask Linear Discriminant Analysis for View Invariant Action Recognition.
BibRef
Alletto, S.[Stefano],
Serra, G.[Giuseppe],
Calderara, S.[Simone],
Cucchiara, R.[Rita],
Understanding social relationships in egocentric vision,
PR(48), No. 12, 2015, pp. 4082-4096.
Elsevier DOI
1509
Egocentric vision
BibRef
Alletto, S.[Stefano],
Serra, G.[Giuseppe],
Cucchiara, R.[Rita],
Video registration in egocentric vision under day and night
illumination changes,
CVIU(157), No. 1, 2017, pp. 274-283.
Elsevier DOI
1704
Video registration
BibRef
Lu, C.W.[Ce-Wu],
Liao, R.J.[Ren-Jie],
Jia, J.Y.[Jia-Ya],
Personal object discovery in first-person videos,
IP(24), No. 12, December 2015, pp. 5789-5799.
IEEE DOI
1512
cameras
BibRef
Buso, V.[Vincent],
González-Díaz, I.[Iván],
Benois-Pineau, J.[Jenny],
Goal-oriented top-down probabilistic visual attention model for
recognition of manipulated objects in egocentric videos,
SP:IC(39, Part B), No. 1, 2015, pp. 418-431.
Elsevier DOI
1512
BibRef
And:
Object recognition with top-down visual attention modeling for
behavioral studies,
ICIP15(4431-4435)
IEEE DOI
1512
Saliency maps.
Egocentric Vision
BibRef
González-Díaz, I.[Iván],
Buso, V.[Vincent],
Benois-Pineau, J.[Jenny],
Perceptual modeling in the problem of active object recognition in
visual scenes,
PR(56), No. 1, 2016, pp. 129-141.
Elsevier DOI
1604
Perceptual modeling
BibRef
Karaman, S.[Svebor],
Benois-Pineau, J.[Jenny],
Megret, R.[Remi],
Dovgalecs, V.[Vladislavs],
Dartigues, J.F.[Jean-Francois],
Gaestel, Y.[Yann],
Human Daily Activities Indexing in Videos from Wearable Cameras for
Monitoring of Patients with Dementia Diseases,
ICPR10(4113-4116).
IEEE DOI
1008
BibRef
Pinquier, J.[Julien],
Karaman, S.[Svebor],
Letoupin, L.[Laetitia],
Guyot, P.[Patrice],
Megret, R.[Remi],
Benois-Pineau, J.[Jenny],
Gaestel, Y.[Yann],
Dartigues, J.F.[Jean-Francois],
Strategies for multiple feature fusion with Hierarchical HMM:
Application to activity recognition from wearable audiovisual sensors,
ICPR12(3192-3195).
WWW Link.
1302
BibRef
Boujut, H.[Hugo],
Benois-Pineau, J.[Jenny],
Megret, R.[Remi],
Fusion of Multiple Visual Cues for Visual Saliency Extraction from
Wearable Camera Settings with Strong Motion,
Concept12(III: 436-445).
Springer DOI
1210
BibRef
Stoian, A.[Andrei],
Ferecatu, M.[Marin],
Benois-Pineau, J.[Jenny],
Crucianu, M.[Michel],
Fast Action Localization in Large-Scale Video Archives,
CirSysVideo(26), No. 10, October 2016, pp. 1917-1930.
IEEE DOI
1610
BibRef
Earlier:
Scalable action localization with kernel-space hashing,
ICIP15(257-261)
IEEE DOI
1512
Histograms.
Action localization
BibRef
Rituerto, A.[Alejandro],
Modeling the environment with egocentric vision systems,
ELCVIA(14), No. 3, 2015, pp. xx-yy.
DOI Link
1601
Thesis summary.
BibRef
Rituerto, A.[Alejandro],
Murillo, A.C.[Ana C.],
Guerrero, J.J.[José J.],
3D Layout Propagation to Improve Object Recognition in Egocentric
Videos,
ACVR14(839-852).
Springer DOI
1504
BibRef
Hong, J.H.[Jin-Hyuk],
Ramos, J.,
Dey, A.K.,
Toward Personalized Activity Recognition Systems With a
Semipopulation Approach,
HMS(46), No. 1, February 2016, pp. 101-112.
IEEE DOI
1602
Bayes methods
BibRef
Damen, D.[Dima],
Leelasawassuk, T.[Teesid],
Mayol-Cuevas, W.W.[Walterio W.],
You-Do, I-Learn: Egocentric unsupervised discovery of objects and
their modes of interaction towards video-based guidance,
CVIU(149), No. 1, 2016, pp. 98-112.
Elsevier DOI
1606
Video guidance
BibRef
Lagunes-Fortiz, M.[Miguel],
Damen, D.[Dima],
Mayol-Cuevas, W.W.[Walterio W.],
Instance-level Object Recognition Using Deep Temporal Coherence,
ISVC18(274-285).
Springer DOI
1811
BibRef
Damen, D.[Dima],
Haines, O.[Osian],
Leelasawassuk, T.[Teesid],
Calway, A.D.[Andrew D.],
Mayol-Cuevas, W.W.[Walterio W.],
Multi-User Egocentric Online System for Unsupervised Assistance on
Object Usage,
ACVR14(481-492).
Springer DOI
1504
BibRef
And: A1, A3, A2, A4, A5:
You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of
Interaction from Multi-User Egocentric Video,
BMVC14(xx-yy).
HTML Version.
1410
BibRef
Abebe, G.[Girmaw],
Cavallaro, A.[Andrea],
Parra, X.[Xavier],
Robust multi-dimensional motion features for first-person vision
activity recognition,
CVIU(149), No. 1, 2016, pp. 229-248.
Elsevier DOI
1606
Human activity recognition
BibRef
Zhu, Z.,
Satizábal, H.F.,
Blanke, U.,
Perez-Uribe, A.,
Tröster, G.,
Naturalistic Recognition of Activities and Mood Using Wearable
Electronics,
AffCom(7), No. 3, July 2016, pp. 272-285.
IEEE DOI
1609
Acceleration
BibRef
Gutierrez-Gomez, D.[Daniel],
Guerrero, J.J.,
True scaled 6 DoF egocentric localisation with monocular wearable
systems,
IVC(52), No. 1, 2016, pp. 178-194.
Elsevier DOI
1609
Monocular SLAM
BibRef
Singh, S.[Suriya],
Arora, C.[Chetan],
Jawahar, C.V.,
Trajectory aligned features for first person action recognition,
PR(62), No. 1, 2017, pp. 45-55.
Elsevier DOI
1705
BibRef
Earlier:
First Person Action Recognition Using Deep Learned Descriptors,
CVPR16(2620-2628)
IEEE DOI
1612
Action and activity recognition
BibRef
Vaca-Castano, G.[Gonzalo],
Das, S.[Samarjit],
Sousa, J.P.[Joao P.],
da Vitoria Lobo, N.[Niels],
Shah, M.[Mubarak],
Improved scene identification and object detection on egocentric
vision of daily activities,
CVIU(156), No. 1, 2017, pp. 92-103.
Elsevier DOI
1702
Scene classification
BibRef
Vaca-Castano, G.[Gonzalo],
da Vitoria Lobo, N.[Niels],
Shah, M.[Mubarak],
Holistic object detection and image understanding,
CVIU(181), 2019, pp. 1-13.
Elsevier DOI
1903
Image representation, Object detection
BibRef
Dimiccoli, M.[Mariella],
Bolaños, M.[Marc],
Talavera, E.[Estefania],
Aghaei, M.[Maedeh],
Nikolov, S.G.[Stavri G.],
Radeva, P.[Petia],
SR-clustering: Semantic regularized clustering for egocentric photo
streams segmentation,
CVIU(155), No. 1, 2017, pp. 55-69.
Elsevier DOI
1702
Temporal segmentation
BibRef
del Molino, A.G.,
Tan, C.,
Lim, J.H.,
Tan, A.H.,
Summarization of Egocentric Videos: A Comprehensive Survey,
HMS(47), No. 1, February 2017, pp. 65-76.
IEEE DOI
1702
Survey, Egocentric Analysis. image segmentation
BibRef
Brutti, A.[Alessio],
Cavallaro, A.[Andrea],
Online Cross-Modal Adaptation for Audio-Visual Person Identification
With Wearable Cameras,
HMS(47), No. 1, February 2017, pp. 40-51.
IEEE DOI
1702
audio-visual systems
BibRef
Brutti, A.[Alessio],
Cavallaro, A.[Andrea],
Unsupervised Cross-Modal Deep-Model Adaptation for Audio-Visual
Re-identification with Wearable Cameras,
CVAVM17(438-445)
IEEE DOI
1802
Adaptation models, Cameras, Feature extraction, Labeling,
Speech recognition, Training, Visualization
BibRef
Timmons, A.C.,
Chaspari, T.,
Han, S.C.,
Perrone, L.,
Narayanan, S.S.,
Margolin, G.,
Using Multimodal Wearable Technology to Detect Conflict among Couples,
Computer(50), No. 3, March 2017, pp. 50-59.
IEEE DOI
1704
Behavioral sciences
BibRef
Conti, F.[Francesco],
Palossi, D.[Daniele],
Andri, R.[Renzo],
Magno, M.[Michele],
Benini, L.[Luca],
Accelerated Visual Context Classification on a Low-Power Smartwatch,
HMS(47), No. 1, February 2017, pp. 19-30.
IEEE DOI
1702
cameras
BibRef
Ortis, A.[Alessandro],
Farinella, G.M.[Giovanni M.],
d'Amico, V.[Valeria],
Addesso, L.[Luca],
Torrisi, G.[Giovanni],
Battiato, S.[Sebastiano],
Organizing egocentric videos of daily living activities,
PR(72), No. 1, 2017, pp. 207-218.
Elsevier DOI
1708
First person vision
BibRef
Wannenburg, J.,
Malekian, R.,
Physical Activity Recognition From Smartphone Accelerometer Data for
User Context Awareness Sensing,
SMCS(47), No. 12, December 2017, pp. 3142-3149.
IEEE DOI
1712
Accelerometers, Feature extraction, Hidden Markov models,
Legged locomotion, Machine learning algorithms, Sensors, Servers,
smartphone
BibRef
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Battiato, S.[Sebastiano],
Recognizing Personal Locations From Egocentric Videos,
HMS(47), No. 1, February 2017, pp. 6-18.
IEEE DOI
1702
BibRef
Earlier:
Temporal Segmentation of Egocentric Videos to Highlight Personal
Locations of Interest,
Egocentric16(I: 474-489).
Springer DOI
1611
BibRef
Earlier:
Recognizing Personal Contexts from Egocentric Images,
ACVR15(393-401)
IEEE DOI
1602
Biomedical monitoring
BibRef
Kazantzidis, I.[Ioannis],
Florez-Revuelta, F.[Francisco],
Dequidt, M.[Mickael],
Hill, N.[Natasha],
Nebel, J.C.[Jean-Christophe],
Vide-omics: A genomics-inspired paradigm for video analysis,
CVIU(166), No. 1, 2018, pp. 28-40.
Elsevier DOI
1712
Genomics: where variability is the expected norm.
BibRef
Shrestha, P.[Prakash],
Saxena, N.[Nitesh],
An Offensive and Defensive Exposition of Wearable Computing,
Surveys(50), No. 6, January 2018, pp. Article No 92.
DOI Link
1801
Wearable computing is rapidly getting deployed in many commercial,
medical, and personal domains of day-to-day life.
BibRef
Bolaños, M.[Marc],
Peris, Á.[Álvaro],
Casacuberta, F.[Francisco],
Soler, S.[Sergi],
Radeva, P.[Petia],
Egocentric video description based on temporally-linked sequences,
JVCIR(50), 2018, pp. 205-216.
Elsevier DOI
1802
Egocentric vision, Video description, Deep learning, Multi-modal learning
BibRef
Meditskos, G.[Georgios],
Plans, P.M.[Pierre-Marie],
Stavropoulos, T.G.[Thanos G.],
Benois-Pineau, J.[Jenny],
Buso, V.[Vincent],
Kompatsiaris, I.[Ioannis],
Multi-modal activity recognition from egocentric vision, semantic
enrichment and lifelogging applications for the care of dementia,
JVCIR(51), 2018, pp. 169-190.
Elsevier DOI
1802
Instrumental activity recognition, Egocentric camera,
Mechanical measurements, Visual cues, Ontologies, Semantic knowledge graphs
BibRef
Yao, R.[Rui],
Lin, G.S.[Guo-Sheng],
Shi, Q.F.[Qin-Feng],
Ranasinghe, D.C.[Damith C.],
Efficient dense labelling of human activity sequences from wearables
using fully convolutional networks,
PR(78), 2018, pp. 252-266.
Elsevier DOI
1804
Human activity recognition,
Time series sequence classification, Fully convolutional networks
BibRef
Noor, S.[Shaheena],
Uddin, V.[Vali],
Using context from inside-out vision for improved activity recognition,
IET-CV(12), No. 3, April 2018, pp. 276-287.
DOI Link
1804
BibRef
Kuncheva, L.I.[Ludmila I.],
Yousefi, P.[Paria],
Almeida, J.[Jurandy],
Edited nearest neighbour for selecting keyframe summaries of
egocentric videos,
JVCIR(52), 2018, pp. 118-130.
Elsevier DOI
1804
BibRef
And:
Comparing keyframe summaries of egocentric videos:
Closest-to-centroid baseline,
IPTA17(1-6)
IEEE DOI
1804
Keyframe summary, Nearest neighbour classifier,
Instance selection, Egocentric video, Feature representations.
feature extraction, image classification, image colour analysis,
image representation, image segmentation, image texture,
Video summarisation
BibRef
Silva, M.M.[Michel M.],
Ramos, W.L.S.[Washington L.S.],
Chamone, F.C.[Felipe C.],
Ferreira, J.P.K.[João P.K.],
Campos, M.F.M.[Mario F.M.],
Nascimento, E.R.[Erickson R.],
Making a long story short: A multi-importance fast-forwarding
egocentric videos with the emphasis on relevant objects,
JVCIR(53), 2018, pp. 55-64.
Elsevier DOI
1805
Semantic information, First-person video, Fast-forward, Egocentric stabilization
BibRef
Kwon, H.[Heeseung],
Kim, Y.[Yeonho],
Lee, J.S.[Jin S.],
Cho, M.[Minsu],
First Person Action Recognition via Two-stream ConvNet with Long-term
Fusion Pooling,
PRL(112), 2018, pp. 161-167.
Elsevier DOI
1809
First person action recognition, Action recognition,
Convolutional neural network, Feature pooling
BibRef
Yu, H.K.[Hong-Kai],
Yu, H.Z.[Hao-Zhou],
Guo, H.[Hao],
Simmons, J.[Jeff],
Zou, Q.[Qin],
Feng, W.[Wei],
Wang, S.[Song],
Multiple human tracking in wearable camera videos with
informationless intervals,
PRL(112), 2018, pp. 104-110.
Elsevier DOI
1809
Wearable cameras, Multiple human tracking, Informationless interval
BibRef
Yonetani, R.[Ryo],
Kitani, K.M.[Kris M.],
Sato, Y.[Yoichi],
Ego-Surfing: Person Localization in First-Person Videos Using
Ego-Motion Signatures,
PAMI(40), No. 11, November 2018, pp. 2749-2761.
IEEE DOI
1810
BibRef
Earlier:
Recognizing Micro-Actions and Reactions from Paired Egocentric Videos,
CVPR16(2629-2638)
IEEE DOI
1612
BibRef
Earlier:
Ego-surfing first person videos,
CVPR15(5445-5454)
IEEE DOI
1510
Videos, Cameras, Observers, Correlation, Trajectory, Magnetic heads,
Face, First-person video, people identification, dense trajectory
BibRef
Yagi, T.,
Mangalam, K.,
Yonetani, R.,
Sato, Y.,
Future Person Localization in First-Person Videos,
CVPR18(7593-7602)
IEEE DOI
1812
Videos, Cameras, Legged locomotion, Task analysis, Visualization,
History, Trajectory
BibRef
Hudson, F.,
Clark, C.,
Wearables and Medical Interoperability: The Evolving Frontier,
Computer(51), No. 9, September 2018, pp. 86-90.
IEEE DOI
1810
Wearable computing, Medical devices, Internet of Things, Implants,
Standards, standards, medical devices, wearables,
TIPPSS
BibRef
Ardeshir, S.[Shervin],
Borji, A.[Ali],
An exocentric look at egocentric actions and vice versa,
CVIU(171), 2018, pp. 61-68.
Elsevier DOI
1812
Egocentric vision, Transfer learning, Action recognition
BibRef
Greene, D.,
Patterson, G.,
Can we trust computers with body-cam video? Police departments are
being led to believe AI will help, but they should be wary,
Spectrum(55), No. 12, December 2018, pp. 36-48.
IEEE DOI
1812
Artificial intelligence, Law enforcement, Axons, Cameras, Computers,
Birds, Face
BibRef
Guo, M.,
Wang, Z.,
Yang, N.,
Li, Z.,
An, T.,
A Multisensor Multiclassifier Hierarchical Fusion Model Based on
Entropy Weight for Human Activity Recognition Using Wearable Inertial
Sensors,
HMS(49), No. 1, February 2019, pp. 105-111.
IEEE DOI
1901
Feature extraction, Biomedical monitoring, Wearable sensors,
Activity recognition, Monitoring, Sensor fusion, Body area network,
wearable system
BibRef
Ardeshir, S.[Shervin],
Borji, A.[Ali],
Egocentric Meets Top-View,
PAMI(41), No. 6, June 2019, pp. 1353-1366.
IEEE DOI
1905
Combine the two.
Videos, Cameras, Surveillance, Task analysis,
Visualization, Object tracking, Egocentric vision,
person re-identification
BibRef
Cabrera-Quiros, L.,
Hung, H.,
A Hierarchical Approach for Associating Body-Worn Sensors to Video
Regions in Crowded Mingling Scenarios,
MultMed(21), No. 7, July 2019, pp. 1867-1879.
IEEE DOI
1906
Acceleration, Wearable sensors, Streaming media, Task analysis,
Cameras, Mingling, wearable sensor,
association
BibRef
Lu, M.L.[Min-Long],
Li, Z.N.[Ze-Nian],
Wang, Y.M.[Yue-Ming],
Pan, G.[Gang],
Deep Attention Network for Egocentric Action Recognition,
IP(28), No. 8, August 2019, pp. 3703-3713.
IEEE DOI
1907
cameras, gaze tracking, image motion analysis, neural nets,
video signal processing, video frames, temporal network,
LSTM
BibRef
Ning, Z.[Zhuang],
Zhang, Q.A.[Qi-Ang],
Shen, Y.[Yang],
Li, Z.F.[Ze-Fan],
Ni, B.B.[Bing-Bing],
Zhang, W.J.[Wen-Jun],
Long term activity prediction in first person viewpoint,
PRL(125), 2019, pp. 708-714.
Elsevier DOI
1909
Egocentric video, Prediction, Gaze, Attention, Asynchronous
BibRef
Cruz, S.[Sergio],
Chan, A.B.[Antoni B.],
Is that my hand? An egocentric dataset for hand disambiguation,
IVC(89), 2019, pp. 131-143.
Elsevier DOI
1909
Egocentric perspective, Hand detection
BibRef
Tang, Y.,
Wang, Z.,
Lu, J.,
Feng, J.,
Zhou, J.,
Multi-Stream Deep Neural Networks for RGB-D Egocentric Action
Recognition,
CirSysVideo(29), No. 10, October 2019, pp. 3001-3015.
IEEE DOI
1910
cameras, gesture recognition, image motion analysis,
image recognition, image sequences,
deep learning
BibRef
Ye, J.[Jun],
Qi, G.J.[Guo-Jun],
Zhuang, N.F.[Nai-Fan],
Hu, H.[Hao],
Hua, K.A.[Kien A.],
Learning Compact Features for Human Activity Recognition Via
Probabilistic First-Take-All,
PAMI(42), No. 1, January 2020, pp. 126-139.
IEEE DOI
1912
Feature extraction, Probabilistic logic,
Recurrent neural networks, Heuristic algorithms,
learning to hash
BibRef
Purwanto, D.,
Chen, Y.,
Fang, W.,
First-Person Action Recognition With Temporal Pooling and
Hilbert-Huang Transform,
MultMed(21), No. 12, December 2019, pp. 3122-3135.
IEEE DOI
1912
Videos, Feature extraction, Trajectory, Transforms, Cameras,
Adaptation models, Training, Hilbert-Huang transform,
action recognition
BibRef
Rhinehart, N.,
Kitani, K.M.[Kris M.],
First-Person Activity Forecasting from Video with Online Inverse
Reinforcement Learning,
PAMI(42), No. 2, February 2020, pp. 304-317.
IEEE DOI
2001
BibRef
Earlier:
First-Person Activity Forecasting with Online Inverse Reinforcement
Learning,
ICCV17(3716-3725)
IEEE DOI
1802
Award, Marr Prize, HM.
BibRef
Earlier:
Learning Action Maps of Large Environments via First-Person Vision,
CVPR16(580-588)
IEEE DOI
1612
Forecasting, Task analysis, Predictive models, Trajectory, Cameras,
Learning (artificial intelligence), Visualization, online learning,
DARKO, Online Inverse Reinforcement Learning approach
BibRef
Guan, J.,
Yuan, Y.,
Kitani, K.M.,
Rhinehart, N.,
Generative Hybrid Representations for Activity Forecasting With
No-Regret Learning,
CVPR20(170-179)
IEEE DOI
2008
Trajectory, Forecasting, Predictive models, Entropy, Videos,
Hybrid power systems, Adaptation models
BibRef
Ragusa, F.[Francesco],
Furnari, A.[Antonino],
Battiato, S.[Sebastiano],
Signorello, G.[Giovanni],
Farinella, G.M.[Giovanni Maria],
EGO-CH: Dataset and fundamental tasks for visitors behavioral
understanding using egocentric vision,
PRL(131), 2020, pp. 150-157.
Elsevier DOI
2004
Egocentric vision, First person vision, Localization,
Object detection, Object retrieval
BibRef
Kim, Y.J.[Ye-Ji],
Lee, D.G.[Dong-Gyu],
Lee, S.W.[Seong-Whan],
Three-stream fusion network for first-person interaction recognition,
PR(103), 2020, pp. 107279.
Elsevier DOI
2005
First-person vision, First-person interaction recognition,
Three-stream fusion network, Three-stream correlation fusion, Camera ego-motion
BibRef
Talavera, E.[Estefania],
Wuerich, C.[Carolin],
Petkov, N.[Nicolai],
Radeva, P.[Petia],
Topic modelling for routine discovery from egocentric photo-streams,
PR(104), 2020, pp. 107330.
Elsevier DOI
2005
BibRef
Earlier: A1, A3, A4, Only:
Unsupervised Routine Discovery in Egocentric Photo-Streams,
CAIP19(I:576-588).
Springer DOI
1909
Routine, Egocentric vision, Lifestyle, Behaviour analysis, Topic modelling
BibRef
Orlando, S.A.[Santi Andrea],
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Egocentric visitor localization and artwork detection in cultural
sites using synthetic data,
PRL(133), 2020, pp. 17-24.
Elsevier DOI
2005
Image-based localization, Artwork detection, Similarity search, Simulated data
BibRef
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Rolling-Unrolling LSTMs for Action Anticipation from First-Person
Video,
PAMI(43), No. 11, November 2021, pp. 4021-4036.
IEEE DOI
2110
Task analysis, Streaming media, Encoding,
Predictive models, Containers, Cameras, LSTM
BibRef
Sahu, A.[Abhimanyu],
Chowdhury, A.S.[Ananda S.],
Multiscale summarization and action ranking in egocentric videos,
PRL(133), 2020, pp. 256-263.
Elsevier DOI
2005
Egocentric video analysis, Hierarchical clustering,
Multiscale summary, Action ranking
BibRef
Sahu, A.[Abhimanyu],
Chowdhury, A.S.[Ananda S.],
First person video summarization using different graph
representations,
PRL(146), 2021, pp. 185-192.
Elsevier DOI
2105
First-person video, Center-surround model,
Spectral graph dissimilarity, Video similarity graph,
Edge inadmissibility measure
BibRef
Thomanek, R.,
Roschke, C.,
Zimmer, F.,
Rolletschke, T.,
Manthey, R.,
Vodel, M.,
Platte, B.,
Heinzig, M.,
Eibl, M.,
Hosel, C.,
Vogel, R.,
Ritter, M.,
Real-Time Activity Detection of Human Movement in Videos via
Smartphone Based on Synthetic Training Data,
WACVWS20(160-164)
IEEE DOI
2006
Videos, Databases, Animation, Real-time systems, Training,
Mobile handsets
BibRef
Huang, Y.,
Cai, M.,
Sato, Y.,
An Ego-Vision System for Discovering Human Joint Attention,
HMS(50), No. 4, August 2020, pp. 306-316.
IEEE DOI
2007
Videos, Task analysis, Visualization, Cameras, Image segmentation,
Proposals, Noise measurement, Dataset, egocentric vision,
joint attention
BibRef
Liu, T.S.[Tian-Shan],
Zhao, R.[Rui],
Xiao, J.[Jun],
Lam, K.M.[Kin-Man],
Progressive Motion Representation Distillation With Two-Branch
Networks for Egocentric Activity Recognition,
SPLetters(27), 2020, pp. 1320-1324.
IEEE DOI
2008
Optical imaging, Optical network units,
Optical signal processing, Optical losses, Training, Measurement,
motion representation
BibRef
Liu, T.S.[Tian-Shan],
Lam, K.M.[Kin-Man],
Zhao, R.[Rui],
Kong, J.[Jun],
Enhanced Attention Tracking With Multi-Branch Network for Egocentric
Activity Recognition,
CirSysVideo(32), No. 6, June 2022, pp. 3587-3602.
IEEE DOI
2206
Activity recognition, Cams, Videos, Feature extraction,
Optical imaging, Semantics, Egocentric activity recognition,
fine-grained hand-object interactions
BibRef
Liu, T.S.[Tian-Shan],
Lam, K.M.[Kin-Man],
Flow-guided Spatial Attention Tracking for Egocentric Activity
Recognition,
ICPR21(4303-4308)
IEEE DOI
2105
Integrated optics, Deep learning, Tracking, Fuses, Video sequences,
Activity recognition, Benchmark testing
BibRef
Qi, W.,
Su, H.,
Aliverti, A.,
A Smartphone-Based Adaptive Recognition and Real-Time Monitoring
System for Human Activities,
HMS(50), No. 5, October 2020, pp. 414-423.
IEEE DOI
2009
Feature extraction, Sensors, Accelerometers, Gyroscopes,
Real-time systems, Classification algorithms, Adaptive systems,
human activity recognition (HAR)
BibRef
Ismail, K.,
Özacar, K.,
Human Activity Recognition Based on Smartphone Sensor Data Using CNN,
SmartCityApp20(263-265).
DOI Link
2012
BibRef
Wu, Y.[Yu],
Zhu, L.C.[Lin-Chao],
Wang, X.H.[Xiao-Han],
Yang, Y.[Yi],
Wu, F.[Fei],
Learning to Anticipate Egocentric Actions by Imagination,
IP(30), 2021, pp. 1143-1152.
IEEE DOI
2012
Task analysis, Uncertainty, Predictive models, Visualization,
Recurrent neural networks, Image segmentation, Image recognition,
egocentric videos
BibRef
Oh, C.,
Cavallaro, A.,
View-Action Representation Learning for Active First-Person Vision,
CirSysVideo(31), No. 2, February 2021, pp. 480-491.
IEEE DOI
2102
Navigation, Cameras, Predictive models, Visualization, Task analysis,
Image reconstruction, Neural networks, Representation learning,
mapless navigation
BibRef
Gedik, E.,
Cabrera-Quiros, L.,
Martella, C.,
Englebienne, G.,
Hung, H.,
Towards Analyzing and Predicting the Experience of Live Performances
with Wearable Sensing,
AffCom(12), No. 1, January 2021, pp. 269-276.
IEEE DOI
2103
Sensors, Motion pictures, Physiology, Accelerometers, Couplings,
Appraisal, Atmospheric measurements, Human behavior, dance
BibRef
Edwards, J.,
Wearables-Fashion With a Purpose: A New Generation of Wearable
Devices Uses Signal Processing to Make Life Easier, Healthier, and
More Secure [Special Reports],
SPMag(38), No. 2, March 2021, pp. 15-136.
IEEE DOI
2103
Performance evaluation, Wearable computers, Mobile handsets,
Biomedical monitoring, Usability, Monitoring
BibRef
Sahu, A.,
Chowdhury, A.S.,
Together Recognizing, Localizing and Summarizing Actions in
Egocentric Videos,
IP(30), 2021, pp. 4330-4340.
IEEE DOI
2104
BibRef
Earlier:
Shot Level Egocentric Video Co-summarization,
ICPR18(2887-2892)
IEEE DOI
1812
Videos, Location awareness, Cameras, Feature extraction,
Streaming media, Sports, Trajectory,
fractional knapsack.
Visualization, Entropy, Computational modeling, Nash equilibrium,
Visual databases, Bipartite graph, Games, Co-summarization,
Bipartite graph matching
BibRef
Rueda, F.M.[Fernando Moya],
Fink, G.A.[Gernot A.],
From Human Pose to On-Body Devices for Human-Activity Recognition,
ICPR21(10066-10073)
IEEE DOI
2105
Performance evaluation, Transfer learning, Deep architecture,
Estimation, Benchmark testing, Activity recognition, Synchronization
BibRef
Rodin, I.[Ivan],
Furnari, A.[Antonino],
Mavroeidis, D.[Dimitrios],
Farinella, G.M.[Giovanni Maria],
Predicting the future from first person (egocentric) vision: A survey,
CVIU(211), 2021, pp. 103252.
Elsevier DOI
2110
Survey, Egocentric Video. First person vision, Egocentric vision, Future prediction, Anticipation
BibRef
Gökce, Z.[Zeynep],
Pehlivan, S.[Selen],
Temporal modelling of first-person actions using hand-centric verb
and object streams,
SP:IC(99), 2021, pp. 116436.
Elsevier DOI
2111
First-person vision, Egocentric vision, Action recognition,
Temporal models, RNN
BibRef
Damen, D.[Dima],
Doughty, H.[Hazel],
Farinella, G.M.[Giovanni Maria],
Furnari, A.[Antonino],
Kazakos, E.[Evangelos],
Ma, J.[Jian],
Moltisanti, D.[Davide],
Munro, J.[Jonathan],
Perrett, T.[Toby],
Price, W.[Will],
Wray, M.[Michael],
Rescaling Egocentric Vision: Collection, Pipeline and Challenges for
EPIC-KITCHENS-100,
IJCV(130), No. 1, January 2022, pp. 33-55.
Springer DOI
2201
Dataset, Egocentric Actions.
BibRef
Damen, D.[Dima],
Doughty, H.[Hazel],
Farinella, G.M.[Giovanni Maria],
Fidler, S.[Sanja],
Furnari, A.[Antonino],
Kazakos, E.[Evangelos],
Moltisanti, D.[Davide],
Munro, J.[Jonathan],
Perrett, T.[Toby],
Price, W.[Will],
Wray, M.[Michael],
Scaling Egocentric Vision: The EPIC-KITCHENS Dataset,
ECCV18(II: 753-771).
Springer DOI
1810
Dataset, Egocentric Actions.
BibRef
Aakur, S.N.[Sathyanarayanan N.],
Kundu, S.[Sanjoy],
Gunti, N.[Nikhil],
Knowledge guided learning: Open world egocentric action recognition
with zero supervision,
PRL(156), 2022, pp. 38-45.
Elsevier DOI
2205
Event understanding, Open world reasoning, Pattern Theory,
Egocentric activity understanding, Commonsense Knowledge
BibRef
Huang, Y.[Yi],
Yang, X.S.[Xiao-Shan],
Gao, J.Y.[Jun-Yu],
Xu, C.S.[Chang-Sheng],
Holographic Feature Learning of Egocentric-Exocentric Videos for
Multi-Domain Action Recognition,
MultMed(24), 2022, pp. 2273-2286.
IEEE DOI
2205
Videos, Feature extraction, Visualization, Task analysis,
Computational modeling, Target recognition, Prototypes,
action recognition
BibRef
Han, R.Z.[Rui-Ze],
Feng, W.[Wei],
Zhang, Y.J.[Yu-Jun],
Zhao, J.W.[Jie-Wen],
Wang, S.[Song],
Multiple Human Association and Tracking From Egocentric and
Complementary Top Views,
PAMI(44), No. 9, September 2022, pp. 5225-5242.
IEEE DOI
2208
Cameras, Videos, Collaboration, Graphical models,
Distribution functions, Trajectory, Optimization, egocentric perception
BibRef
Han, R.Z.[Rui-Ze],
Yan, H.M.[Hao-Min],
Li, J.C.[Jia-Cheng],
Wang, S.M.[Song-Miao],
Feng, W.[Wei],
Wang, S.[Song],
Panoramic Human Activity Recognition,
ECCV22(IV:244-261).
Springer DOI
2211
BibRef
Bianco, S.[Simone],
Napoletano, P.[Paolo],
Raimondi, A.[Alberto],
Rima, M.[Mirko],
U-WeAr: User Recognition on Wearable Devices through Arm Gesture,
HMS(52), No. 4, August 2022, pp. 713-724.
IEEE DOI
2208
Gesture recognition, Accelerometers, Inertial sensors,
Hidden Markov models, Gyroscopes, Task analysis, Servers,
user verification
BibRef
Xia, Y.W.[Yi-Wei],
Ma, J.[Jun],
Yu, C.Y.[Chu-Yue],
Ren, X.H.[Xun-Huan],
Antonovich, B.A.[Boriskevich Anatoliy],
Tsviatkou, V.Y.[Viktar Yurevich],
Recognition System Of Human Activities Based On Time-Frequency
Features Of Accelerometer Data,
ISCV22(1-5)
IEEE DOI
2208
Accelerometers, Support vector machines,
Microelectromechanical systems, Industries, WISDM dataset
BibRef
Lee, K.[Kyoungoh],
Park, Y.[Yeseung],
Huh, J.[Jungwoo],
Kang, J.[Jiwoo],
Lee, S.H.[Sang-Hoon],
Self-Updatable Database System Based on Human Motion Assessment
Framework,
CirSysVideo(32), No. 10, October 2022, pp. 7160-7176.
IEEE DOI
2210
Databases, Surveillance, Videos, Behavioral sciences, Semantics,
Motion measurement, Human motion assessment,
self-updatable database system
BibRef
Wu, R.J.[Ren-Jie],
Lee, B.G.[Boon Giin],
Pike, M.[Matthew],
Zhu, L.Z.[Lin-Zhen],
Chai, X.Q.[Xiao-Qing],
Huang, L.[Liang],
Wu, X.[Xian],
IOAM: A Novel Sensor Fusion-Based Wearable for Localization and
Mapping,
RS(14), No. 23, 2022, pp. xx-yy.
DOI Link
2212
BibRef
Li, H.X.[Hao-Xin],
Zheng, W.S.[Wei-Shi],
Zhang, J.G.[Jian-Guo],
Hu, H.F.[Hai-Feng],
Lu, J.W.[Ji-Wen],
Lai, J.H.[Jian-Huang],
Egocentric Action Recognition by Automatic Relation Modeling,
PAMI(45), No. 1, January 2023, pp. 489-507.
IEEE DOI
2212
Videos, Cameras, Task analysis, Solid modeling, Context modeling,
Location awareness, Feature extraction,
human-object interaction recognition
BibRef
Li, H.X.[Hao-Xin],
Cai, Y.J.[Yi-Jun],
Zheng, W.S.[Wei-Shi],
Deep Dual Relation Modeling for Egocentric Interaction Recognition,
CVPR19(7924-7933).
IEEE DOI
2002
BibRef
Dunnhofer, M.[Matteo],
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Micheloni, C.[Christian],
Visual Object Tracking in First Person Vision,
IJCV(131), No. 1, January 2023, pp. 259-283.
Springer DOI
2301
BibRef
Earlier:
Is First Person Vision Challenging for Object Tracking?,
VOT21(2698-2710)
IEEE DOI
2112
Visualization, Target tracking, Systematics,
Video sequences, Object detection, Benchmark testing
BibRef
Dhamanaskar, A.[Ameya],
Dimiccoli, M.[Mariella],
Corona, E.[Enric],
Pumarola, A.[Albert],
Moreno-Noguer, F.[Francesc],
Enhancing egocentric 3D pose estimation with third person views,
PR(138), 2023, pp. 109358.
Elsevier DOI
2303
3D pose estimation, Self-supervised learning, Egocentric vision
BibRef
Li, Y.[Yin],
Liu, M.[Miao],
Rehg, J.M.[James M.],
In the Eye of the Beholder: Gaze and Actions in First Person Video,
PAMI(45), No. 6, June 2023, pp. 6731-6747.
IEEE DOI
2305
BibRef
Earlier:
In the Eye of Beholder: Joint Learning of Gaze and Actions in First
Person Video,
ECCV18(VI: 639-655).
Springer DOI
1810
Convolution, Visualization, Cameras, Benchmark testing,
Stochastic processes, Gaze tracking, Action recognition,
video analysis
BibRef
Wang, X.H.[Xiao-Han],
Zhu, L.C.[Lin-Chao],
Wu, Y.[Yu],
Yang, Y.[Yi],
Symbiotic Attention for Egocentric Action Recognition With
Object-Centric Alignment,
PAMI(45), No. 6, June 2023, pp. 6605-6617.
IEEE DOI
2305
Feature extraction, Cognition, Symbiosis, Task analysis, Solid modeling,
Egocentric video analysis, action recognition, symbiotic attention
BibRef
Kapidis, G.[Georgios],
Poppe, R.[Ronald],
Veltkamp, R.C.[Remco C.],
Multi-Dataset, Multitask Learning of Egocentric Vision Tasks,
PAMI(45), No. 6, June 2023, pp. 6618-6630.
IEEE DOI
2305
Task analysis, Training, Feature extraction, Activity recognition,
Object detection, Annotations, Training data, Egocentric vision,
multitask learning
BibRef
Yu, H.Y.[Huang-Yue],
Cai, M.J.[Min-Jie],
Liu, Y.F.[Yun-Fei],
Lu, F.[Feng],
First- And Third-Person Video Co-Analysis By Learning
Spatial-Temporal Joint Attention,
PAMI(45), No. 6, June 2023, pp. 6631-6646.
IEEE DOI
2305
Cameras, Task analysis, Feature extraction, Visualization,
Robot vision systems, Standards, Egocentric perception,
deep learning
BibRef
Qi, Z.B.[Zhao-Bo],
Wang, S.H.[Shu-Hui],
Su, C.[Chi],
Su, L.[Li],
Huang, Q.M.[Qing-Ming],
Tian, Q.[Qi],
Self-Regulated Learning for Egocentric Video Activity Anticipation,
PAMI(45), No. 6, June 2023, pp. 6715-6730.
IEEE DOI
2305
Predictive models, Dairy products, Semantics, Feature extraction,
Visualization, Activity recognition, Task analysis,
self-regulated learning
BibRef
Martín-Martín, R.[Roberto],
Patel, M.[Mihir],
Rezatofighi, H.[Hamid],
Shenoi, A.[Abhijeet],
Gwak, J.Y.[Jun-Young],
Frankel, E.[Eric],
Sadeghian, A.[Amir],
Savarese, S.[Silvio],
JRDB: A Dataset and Benchmark of Egocentric Robot Visual Perception
of Humans in Built Environments,
PAMI(45), No. 6, June 2023, pp. 6748-6765.
IEEE DOI
2305
Robots, Sensors, Annotations, Cameras, Benchmark testing,
Robot navigation, social robotics, person detection, person tracking
BibRef
Northcutt, C.G.[Curtis G.],
Zha, S.X.[Sheng-Xin],
Lovegrove, S.[Steven],
Newcombe, R.[Richard],
EgoCom: A Multi-Person Multi-Modal Egocentric Communications Dataset,
PAMI(45), No. 6, June 2023, pp. 6783-6793.
IEEE DOI
2305
Task analysis, Artificial intelligence, Visualization,
Synchronization, Natural languages, Education, Egocentric,
embodied intelligence
BibRef
Tome, D.[Denis],
Alldieck, T.[Thiemo],
Peluse, P.[Patrick],
Pons-Moll, G.[Gerard],
Agapito, L.[Lourdes],
Badino, H.[Hernan],
de la Torre, F.[Fernando],
SelfPose: 3D Egocentric Pose Estimation From a Headset Mounted Camera,
PAMI(45), No. 6, June 2023, pp. 6794-6806.
IEEE DOI
2305
Cameras, Pose estimation, Visualization, Head, Training,
3D human pose estimation, egocentric, VR/AR, character animation
BibRef
Coskun, H.[Huseyin],
Zia, M.Z.[M. Zeeshan],
Tekin, B.[Bugra],
Bogo, F.[Federica],
Navab, N.[Nassir],
Tombari, F.[Federico],
Sawhney, H.S.[Harpreet S.],
Domain-Specific Priors and Meta Learning for Few-Shot First-Person
Action Recognition,
PAMI(45), No. 6, June 2023, pp. 6659-6673.
IEEE DOI
2305
Visualization, Training, Transfer learning, Task analysis,
Streaming media, Feature extraction, Meta learning,
attention
BibRef
Dessalene, E.[Eadom],
Devaraj, C.[Chinmaya],
Maynord, M.[Michael],
Fermüller, C.[Cornelia],
Aloimonos, Y.[Yiannis],
Forecasting Action Through Contact Representations From First Person
Video,
PAMI(45), No. 6, June 2023, pp. 6703-6714.
IEEE DOI
2305
Task analysis, Semantics, Object segmentation, Trajectory,
Visualization, Predictive models, Feeds, Action anticipation, hands
BibRef
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Streaming egocentric action anticipation: An evaluation scheme and
approach,
CVIU(234), 2023, pp. 103763.
Elsevier DOI
2307
Action anticipation, Egocentric vision, Streaming perception
BibRef
Ragusa, F.[Francesco],
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
MECCANO: A multimodal egocentric dataset for humans behavior
understanding in the industrial-like domain,
CVIU(235), 2023, pp. 103764.
Elsevier DOI
2310
First person vision, Egocentric vision,
Multimodal dataset, Human Behavior Understanding
BibRef
Saganowski, S.[Stanislaw],
Perz, B.[Bartosz],
Polak, A.G.[Adam G.],
Kazienko, P.[Przemyslaw],
Emotion Recognition for Everyday Life Using Physiological Signals
From Wearables: A Systematic Literature Review,
AffCom(14), No. 3, July 2023, pp. 1876-1897.
IEEE DOI
2310
BibRef
Guan, W.L.[Wei-Li],
Song, X.M.[Xue-Meng],
Wang, K.J.[Ke-Jie],
Wen, H.K.[Hao-Kun],
Ni, H.[Hongda],
Wang, Y.W.[Yao-Wei],
Chang, X.J.[Xiao-Jun],
Egocentric Early Action Prediction via Multimodal Transformer-Based
Dual Action Prediction,
CirSysVideo(33), No. 9, September 2023, pp. 4472-4483.
IEEE DOI Code:
WWW Link.
2310
BibRef
Liu, X.Y.[Xian-Yuan],
Zhou, S.[Shuo],
Lei, T.[Tao],
Jiang, P.[Ping],
Chen, Z.X.[Zhi-Xiang],
Lu, H.P.[Hai-Ping],
First-Person Video Domain Adaptation With Multi-Scene Cross-Site
Datasets and Attention-Based Methods,
CirSysVideo(33), No. 12, December 2023, pp. 7774-7788.
IEEE DOI
2312
BibRef
Liu, Y.X.[Yu-Xuan],
Yang, J.X.[Jian-Xin],
Gu, X.[Xiao],
Chen, Y.J.[Yi-Jun],
Guo, Y.[Yao],
Yang, G.Z.[Guang-Zhong],
EgoFish3D: Egocentric 3D Pose Estimation From a Fisheye Camera via
Self-Supervised Learning,
MultMed(25), 2023, pp. 8880-8891.
IEEE DOI
2312
BibRef
Xu, L.F.[Lin-Feng],
Wu, Q.B.[Qing-Bo],
Pan, L.[Lili],
Meng, F.M.[Fan-Man],
Li, H.L.[Hong-Liang],
He, C.[Chiyuan],
Wang, H.X.[Han-Xin],
Cheng, S.X.[Shao-Xu],
Dai, Y.[Yu],
Towards Continual Egocentric Activity Recognition: A Multi-Modal
Egocentric Activity Dataset for Continual Learning,
MultMed(26), 2024, pp. 2430-2443.
IEEE DOI Code:
WWW Link.
2402
Cameras, Task analysis, Gyroscopes, Glass, Feature extraction,
Accelerometers, Multi-modal dataset,
wearable device
BibRef
Wang, H.X.[Han-Xin],
Zhou, S.C.[Shu-Chang],
Wu, Q.B.[Qing-Bo],
Li, H.L.[Hong-Liang],
Meng, F.M.[Fan-Man],
Xu, L.F.[Lin-Feng],
Qiu, H.Q.[He-Qian],
Confusion Mixup Regularized Multimodal Fusion Network for Continual
Egocentric Activity Recognition,
VCL23(3552-3561)
IEEE DOI Code:
WWW Link.
2401
BibRef
Luo, H.C.[Hong-Chen],
Zhai, W.[Wei],
Zhang, J.[Jing],
Cao, Y.[Yang],
Tao, D.C.[Da-Cheng],
Grounded Affordance from Exocentric View,
IJCV(132), No. 6, June 2024, pp. 1945-1969.
Springer DOI
2406
Locate regions of objects' action possibilities.
BibRef
Wang, H.R.[Hao-Ran],
Yang, J.H.[Jia-Hao],
Yu, B.[Baosheng],
Zhan, Y.B.[Yi-Bing],
Tao, D.P.[Da-Peng],
Ling, H.B.[Hai-Bin],
Distilling interaction knowledge for semi-supervised egocentric
action recognition,
PR(157), 2025, pp. 110927.
Elsevier DOI
2409
Knowledge distillation, Semi-supervised learning, Egocentric action recognition
BibRef
Dai, Y.[Yudi],
Wang, Z.Y.[Zhi-Yong],
Lin, X.P.[Xi-Ping],
Wen, C.L.[Cheng-Lu],
Xu, L.[Lan],
Shen, S.Q.[Si-Qi],
Ma, Y.X.[Yue-Xin],
Wang, C.[Cheng],
HiSC4D: Human-Centered Interaction and 4D Scene Capture in
Large-Scale Space Using Wearable IMUs and LiDAR,
PAMI(46), No. 12, December 2024, pp. 11236-11253.
IEEE DOI
2411
Laser radar, Cameras, Sensors, Location awareness, Accuracy,
Point cloud compression, 3D computer vision, dataset,
scene mapping
BibRef
Dai, Y.[Yudi],
Lin, Y.[Yitai],
Wen, C.L.[Cheng-Lu],
Shen, S.Q.[Si-Qi],
Xu, L.[Lan],
Yu, J.Y.[Jing-Yi],
Ma, Y.X.[Yue-Xin],
Wang, C.[Cheng],
HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor
Space Using Wearable IMUs and LiDAR,
CVPR22(6782-6792)
IEEE DOI
2210
Location awareness, Laser radar, Dynamics, Stairs,
Robot sensing systems, Motion capture, Sensors, Motion and tracking
BibRef
Wang, H.R.[Hao-Ran],
Cheng, Q.H.[Qing-Hua],
Yu, B.[Baosheng],
Zhan, Y.B.[Yi-Bing],
Tao, D.P.[Da-Peng],
Ding, L.[Liang],
Ling, H.B.[Hai-Bin],
Free-Form Composition Networks for Egocentric Action Recognition,
CirSysVideo(34), No. 10, October 2024, pp. 9967-9978.
IEEE DOI
2411
Feature extraction, Training data, Data models, Visualization,
Training, Semantics, Data mining, Compositional learning,
egocentric action recognition
BibRef
Plizzari, C.[Chiara],
Goletto, G.[Gabriele],
Furnari, A.[Antonino],
Bansal, S.[Siddhant],
Ragusa, F.[Francesco],
Farinella, G.M.[Giovanni Maria],
Damen, D.[Dima],
Tommasi, T.[Tatiana],
An Outlook into the Future of Egocentric Vision,
IJCV(132), No. 11, November 2024, pp. 4880-4936.
Springer DOI
2411
BibRef
Suveges, T.[Tamas],
McKenna, S.[Stephen],
Unsupervised mapping and semantic user localisation from first-person
monocular video,
PR(158), 2025, pp. 110923.
Elsevier DOI
2411
Egocentric (first-person) vision, Unsupervised Learning,
Mapping and localisation
BibRef
Wang, T.[Tai],
Mao, X.H.[Xiao-Han],
Zhu, C.M.[Chen-Ming],
Xu, R.[Runsen],
Lyu, R.Y.[Rui-Yuan],
Li, P.[Peisen],
Chen, X.[Xiao],
Zhang, W.W.[Wen-Wei],
Chen, K.[Kai],
Xue, T.F.[Tian-Fan],
Liu, X.H.[Xi-Hui],
Lu, C.[Cewu],
Lin, D.[Dahua],
Pang, J.M.[Jiang-Miao],
EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards
Embodied AI,
CVPR24(19757-19767)
IEEE DOI
2410
Multi-modal, ego-centric 3D perception dataset and benchmark for
holistic 3D scene understanding.
Annotations, Databases, Semantics, Benchmark testing,
Robot sensing systems, Multi-Modal, Ego-centric, 3D Perception,
Dataset and Benchmark
BibRef
Lee, S.P.[Shih-Po],
Lu, Z.[Zijia],
Zhang, Z.K.[Ze-Kun],
Hoai, M.[Minh],
Elhamifar, E.[Ehsan],
Error Detection in Egocentric Procedural Task Videos,
CVPR24(18655-18666)
IEEE DOI Code:
WWW Link.
2410
Training, Graph convolutional networks, Computational modeling, Buildings,
Prototypes, Object detection, error detection, egocentric videos
BibRef
Flaborea, A.[Alessandro],
d'Amely-di Melendugno, G.M.[Guido Maria],
Plini, L.[Leonardo],
Scofano, L.[Luca],
de Matteis, E.[Edoardo],
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Galasso, F.[Fabio],
PREGO: Online Mistake Detection in PRocedural EGOcentric Videos,
CVPR24(18483-18492)
IEEE DOI Code:
WWW Link.
2410
Adaptation models, Computational modeling, Medical services,
Benchmark testing, Predictive models, Cognition,
In-context Learning
BibRef
Zhao, Y.H.[Yun-Han],
Ma, H.Y.[Hao-Yu],
Kong, S.[Shu],
Fowlkes, C.[Charless],
Instance Tracking in 3D Scenes from Egocentric Videos,
CVPR24(21933-21944)
IEEE DOI Code:
WWW Link.
2410
Protocols, Annotations, Benchmark testing, Cameras, Sensors,
egocentric videos, instance tracking in 3D
BibRef
Chen, C.[Changan],
Ashutosh, K.[Kumar],
Girdhar, R.[Rohit],
Harwath, D.[David],
Grauman, K.[Kristen],
SoundingActions: Learning How Actions Sound from Narrated Egocentric
Videos,
CVPR24(27242-27252)
IEEE DOI
2410
Training, Tail, Encoding, Streams, Videos
BibRef
Huang, Y.F.[Yi-Fei],
Chen, G.[Guo],
Xu, J.[Jilan],
Zhang, M.F.[Ming-Fang],
Yang, L.J.[Li-Jin],
Pei, B.Q.[Bao-Qi],
Zhang, H.J.[Hong-Jie],
Dong, L.[Lu],
Wang, Y.[Yali],
Wang, L.M.[Li-Min],
Qiao, Y.[Yu],
EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric
View of Procedural Activities in Real World,
CVPR24(22072-22086)
IEEE DOI Code:
WWW Link.
2410
Bridges, Annotations, Laboratories, Focusing, Benchmark testing,
Planning, egocentric video, cross-view,
video dataset
BibRef
Li, G.[Gen],
Zhao, K.[Kaifeng],
Zhang, S.W.[Si-Wei],
Lyu, X.Z.[Xiao-Zhong],
Dusmanu, M.[Mihai],
Zhang, Y.[Yan],
Pollefeys, M.[Marc],
Tang, S.[Siyu],
EgoGen: An Egocentric Synthetic Data Generator,
CVPR24(14497-14509)
IEEE DOI
2410
Visualization, Solid modeling, Pipelines, Training data,
Reinforcement learning, egocentric vision,
autonomous virtual humans
BibRef
Kukleva, A.[Anna],
Sener, F.[Fadime],
Remelli, E.[Edoardo],
Tekin, B.[Bugra],
Sauser, E.[Eric],
Schiele, B.[Bernt],
Ma, S.[Shugao],
X-MIC: Cross-Modal Instance Conditioning for Egocentric Action
Generalization,
CVPR24(26354-26363)
IEEE DOI Code:
WWW Link.
2410
Adaptation models, Visualization, Pipelines, Computer architecture,
adapters, prompts, generalization, action recognition
BibRef
Majumder, S.[Sagnik],
Al-Halah, Z.[Ziad],
Grauman, K.[Kristen],
Learning Spatial Features from Audio-Visual Correspondence in
Egocentric Videos,
CVPR24(27048-27058)
IEEE DOI Code:
WWW Link.
2410
Art, Spatial audio, Noise reduction, Feature extraction, Encoding,
Spatial audio denoising
BibRef
Zhang, Y.[Yuanhang],
Yang, S.[Shuang],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
ES3: Evolving Self-Supervised Learning of Robust Audio-Visual Speech
Representations,
CVPR24(27059-27069)
IEEE DOI
2410
Visualization, Self-supervised learning, Manuals,
Benchmark testing, Data models,
representation learning
BibRef
Yang, Q.[Qi],
Nie, X.[Xing],
Li, T.[Tong],
Gao, P.F.[Peng-Fei],
Guo, Y.[Ying],
Zhen, C.[Cheng],
Yan, P.F.[Peng-Fei],
Xiang, S.M.[Shi-Ming],
Cooperation Does Matter: Exploring Multi-Order Bilateral Relations
for Audio-Visual Segmentation,
CVPR24(27124-27133)
IEEE DOI Code:
WWW Link.
2410
Visualization, Adaptation models, Terminology,
Bidirectional control, Transformers,
multi-modal
BibRef
Grauman, K.[Kristen],
Westbury, A.[Andrew],
Torresani, L.[Lorenzo],
Kitani, K.[Kris],
Malik, J.[Jitendra],
Afouras, T.[Triantafyllos],
Ashutosh, K.[Kumar],
Baiyya, V.[Vijay],
Bansal, S.[Siddhant],
Boote, B.[Bikram],
Byrne, E.[Eugene],
Chavis, Z.[Zach],
Chen, J.[Joya],
Cheng, F.[Feng],
Chu, F.J.[Fu-Jen],
Crane, S.[Sean],
Dasgupta, A.[Avijit],
Dong, J.[Jing],
Escobar, M.[Maria],
Forigua, C.[Cristhian],
Gebreselasie, A.[Abrham],
Haresh, S.[Sanjay],
Huang, J.[Jing],
Islam, M.M.[Md Mohaiminul],
Jain, S.[Suyog],
Khirodkar, R.[Rawal],
Kukreja, D.[Devansh],
Liang, K.J.[Kevin J],
Liu, J.W.[Jia-Wei],
Majumder, S.[Sagnik],
Mao, Y.[Yongsen],
Martin, M.[Miguel],
Mavroudi, E.[Effrosyni],
Nagarajan, T.[Tushar],
Ragusa, F.[Francesco],
Ramakrishnan, S.K.[Santhosh Kumar],
Seminara, L.[Luigi],
Somayazulu, A.[Arjun],
Song, Y.[Yale],
Su, S.[Shan],
Xue, Z.[Zihui],
Zhang, E.[Edward],
Zhang, J.[Jinxu],
Castillo, A.[Angela],
Chen, C.[Changan],
Fu, X.Z.[Xin-Zhu],
Furuta, R.[Ryosuke],
González, C.[Cristina],
Gupta, P.[Prince],
Hu, J.[Jiabo],
Huang, Y.F.[Yi-Fei],
Huang, Y.M.[Yi-Ming],
Khoo, W.[Weslie],
Kumar, A.[Anush],
Kuo, R.[Robert],
Lakhavani, S.[Sach],
Liu, M.[Miao],
Luo, M.[Mi],
Luo, Z.Y.[Zheng-Yi],
Meredith, B.[Brighid],
Miller, A.[Austin],
Oguntola, O.[Oluwatumininu],
Pan, X.[Xiaqing],
Peng, P.[Penny],
Pramanick, S.[Shraman],
Ramazanova, M.[Merey],
Ryan, F.[Fiona],
Shan, W.[Wei],
Somasundaram, K.[Kiran],
Song, C.[Chenan],
Southerland, A.[Audrey],
Tateno, M.[Masatoshi],
Wang, H.Y.[Hui-Yu],
Wang, Y.C.[Yu-Chen],
Yagi, T.[Takuma],
Yan, M.F.[Ming-Fei],
Yang, X.T.[Xi-Tong],
Yu, Z.C.[Ze-Cheng],
Zha, S.X.C.[Sheng-Xin Cindy],
Zhao, C.[Chen],
Zhao, Z.W.[Zi-Wei],
Zhu, Z.[Zhifan],
Zhuo, J.[Jeff],
Arbeláez, P.[Pablo],
Bertasius, G.[Gedas],
Damen, D.[Dima],
Engel, J.[Jakob],
Farinella, G.M.[Giovanni Maria],
Furnari, A.[Antonino],
Ghanem, B.[Bernard],
Hoffman, J.[Judy],
Jawahar, C.V.,
Newcombe, R.[Richard],
Park, H.S.[Hyun Soo],
Rehg, J.M.[James M.],
Sato, Y.[Yoichi],
Savva, M.[Manolis],
Shi, J.B.[Jian-Bo],
Shou, M.Z.[Mike Zheng],
Wray, M.[Michael],
Ego-Exo4D: Understanding Skilled Human Activity from First- and
Third-Person Perspectives,
CVPR24(19383-19400)
IEEE DOI
2410
Representation learning, Point cloud compression, Solid modeling,
Urban areas, Pose estimation, Benchmark testing, video,
benchmarks
BibRef
Cuevas-Velasquez, H.[Hanz],
Hewitt, C.[Charlie],
Aliakbarian, S.[Sadegh],
Baltrušaitis, T.[Tadas],
SimpleEgo: Predicting Probabilistic Body Pose from Egocentric Cameras,
3DV24(1446-1455)
IEEE DOI Code:
WWW Link.
2408
Uncertainty, Pose estimation, Computer architecture, Resists,
Cameras, Probabilistic logic, Egocentric pose estimation, Probabilistic
BibRef
Hempel, T.[Thorsten],
Jung, M.[Magnus],
Abdelrahman, A.A.[Ahmed A.],
Al-Hamadi, A.[Ayoub],
NITEC: Versatile Hand-Annotated Eye Contact Dataset for Ego-Vision
Interaction,
WACV24(4425-4434)
IEEE DOI
2404
Human computer interaction, Computational modeling, Lighting,
Reproducibility of results, Algorithms, Datasets and evaluations, Robotics
BibRef
Thakur, S.[Sanket],
Beyan, C.[Cigdem],
Morerio, P.[Pietro],
Murino, V.[Vittorio],
del Bue, A.[Alessio],
Leveraging Next-Active Objects for Context-Aware Anticipation in
Egocentric Videos,
WACV24(8642-8651)
IEEE DOI
2404
Privacy, Tracking, Reviews, Dynamics, Predictive models, Transformers,
Time measurement, Applications, Virtual / augmented reality
BibRef
Nagar, P.[Pravin],
Shastry, K.N.A.[K. N Ajay],
Chaudhari, J.[Jayesh],
Arora, C.[Chetan],
SEMA: Semantic Attention for Capturing Long-Range Dependencies in
Egocentric Lifelogs,
WACV24(7010-7020)
IEEE DOI Code:
WWW Link.
2404
Location awareness, Training, Representation learning,
Visualization, Current transformers, Semantics, Redundancy,
Video recognition and understanding
BibRef
Roy, D.[Debaditya],
Rajendiran, R.[Ramanathan],
Fernando, B.[Basura],
Interaction Region Visual Transformer for Egocentric Action
Anticipation,
WACV24(6726-6736)
IEEE DOI
2404
Visualization, Codes, Computational modeling, Dynamics,
Computer architecture, Transformers, Algorithms,
Video recognition and understanding
BibRef
Duncan, S.[Stuart],
Regenbrecht, H.[Holger],
Langlotz, T.[Tobias],
Fusing exocentric and egocentric real-time reconstructions for
embodied immersive experiences,
IVCNZ23(1-11)
IEEE DOI
2403
Visualization, Teleconferencing,
Psychology, Mixed reality, User interfaces, Cameras, virtual reality,
embodiment
BibRef
Zhang, C.H.[Chu-Han],
Gupta, A.[Ankush],
Zisserman, A.[Andrew],
Helping Hands: An Object-Aware Ego-Centric Video Recognition Model,
ICCV23(13855-13866)
IEEE DOI
2401
BibRef
Apicella, T.[Tommaso],
Xompero, A.[Alessio],
Ragusa, E.[Edoardo],
Berta, R.[Riccardo],
Cavallaro, A.[Andrea],
Gastaldo, P.[Paolo],
Affordance segmentation of hand-occluded containers from exocentric
images,
ACVR23(1890-1899)
IEEE DOI
2401
BibRef
Khirodkar, R.[Rawal],
Bansal, A.[Aayush],
Ma, L.[Lingni],
Newcombe, R.[Richard],
Vo, M.[Minh],
Kitani, K.[Kris],
EgoHumans: An Egocentric 3D Multi-Human Benchmark,
ICCV23(19750-19762)
IEEE DOI
2401
BibRef
Wang, Q.T.[Qi-Tong],
Zhao, L.[Long],
Yuan, L.Z.[Liang-Zhe],
Liu, T.[Ting],
Peng, X.[Xi],
Learning from Semantic Alignment between Unpaired Multiviews for
Egocentric Video Recognition,
ICCV23(3284-3294)
IEEE DOI Code:
WWW Link.
2401
BibRef
Radevski, G.[Gorjan],
Grujicic, D.[Dusan],
Blaschko, M.[Matthew],
Moens, M.F.[Marie-Francine],
Tuytelaars, T.[Tinne],
Multimodal Distillation for Egocentric Action Recognition,
ICCV23(5190-5201)
IEEE DOI Code:
WWW Link.
2401
BibRef
Mur-Labadia, L.[Lorenzo],
Guerrero, J.J.[Jose J.],
Martinez-Cantin, R.[Ruben],
Multi-label affordance mapping from egocentric vision,
ICCV23(5215-5226)
IEEE DOI
2401
BibRef
Wang, H.Y.[Hui-Yu],
Singh, M.K.[Mitesh Kumar],
Torresani, L.[Lorenzo],
Ego-Only: Egocentric Action Detection without Exocentric Transferring,
ICCV23(5227-5238)
IEEE DOI
2401
BibRef
Pan, B.X.[Bo-Xiao],
Shen, B.[Bokui],
Rempe, D.[Davis],
Paschalidou, D.[Despoina],
Mo, K.[Kaichun],
Yang, Y.C.[Yan-Chao],
Guibas, L.J.[Leonidas J.],
COPILOT: Human-Environment Collision Prediction and Localization from
Egocentric Videos,
ICCV23(5239-5249)
IEEE DOI Code:
WWW Link.
2401
BibRef
Xu, Y.[Yue],
Li, Y.L.[Yong-Lu],
Huang, Z.[Zhemin],
Liu, M.X.[Michael Xu],
Lu, C.[Cewu],
Tai, Y.W.[Yu-Wing],
Tang, C.K.[Chi-Keung],
EgoPCA: A New Framework for Egocentric Hand-Object Interaction
Understanding,
ICCV23(5250-5261)
IEEE DOI Code:
WWW Link.
2401
BibRef
Pramanick, S.[Shraman],
Song, Y.[Yale],
Nag, S.[Sayan],
Lin, K.Q.[Kevin Qinghong],
Shah, H.[Hardik],
Shou, M.Z.[Mike Zheng],
Chellappa, R.[Rama],
Zhang, P.[Pengchuan],
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the
Backbone,
ICCV23(5262-5274)
IEEE DOI Code:
WWW Link.
2401
BibRef
Hazra, R.[Rishi],
Chen, B.[Brian],
Rai, A.[Akshara],
Kamra, N.[Nitin],
Desai, R.[Ruta],
EgoTV: Egocentric Task Verification from Natural Language Task
Descriptions,
ICCV23(15371-15383)
IEEE DOI
2401
BibRef
Zhu, C.C.[Chen-Chen],
Xiao, F.[Fanyi],
Alvarado, A.[Andrés],
Babaei, Y.[Yasmine],
Hu, J.[Jiabo],
El-Mohri, H.[Hichem],
Culatana, S.C.[Sean Chang],
Sumbaly, R.[Roshan],
Yan, Z.C.[Zhi-Cheng],
EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object
Understanding,
ICCV23(20053-20063)
IEEE DOI Code:
WWW Link.
2401
BibRef
Wang, X.[Xin],
Kwon, T.[Taein],
Rad, M.[Mahdi],
Pan, B.[Bowen],
Chakraborty, I.[Ishani],
Andrist, S.[Sean],
Bohus, D.[Dan],
Feniello, A.[Ashley],
Tekin, B.[Bugra],
Frujeri, F.V.[Felipe Vieira],
Joshi, N.[Neel],
Pollefeys, M.[Marc],
HoloAssist: an Egocentric Human Interaction Dataset for Interactive
AI Assistants in the Real World,
ICCV23(20213-20224)
IEEE DOI Code:
WWW Link.
2401
BibRef
Pan, X.Q.[Xia-Qing],
Charron, N.[Nicholas],
Yang, Y.Q.[Yong-Qian],
Peters, S.[Scott],
Whelan, T.[Thomas],
Kong, C.[Chen],
Parkhi, O.[Omkar],
Newcombe, R.[Richard],
Ren, Y.H.C.[Yu-Heng Carl],
Aria Digital Twin: A New Benchmark Dataset for Egocentric 3D Machine
Perception,
ICCV23(20076-20086)
IEEE DOI
2401
BibRef
Akiva, P.[Peri],
Huang, J.[Jing],
Liang, K.J.[Kevin J],
Kovvuri, R.[Rama],
Chen, X.Y.[Xing-Yu],
Feiszli, M.[Matt],
Dana, K.[Kristin],
Hassner, T.[Tal],
Self-Supervised Object Detection from Egocentric Videos,
ICCV23(5202-5214)
IEEE DOI
2401
BibRef
Scavo, R.[Rosario],
Ragusa, F.[Francesco],
Farinella, G.M.[Giovanni Maria],
Furnari, A.[Antonino],
Quasi-Online Detection of Take and Release Actions from Egocentric
Videos,
CIAP23(II:13-24).
Springer DOI Code:
WWW Link.
2312
BibRef
Mucha, W.[Wiktor],
Kampel, M.[Martin],
Hands, Objects, Action! Egocentric 2D Hand-based Action Recognition,
CVS23(31-40).
Springer DOI
2312
BibRef
Thakur, S.[Sanket],
Beyan, C.[Cigdem],
Morerio, P.[Pietro],
Murino, V.[Vittorio],
del Bue, A.[Alessio],
Enhancing Next Active Object-Based Egocentric Action Anticipation
with Guided Attention,
ICIP23(1450-1454)
IEEE DOI
2312
BibRef
Ci, Y.Z.[Yuan-Zheng],
Wang, Y.Z.[Yi-Zhou],
Chen, M.[Meilin],
Tang, S.X.[Shi-Xiang],
Bai, L.[Lei],
Zhu, F.[Feng],
Zhao, R.[Rui],
Yu, F.W.[Feng-Wei],
Qi, D.L.[Dong-Lian],
Ouyang, W.L.[Wan-Li],
UniHCP: A Unified Model for Human-Centric Perceptions,
CVPR23(17840-17852)
IEEE DOI
2309
BibRef
Majumder, S.[Sagnik],
Jiang, H.[Hao],
Moulon, P.[Pierre],
Henderson, E.[Ethan],
Calamia, P.[Paul],
Grauman, K.[Kristen],
Ithapu, V.K.[Vamsi Krishna],
Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations,
CVPR23(10554-10564)
IEEE DOI
2309
BibRef
Tang, S.X.[Shi-Xiang],
Chen, C.[Cheng],
Xie, Q.S.[Qing-Song],
Chen, M.[Meilin],
Wang, Y.Z.[Yi-Zhou],
Ci, Y.Z.[Yuan-Zheng],
Bai, L.[Lei],
Zhu, F.[Feng],
Yang, H.Y.[Hai-Yang],
Yi, L.[Li],
Zhao, R.[Rui],
Ouyang, W.L.[Wan-Li],
HumanBench: Towards General Human-Centric Perception with Projector
Assisted Pretraining,
CVPR23(21970-21982)
IEEE DOI
2309
BibRef
Ramakrishnan, S.K.[Santhosh Kumar],
Al-Halah, Z.[Ziad],
Grauman, K.[Kristen],
NaQ: Leveraging Narrations as Queries to Supervise Episodic Memory,
CVPR23(6694-6703)
IEEE DOI
2309
WWW Link.
BibRef
Gong, X.Y.[Xin-Yu],
Mohan, S.[Sreyas],
Dhingra, N.[Naina],
Bazin, J.C.[Jean-Charles],
Li, Y.L.[Yi-Lei],
Wang, Z.Y.[Zhang-Yang],
Ranjan, R.[Rakesh],
MMG-Ego4D: Multi-Modal Generalization in Egocentric Action
Recognition,
CVPR23(6481-6491)
IEEE DOI
2309
BibRef
Liu, S.M.[Shu-Ming],
Zhang, C.L.[Chen-Lin],
Zhao, C.[Chen],
Ghanem, B.[Bernard],
End-to-End Temporal Action Detection with 1B Parameters Across 1000
Frames,
CVPR24(18591-18601)
IEEE DOI Code:
WWW Link.
2410
Training, Adaptation models, Costs, Codes, Computational modeling,
Memory management, Video Understanding, End-to-End Training
BibRef
Ramazanova, M.[Merey],
Escorcia, V.[Victor],
Heilbron, F.C.[Fabian Caba],
Zhao, C.[Chen],
Ghanem, B.[Bernard],
OWL (Observe, Watch, Listen): Audiovisual Temporal Context for
Localizing Actions in Egocentric Videos,
L3D-IVU23(4880-4890)
IEEE DOI
2309
BibRef
Ohkawa, T.[Takehiko],
He, K.[Kun],
Sener, F.[Fadime],
Hodan, T.[Tomas],
Tran, L.[Luan],
Keskin, C.[Cem],
AssemblyHands:
Towards Egocentric Activity Understanding via 3D Hand Pose Estimation,
CVPR23(12999-13008)
IEEE DOI
2309
BibRef
Hao, Y.Z.[Yu-Zhe],
Uto, K.[Kuniaki],
Kanezaki, A.[Asako],
Sato, I.[Ikuro],
Kawakami, R.[Rei],
Shinoda, K.[Koichi],
EvIs-Kitchen: Egocentric Human Activities Recognition with Video and
Inertial Sensor Data,
MMMod23(I: 373-384).
Springer DOI
2304
BibRef
Mallol-Ragolta, A.[Adria],
Semertzidou, A.[Anastasia],
Pateraki, M.[Maria],
Schuller, B.[Björn],
harAGE: A Novel Multimodal Smartwatch-based Dataset for Human
Activity Recognition,
FG21(01-07)
IEEE DOI
2303
Accelerometers, Fuses, Computational modeling, Neural networks,
Network architecture, Activity recognition, Stairs
BibRef
Khatri, A.[Ajeeta],
Butler, Z.[Zachary],
Nwogu, I.[Ifeoma],
Analyzing Interactions in Paired Egocentric Videos,
FG23(1-7)
IEEE DOI
2303
Training, Heuristic algorithms,
Hidden Markov models, Oral communication, Feature extraction, Skin
BibRef
Mascaró, E.V.[Esteve Valls],
Ahn, H.[Hyemin],
Lee, D.[Dongheui],
Intention-Conditioned Long-Term Human Egocentric Action Anticipation,
WACV23(6037-6046)
IEEE DOI
2302
Uncertainty, Multitasking, Behavioral sciences, Data mining,
Task analysis, Forecasting,
Robotics
BibRef
Mallick, R.[Rupayan],
Benois-Pineau, J.[Jenny],
Zemmari, A.[Akka],
Yebda, T.[Thinhinane],
Pech, M.[Marion],
Amieva, H.[Hélène],
Middleton, L.[Laura],
Pooling Transformer for Detection of Risk Events in In-The-Wild Video
Ego Data,
ICPR22(2778-2784)
IEEE DOI
2212
Visualization, Taxonomy, Benchmark testing, Transformers, Birds,
Spatial databases
BibRef
Furnari, A.[Antonino],
Farinella, G.M.[Giovanni Maria],
Towards Streaming Egocentric Action Anticipation,
ICPR22(1250-1257)
IEEE DOI
2212
Analytical models, Runtime, Protocols,
Computational modeling, Streaming media, Predictive models
BibRef
Su, W.[Wei],
Liu, Y.H.[Yue-Hu],
Li, S.[Shasha],
Cai, Z.[Zerun],
Proprioception-Driven Wearer Pose Estimation for Egocentric Video,
ICPR22(3728-3735)
IEEE DOI
2212
Visualization, Pose estimation,
Pipelines, Reinforcement learning, Streaming media, Data processing
BibRef
Akada, H.[Hiroyasu],
Wang, J.[Jian],
Shimada, S.[Soshi],
Takahashi, M.[Masaki],
Theobalt, C.[Christian],
Golyanik, V.[Vladislav],
UnrealEgo: A New Dataset for Robust Egocentric 3D Human Motion Capture,
ECCV22(VI:1-17).
Springer DOI
2211
BibRef
Wong, B.[Benita],
Chen, J.[Joya],
Wu, Y.[You],
Lei, S.W.X.[Stan Wei-Xian],
Mao, D.X.[Dong-Xing],
Gao, D.F.[Di-Fei],
Shou, M.Z.[Mike Zheng],
AssistQ: Affordance-Centric Question-Driven Task Completion for
Egocentric Assistant,
ECCV22(XXXVI:485-501).
Springer DOI
2211
BibRef
Escorcia, V.[Victor],
Guerrero, R.[Ricardo],
Zhu, X.T.[Xia-Tian],
Martinez, B.[Brais],
SOS! Self-supervised Learning over Sets of Handled Objects in
Egocentric Action Recognition,
ECCV22(XIII:604-620).
Springer DOI
2211
BibRef
Liu, M.[Miao],
Ma, L.[Lingni],
Somasundaram, K.[Kiran],
Li, Y.[Yin],
Grauman, K.[Kristen],
Rehg, J.M.[James M.],
Li, C.[Chao],
Egocentric Activity Recognition and Localization on a 3D Map,
ECCV22(XIII:621-638).
Springer DOI
2211
BibRef
Bansal, S.[Siddhant],
Arora, C.[Chetan],
Jawahar, C.V.,
My View is the Best View: Procedure Learning from Egocentric Videos,
ECCV22(XIII:657-675).
Springer DOI
2211
BibRef
Hong, F.Z.[Fang-Zhou],
Pan, L.[Liang],
Cai, Z.G.[Zhon-Gang],
Liu, Z.W.[Zi-Wei],
Versatile Multi-Modal Pre-Training for Human-Centric Perception,
CVPR22(16135-16145)
IEEE DOI
2210
Representation learning, Graphics, Codes, Annotations, Semantics,
Estimation, RGBD sensors and analytics
BibRef
Grauman, K.[Kristen],
Westbury, A.[Andrew],
Byrne, E.[Eugene],
Chavis, Z.[Zachary],
Furnari, A.[Antonino],
Girdhar, R.[Rohit],
Hamburger, J.[Jackson],
Jiang, H.[Hao],
Liu, M.[Miao],
Liu, X.Y.[Xing-Yu],
Martin, M.[Miguel],
Nagarajan, T.[Tushar],
Radosavovic, I.[Ilija],
Ramakrishnan, S.K.[Santhosh Kumar],
Ryan, F.[Fiona],
Sharma, J.[Jayant],
Wray, M.[Michael],
Xu, M.M.[Meng-Meng],
Xu, E.Z.[Eric Zhongcong],
Zhao, C.[Chen],
Bansal, S.[Siddhant],
Batra, D.[Dhruv],
Cartillier, V.[Vincent],
Crane, S.[Sean],
Do, T.[Tien],
Doulaty, M.[Morrie],
Erapalli, A.[Akshay],
Feichtenhofer, C.[Christoph],
Fragomeni, A.[Adriano],
Fu, Q.[Qichen],
Gebreselasie, A.[Abrham],
González, C.[Cristina],
Hillis, J.[James],
Huang, X.[Xuhua],
Huang, Y.F.[Yi-Fei],
Jia, W.Q.[Wen-Qi],
Khoo, W.[Weslie],
Kolář, J.[Jáchym],
Kottur, S.[Satwik],
Kumar, A.[Anurag],
Landini, F.[Federico],
Li, C.[Chao],
Li, Y.H.[Yang-Hao],
Li, Z.Q.[Zhen-Qiang],
Mangalam, K.[Karttikeya],
Modhugu, R.[Raghava],
Munro, J.[Jonathan],
Murrell, T.[Tullie],
Nishiyasu, T.[Takumi],
Price, W.[Will],
Puentes, P.R.[Paola Ruiz],
Ramazanova, M.[Merey],
Sari, L.[Leda],
Somasundaram, K.[Kiran],
Southerland, A.[Audrey],
Sugano, Y.[Yusuke],
Tao, R.[Ruijie],
Vo, M.[Minh],
Wang, Y.C.[Yu-Chen],
Wu, X.[Xindi],
Yagi, T.[Takuma],
Zhao, Z.W.[Zi-Wei],
Zhu, Y.[Yunyi],
Arbeláez, P.[Pablo],
Crandall, D.[David],
Damen, D.[Dima],
Farinella, G.M.[Giovanni Maria],
Fuegen, C.[Christian],
Ghanem, B.[Bernard],
Ithapu, V.K.[Vamsi Krishna],
Jawahar, C.V.,
Joo, H.[Hanbyul],
Kitani, K.[Kris],
Li, H.Z.[Hai-Zhou],
Newcombe, R.[Richard],
Oliva, A.[Aude],
Park, H.S.[Hyun Soo],
Rehg, J.M.[James M.],
Sato, Y.[Yoichi],
Shi, J.B.[Jian-Bo],
Shou, M.Z.[Mike Zheng],
Torralba, A.[Antonio],
Torresani, L.[Lorenzo],
Yan, M.[Mingfei],
Malik, J.[Jitendra],
Ego4D: Around the World in 3,000 Hours of Egocentric Video,
CVPR22(18973-18990)
IEEE DOI
2210
Visualization, Technological innovation, Privacy,
Benchmark testing, Cameras, Solids, Datasets and evaluation,
Video analysis and understanding
BibRef
Plizzari, C.[Chiara],
Planamente, M.[Mirco],
Goletto, G.[Gabriele],
Cannici, M.[Marco],
Gusso, E.[Emanuele],
Matteucci, M.[Matteo],
Caputo, B.[Barbara],
E2(GO)MOTION: Motion Augmented Event Stream for Egocentric Action
Recognition,
CVPR22(19903-19915)
IEEE DOI
2210
Image motion analysis, Wearable computers, Memory management,
Vision sensors, Cameras, Robustness, Action and event recognition,
Datasets and evaluation
BibRef
Li, Y.M.[Yi-Ming],
Cao, Z.[Ziang],
Liang, A.[Andrew],
Liang, B.[Benjamin],
Chen, L.[Luoyao],
Zhao, H.[Hang],
Feng, C.[Chen],
Egocentric Prediction of Action Target in 3D,
CVPR22(20971-20980)
IEEE DOI
2210
Measurement, Recurrent neural networks,
Machine learning algorithms, Annotations, Wearable computers,
Vision applications and systems
BibRef
Liu, T.S.[Tian-Shan],
Lam, K.M.[Kin-Man],
A Hybrid Egocentric Activity Anticipation Framework via
Memory-Augmented Recurrent and One-shot Representation Forecasting,
CVPR22(13894-13903)
IEEE DOI
2210
Training, Semantics, Prototypes, Predictive models,
Object recognition,
Behavior analysis
BibRef
Do, T.[Tien],
Vuong, K.[Khiem],
Park, H.S.[Hyun Soo],
Egocentric Scene Understanding via Multimodal Spatial Rectifier,
CVPR22(2822-2831)
IEEE DOI
2210
Geometry, Visualization, Head, Rectifiers, Estimation,
Predictive models, Scene analysis and understanding, 3D from single images
BibRef
Cazenavette, G.[George],
Wang, T.Z.[Tong-Zhou],
Torralba, A.[Antonio],
Efros, A.A.[Alexei A.],
Zhu, J.Y.[Jun-Yan],
Wearable ImageNet: Synthesizing Tileable Textures via Dataset
Distillation,
CVFAD22(2277-2281)
IEEE DOI
2210
Printing, Wearable computers, Clothing, Training data
BibRef
Planamente, M.[Mirco],
Plizzari, C.[Chiara],
Caputo, B.[Barbara],
Test-Time Adaptation for Egocentric Action Recognition,
CIAP22(III:206-218).
Springer DOI
2205
BibRef
Moreno-Rodríguez, F.J.[Francisco J.],
Traver, V.J.[V. Javier],
Barranco, F.[Francisco],
Dimiccoli, M.[Mariella],
Pla, F.[Filiberto],
Visual Event-Based Egocentric Human Action Recognition,
IbPRIA22(402-414).
Springer DOI
2205
BibRef
Wang, X.H.[Xiao-Han],
Zhu, L.C.[Lin-Chao],
Wang, H.[Heng],
Yang, Y.[Yi],
Interactive Prototype Learning for Egocentric Action Recognition,
ICCV21(8148-8157)
IEEE DOI
2203
Representation learning, Costs, Annotations, Affordances, Prototypes,
Video analysis and understanding, Action and behavior recognition
BibRef
Dittadi, A.[Andrea],
Dziadzio, S.[Sebastian],
Cosker, D.[Darren],
Lundell, B.[Ben],
Cashman, T.[Tom],
Shotton, J.[Jamie],
Full-Body Motion from a Single Head-Mounted Device:
Generating SMPL Poses from Partial Observations,
ICCV21(11667-11677)
IEEE DOI
2203
Legged locomotion, Wearable computers, Impedance matching,
Training data, Resists, Life estimation, Predictive models,
Motion and tracking
BibRef
Wang, J.[Jian],
Liu, L.J.[Ling-Jie],
Xu, W.P.[Wei-Peng],
Sarkar, K.[Kripasindhu],
Theobalt, C.[Christian],
Estimating Egocentric 3D Human Pose in Global Space,
ICCV21(11480-11489)
IEEE DOI
2203
Heating systems, Uncertainty, Pose estimation, Cameras, Sensors,
Gestures and body pose,
BibRef
Jiang, H.[Hao],
Ithapu, V.K.[Vamsi Krishna],
Egocentric Pose Estimation from Human Vision Span,
ICCV21(10986-10994)
IEEE DOI
2203
Deep learning, Head, Simultaneous localization and mapping, Shape,
Wearable computers, Streaming media, Gestures and body pose,
Vision applications and systems
BibRef
Planamente, M.[Mirco],
Plizzari, C.[Chiara],
Alberti, E.[Emanuele],
Caputo, B.[Barbara],
Domain Generalization through Audio-Visual Relative Norm Alignment in
First Person Action Recognition,
WACV22(163-174)
IEEE DOI
2202
Training, Visualization, Measurement units,
Limiting, Activity recognition, Cameras, Action and Behavior Recognition
BibRef
Osman, N.[Nada],
Camporese, G.[Guglielmo],
Coscia, P.[Pasquale],
Ballan, L.[Lamberto],
SlowFast Rolling-Unrolling LSTMs for Action Anticipation in
Egocentric Videos,
EPIC21(3430-3438)
IEEE DOI
2112
Measurement, Fuses, Computational modeling,
Feature extraction, Data mining
BibRef
Wen, Y.M.[Yang-Ming],
Singh, K.K.[Krishna Kumar],
Anderson, M.[Markham],
Jan, W.P.[Wei-Pang],
Lee, Y.J.[Yong Jae],
Seeing the Unseen: Predicting the First-Person Camera Wearer's
Location and Pose in Third-Person Scenes,
EPIC21(3439-3448)
IEEE DOI
2112
Network architecture, Cameras,
Task analysis
BibRef
Li, Y.H.[Yang-Hao],
Nagarajan, T.[Tushar],
Xiong, B.[Bo],
Grauman, K.[Kristen],
Ego-Exo: Transferring Visual Representations from Third-person to
First-person Videos,
CVPR21(6939-6949)
IEEE DOI
2111
Training, Visualization, Computational modeling,
Pipelines, Activity recognition, Data models
BibRef
Guzov, V.[Vladimir],
Mir, A.[Aymen],
Sattler, T.[Torsten],
Pons-Moll, G.[Gerard],
Human POSEitioning System (HPS): 3D Human Pose Estimation and
Self-localization in Large Scenes from Body-Mounted Sensors,
CVPR21(4316-4327)
IEEE DOI
2111
Visualization, Navigation,
Pose estimation, Cameras, Sensor systems
BibRef
Zatsarynna, O.[Olga],
Gall, J.[Juergen],
Action Anticipation with Goal Consistency,
ICIP23(1630-1634)
IEEE DOI Code:
WWW Link.
2312
BibRef
Zatsarynna, O.[Olga],
Farha, Y.A.[Yazan Abu],
Gall, J.[Juergen],
Multi-Modal Temporal Convolutional Network for Anticipating Actions
in Egocentric Videos,
Precognition21(2249-2258)
IEEE DOI
2109
Autonomous automobiles, Reliability, Intelligent agents
BibRef
Min, K.[Kyle],
Corso, J.J.[Jason J.],
Integrating Human Gaze into Attention for Egocentric Activity
Recognition,
WACV21(1068-1077)
IEEE DOI
2106
Training, Visualization, Uncertainty,
Activity recognition, Probabilistic logic, Spatiotemporal phenomena
BibRef
Zhang, L.M.[Li-Ming],
Zhang, W.B.[Wen-Bin],
Japkowicz, N.[Nathalie],
Conditional-UNet: A Condition-aware Deep Model for Coherent Human
Activity Recognition From Wearables,
ICPR21(5889-5896)
IEEE DOI
2105
Legged locomotion, Deep learning, Support vector machines,
Wearable computers, Time series analysis, Gesture recognition,
Activity recognition
BibRef
Kao, P.Y.[Peng-Yuan],
Lei, Y.J.[Yan-Jing],
Chang, C.H.[Chia-Hao],
Chen, C.S.[Chu-Song],
Lee, M.S.[Ming-Sui],
Hung, Y.P.[Yi-Ping],
Activity Recognition Using First-Person-View Cameras Based on Sparse
Optical Flows,
ICPR21(81-86)
IEEE DOI
2105
Activity recognition, Cameras,
Convolutional neural networks, Optical flow, Videos, Sports
BibRef
Planamente, M.[Mirco],
Bottino, A.[Andrea],
Caputo, B.[Barbara],
Self-Supervised Joint Encoding of Motion and Appearance for First
Person Action Recognition,
ICPR21(8751-8758)
IEEE DOI
2105
Image segmentation, Motion segmentation, Focusing, Streaming media,
Feature extraction, Planning,
Self-supervised Learning
BibRef
Sharghi, A.[Aidean],
da Vitoria Lobo, N.[Niels],
Shah, M.[Mubarak],
Text Synopsis Generation for Egocentric Videos,
ICPR21(4252-4259)
IEEE DOI
2105
Measurement, Visualization, Databases, Cameras,
Natural language processing, Object recognition
BibRef
Matei, O.[Oliviu],
Erdei, R.[Rudolf],
Moga, A.[Alexandru],
Heb, R.[Robert],
A Serverless Architecture for a Wearable Face Recognition Application,
RISS20(642-655).
Springer DOI
2103
BibRef
Ide, Y.[Yuta],
Araki, T.[Tsuyohito],
Hamada, R.[Ryunosuke],
Ohno, K.[Kazunori],
Yanai, K.[Keiji],
Rescue Dog Action Recognition by Integrating Ego-centric Video, Sound
and Sensor Information,
EgoApp20(321-333).
Springer DOI
2103
BibRef
Suveges, T.[Tamas],
McKenna, S.[Stephen],
Cam-softmax for discriminative deep feature learning,
ICPR21(5996-6002)
IEEE DOI
2105
Training, Deep learning, Face recognition, Training data, Robustness,
Labeling, Convolutional neural networks
BibRef
Suveges, T.[Tamas],
McKenna, S.[Stephen],
Egomap: Hierarchical First-person Semantic Mapping,
EgoApp20(348-363).
Springer DOI
2103
BibRef
Thapar, D.[Daksh],
Arora, C.[Chetan],
Nigam, A.[Aditya],
Is Sharing of Egocentric Video Giving Away Your Biometric Signature?,
ECCV20(XVII:399-416).
Springer DOI
2011
BibRef
Bhandari, K.,
DeLaGarza, M.A.,
Zong, Z.,
Latapie, H.,
Yan, Y.,
Egok360: A 360 Egocentric Kinetic Human Activity Video Dataset,
ICIP20(266-270)
IEEE DOI
2011
Convolution, Activity recognition, Cameras,
Kinetic theory, Optical distortion,
Two-stream Network
BibRef
Makansi, O.,
Çiçek, Ö.,
Buchicchio, K.,
Brox, T.,
Multimodal Future Localization and Emergence Prediction for Objects
in Egocentric View With a Reachability Prior,
CVPR20(4353-4362)
IEEE DOI
2008
Semantics, Task analysis, Automobiles, Trajectory, Sensors, Heating systems
BibRef
Nagarajan, T.[Tushar],
Li, Y.H.[Yang-Hao],
Feichtenhofer, C.[Christoph],
Grauman, K.[Kristen],
Ego-Topo: Environment Affordances From Egocentric Video,
CVPR20(160-169)
IEEE DOI
2008
Visualization, Cameras, Simultaneous localization and mapping,
Task analysis, Animals
BibRef
Ng, E.,
Xiang, D.,
Joo, H.,
Grauman, K.,
You2Me: Inferring Body Pose in Egocentric Video via First and Second
Person Interactions,
CVPR20(9887-9897)
IEEE DOI
2008
Cameras, Feature extraction,
Robot vision systems, Pose estimation, Visualization, Predictive models
BibRef
Kazakos, E.,
Nagrani, A.,
Zisserman, A.,
Damen, D.,
EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action
Recognition,
ICCV19(5491-5500)
IEEE DOI
2004
image fusion, image representation, object recognition,
pose estimation, egocentric action recognition
BibRef
Rotondo, T.[Tiziana],
Farinella, G.M.[Giovanni Maria],
Giacalone, D.[Davide],
Strano, S.M.[Sebastiano Mauro],
Tomaselli, V.[Valeria],
Battiato, S.[Sebastiano],
Anticipating Activity from Multimodal Signals,
ICPR21(4680-4687)
IEEE DOI
2105
Deep learning, Benchmark testing, Angular velocity,
Magnetic fields, Acceleration, Task analysis
BibRef
Furnari, A.,
Farinella, G.M.[Giovanni Maria],
What Would You Expect? Anticipating Egocentric Actions With
Rolling-Unrolling LSTMs and Modality Attention,
ICCV19(6251-6260)
IEEE DOI
2004
Code, Egocentric Actions.
WWW Link.
cameras, feature extraction, image classification,
image colour analysis, image motion analysis,
Computational modeling
BibRef
Tome, D.,
Peluse, P.,
Agapito, L.,
Badino, H.,
xR-EgoPose: Egocentric 3D Human Pose From an HMD Camera,
ICCV19(7727-7737)
IEEE DOI
2004
cameras, helmet mounted displays, image capture,
image motion analysis, image sensors, pose estimation,
Uncertainty
BibRef
Jang, Y.,
Sullivan, B.,
Ludwig, C.,
Gilchrist, I.,
Damen, D.,
Mayol-Cuevas, W.,
EPIC-Tent: An Egocentric Video Dataset for Camping Tent Assembly,
EPIC19(4461-4469)
IEEE DOI
2004
cameras, image motion analysis, video signal processing,
egocentric video dataset, tent assembly, outdoor video dataset,
action detection
BibRef
Cartas, A.,
Luque, J.,
Radeva, P.,
Segura, C.,
Dimiccoli, M.,
Seeing and Hearing Egocentric Actions: How Much Can We Learn?,
EPIC19(4470-4480)
IEEE DOI
2004
audio-visual systems, human computer interaction,
pattern classification, human-to-object interactions,
audio classification
BibRef
Nakazawa, A.,
Honda, M.,
First-Person Camera System to Evaluate Tender Dementia-Care Skill,
EPIC19(4435-4442)
IEEE DOI
2004
biomedical education, computer aided instruction,
convolutional neural nets, educational aids, face recognition, humanitude
BibRef
Jiang, H.Y.[Hai-Yu],
Song, Y.[Yan],
He, J.[Jiang],
Shu, X.B.[Xiang-Bo],
Cross Fusion for Egocentric Interactive Action Recognition,
MMMod20(I:714-726).
Springer DOI
2003
BibRef
Sudhakaran, S.[Swathikiran],
Escalera, S.[Sergio],
Lanz, O.[Oswald],
LSTA: Long Short-Term Attention for Egocentric Action Recognition,
CVPR19(9946-9955).
IEEE DOI
2002
BibRef
Chen, L.,
Nakamura, Y.,
Kondo, K.,
Damen, D.,
Mayol-Cuevas, W.W.,
Hotspots Integrating of Expert and Beginner Experiences of Machine
Operations through Egocentric Vision,
MVA19(1-6)
DOI Link
1911
human computer interaction,
mechanical engineering computing, sewing machines,
experts egocentric vision records
BibRef
Sanchez-Matilla, R.,
Cavallaro, A.,
A Predictor of Moving Objects for First-Person Vision,
ICIP19(2189-2193)
IEEE DOI
1910
Motion model, Prediction model, Moving cameras, First-person vision
BibRef
Lu, Y.,
Li, Y.,
Velipasalar, S.,
Efficient Human Activity Classification from Egocentric Videos
Incorporating Actor-Critic Reinforcement Learning,
ICIP19(564-568)
IEEE DOI
1910
activity classification, reinforcement learning, actor critic
BibRef
Furnari, A.,
Farinella, G.M.,
Egocentric Action Anticipation by Disentangling Encoding and
Inference,
ICIP19(3357-3361)
IEEE DOI
1910
EPIC-KITCHENS, First Person Vision, Egocentric Vision,
Action Anticipation, LSTM
BibRef
Nigam, J.[Jyoti],
Rameshan, R.M.[Renu M.],
TRINet: Tracking and Re-identification Network for Multiple Targets in
Egocentric Videos Using LSTMs,
CAIP19(II:438-448).
Springer DOI
1909
BibRef
Furnari, A.[Antonino],
Battiato, S.[Sebastiano],
Farinella, G.M.[Giovanni Maria],
Leveraging Uncertainty to Rethink Loss Functions and Evaluation
Measures for Egocentric Action Anticipation,
Egocentric18(V:389-405).
Springer DOI
1905
BibRef
Patra, S.,
Gupta, K.,
Ahmad, F.,
Arora, C.,
Banerjee, S.,
EGO-SLAM: A Robust Monocular SLAM for Egocentric Videos,
WACV19(31-40)
IEEE DOI
1904
pose estimation, robot vision, SLAM (robots),
video signal processing, EGO-SLAM, robust monocular SLAM,
Tracking
BibRef
Henriques, J.F.[Joao F.],
Vedaldi, A.[Andrea],
MapNet: An Allocentric Spatial Memory for Mapping Environments,
CVPR18(8476-8484)
IEEE DOI
1812
Simultaneous localization and mapping, Cameras, Navigation,
Streaming media, Task analysis, Geometry
BibRef
Sigurdsson, G.A.[Gunnar A.],
Gupta, A.[Abhinav],
Schmid, C.[Cordelia],
Farhadi, A.[Ali],
Alahari, K.[Karteek],
Actor and Observer: Joint Modeling of First and Third-Person Videos,
CVPR18(7396-7404)
IEEE DOI
1812
Videos, Cameras, Portable computers, Data models, Task analysis,
Computational modeling
BibRef
Possas, R.,
Caceres, S.P.,
Ramos, F.,
Egocentric Activity Recognition on a Budget,
CVPR18(5967-5976)
IEEE DOI
1812
Activity recognition, Ear, Videos, Cameras, Task analysis, Optical imaging
BibRef
Chuang, C.Y.[Ching-Yao],
Li, J.M.[Jia-Man],
Torralba, A.B.[Antonio B.],
Fidler, S.[Sanja],
Learning to Act Properly:
Predicting and Explaining Affordances from Images,
CVPR18(975-983)
IEEE DOI
1812
Cognition, Visualization, Neural networks, Knowledge based systems,
Task analysis, Data collection, Robots
BibRef
Spera, E.,
Furnari, A.,
Battiato, S.,
Farinella, G.M.,
Egocentric Shopping Cart Localization,
ICPR18(2277-2282)
IEEE DOI
1812
image retrieval, learning (artificial intelligence),
object detection, regression analysis, Solid modeling
BibRef
Shen, Y.[Yang],
Ni, B.B.[Bing-Bing],
Li, Z.[Zefan],
Zhuang, N.[Ning],
Egocentric Activity Prediction via Event Modulated Attention,
ECCV18(II: 202-217).
Springer DOI
1810
BibRef
Ho, H.I.[Hsuan-I],
Chiu, W.C.[Wei-Chen],
Wang, Y.C.F.[Yu-Chiang Frank],
Summarizing First-Person Videos from Third Persons' Points of Views,
ECCV18(XV: 72-89).
Springer DOI
1810
BibRef
Verma, S.,
Nagar, P.,
Gupta, D.,
Arora, C.,
Making Third Person Techniques Recognize First-Person Actions in
Egocentric Videos,
ICIP18(2301-2305)
IEEE DOI
1809
Videos, Cameras, Streaming media, Optical imaging, Training, Head,
Adaptation models, Egocentric Videos,
Deep Learning
BibRef
Nguyen, T.H.C.[Thi-Hoa-Cuc],
Nebel, J.C.[Jean-Christophe],
Florez-Revuelta, F.[Francisco],
Recognition of Activities of Daily Living from Egocentric Videos Using
Hands Detected by a Deep Convolutional Network,
ICIAR18(390-398).
Springer DOI
1807
BibRef
Aghaei, M.[Maedeh],
Dimiccoli, M.[Mariella],
Radeva, P.I.[Petia I.],
Multi-face tracking by extended bag-of-tracklets in egocentric
photo-streams,
CVIU(149), No. 1, 2016, pp. 146-156.
Elsevier DOI
1606
Egocentric vision
BibRef
Aghaei, M.[Maedeh],
Dimiccoli, M.[Mariella],
Ferrer, C.C.[Cristian Canton],
Radeva, P.[Petia],
Towards social pattern characterization in egocentric photo-streams,
CVIU(171), 2018, pp. 104-117.
Elsevier DOI
1812
BibRef
Earlier: A1, A2, A4, Only:
All the people around me: Face discovery in egocentric photo-streams,
ICIP17(1342-1346)
IEEE DOI
1803
Social pattern characterization, Social signal extraction,
Lifelogging, Convolutional and recurrent neural networks.
Cameras, Detectors, Face, Prototypes, Task analysis,
Tracking, bag-of-tracklets, deep-matching,
face discovery
BibRef
Tang, Y.,
Tian, Y.,
Lu, J.,
Feng, J.,
Zhou, J.,
Action recognition in RGB-D egocentric videos,
ICIP17(3410-3414)
IEEE DOI
1803
Cameras, Optical imaging, Optical sensors, Streaming media,
Training, Videos, Action recognition,
egocentric videos
BibRef
Ma, K.T.,
Li, L.,
Dai, P.,
Lim, J.H.,
Shen, C.,
Zhao, Q.,
Multi-layer linear model for top-down modulation of visual attention
in natural egocentric vision,
ICIP17(3470-3474)
IEEE DOI
1803
Adaptation models, Computational modeling, History, Machine vision,
Task analysis, Training, Visualization, ego-centric, real-world, visual attention
BibRef
Wang, L.[Lin],
Song, Y.[Yan],
Yan, R.[Rui],
Shu, X.B.[Xiang-Bo],
Spatiotemporal Perturbation Based Dynamic Consistency for
Semi-Supervised Temporal Action Detection,
MMMod22(I:178-190).
Springer DOI
2203
BibRef
Fa, L.L.[Ling-Ling],
Song, Y.[Yan],
Shu, X.B.[Xiang-Bo],
Global and Local C3D Ensemble System for First Person Interactive
Action Recognition,
MMMod18(II:153-164).
Springer DOI
1802
BibRef
Liu, Y.,
Wei, P.,
Zhu, S.C.,
Jointly Recognizing Object Fluents and Tasks in Egocentric Videos,
ICCV17(2943-2951)
IEEE DOI
1802
image motion analysis, image sequences, object recognition,
video signal processing, beam search algorithm,
Videos
BibRef
Yu, C.,
Bambach, S.,
Zhang, Z.,
Crandall, D.J.,
Exploring Inter-Observer Differences in First-Person Object Views
Using Deep Learning Models,
CogCV17(2773-2782)
IEEE DOI
1802
Cameras, Data models, Toy manufacturing industry,
Training, Videos, Visualization
BibRef
Abebe, G.,
Cavallaro, A.,
A Long Short-Term Memory Convolutional Neural Network for
First-Person Vision Activity Recognition,
ACVR17(1339-1346)
IEEE DOI
1802
Activity recognition, Convolutional codes, Encoding,
Feature extraction, Manganese, Spectrogram
BibRef
Furnari, A.,
Battiato, S.,
Farinella, G.M.,
How Shall We Evaluate Egocentric Action Recognition?,
Egocentric17(2373-2382)
IEEE DOI
1802
Current measurement,
Prediction algorithms, Supervised learning, Videos
BibRef
Huang, Y.,
Cai, M.,
Kera, H.,
Yonetani, R.,
Higuchi, K.,
Sato, Y.,
Temporal Localization and Spatial Segmentation of Joint Attention in
Multiple First-Person Videos,
Egocentric17(2313-2321)
IEEE DOI
1802
Cameras, Data models, Noise measurement, Proposals,
Spatiotemporal phenomena, Videos, Visualization
BibRef
Penna, A.,
Mohammadi, S.,
Jojic, N.,
Murino, V.,
Summarization and Classification of Wearable Camera Streams by
Learning the Distributions over Deep Features of Out-of-Sample Image
Sequences,
ICCV17(4336-4344)
IEEE DOI
1802
Markov processes, feature extraction, feedforward neural nets,
image classification, image sequences,
Visualization
BibRef
Battiato, S.[Sebastiano],
Farinella, G.M.[Giovanni Maria],
Napoli, C.[Christian],
Nicotra, G.[Gabriele],
Riccobene, S.[Salvatore],
Recognizing Context for Privacy Preserving of First Person Vision Image
Sequences,
CIAP17(II:580-590).
Springer DOI
1711
BibRef
Zaki, H.F.M.,
Shafait, F.,
Mian, A.,
Modeling Sub-Event Dynamics in First-Person Action Recognition,
CVPR17(1619-1628)
IEEE DOI
1711
Cameras, Dynamics, Feature extraction, Neurons, Observers,
Time series analysis, Videos
BibRef
Nakamura, K.,
Yeung, S.,
Alahi, A.,
Fei-Fei, L.[Li],
Jointly Learning Energy Expenditures and Activities Using Egocentric
Multimodal Signals,
CVPR17(6817-6826)
IEEE DOI
1711
Acceleration, Accelerometers, Activity recognition, Cameras,
Heart rate, Wearable, sensors
BibRef
Jiang, H.,
Grauman, K.,
Seeing Invisible Poses: Estimating 3D Body Pose from Egocentric Video,
CVPR17(3501-3509)
IEEE DOI
1711
Biomedical monitoring, Cameras, Indexes, Legged locomotion,
Pose estimation, Visualization
BibRef
Lee, J.W.[Jang-Won],
Ryoo, M.S.[Michael S.],
Learning Robot Activities from First-Person Human Videos Using
Convolutional Future Regression,
DeepLearnRV17(472-473)
IEEE DOI
1709
Detectors, Motor drives, Robot control, Robot kinematics, Training, Videos
BibRef
Nigam, J.,
Rameshan, R.M.,
EgoTracker: Pedestrian Tracking with Re-identification in Egocentric
Videos,
Odometry17(980-987)
IEEE DOI
1709
Cameras, Optical devices, Optical imaging, Support vector machines,
Target tracking, Videos
BibRef
Jain, S.[Samriddhi],
Rameshan, R.M.[Renu M.],
Nigam, A.[Aditya],
Object Triggered Egocentric Video Summarization,
CAIP17(II: 428-439).
Springer DOI
1708
BibRef
Chen, L.F.[Long-Fei],
Kondo, K.[Kazuaki],
Nakamura, Y.[Yuichi],
Damen, D.[Dima],
Mayol-Cuevas, W.W.[Walterio W.],
Hotspots detection for machine operation in egocentric vision,
MVA17(223-226)
DOI Link
1708
Cameras, Feature extraction, Magnetic heads, Manuals, Organizations,
Shape, Visualization. Where people touch the machines - buttons, etc.
BibRef
Shi, J.,
Connecting the dots:
Embodied visual perception from first-person cameras,
MVA17(231-233)
DOI Link
1708
Cameras, Gravity, Semantics, Trajectory, Visualization
BibRef
Li, Y.[Yin],
Ye, Z.[Zhefan],
Rehg, J.M.[James M.],
Delving into egocentric actions,
CVPR15(287-295)
IEEE DOI
1510
BibRef
Cartas, A.[Alejandro],
Talavera, E.[Estefania],
Radeva, P.[Petia],
Dimiccoli, M.[Mariella],
Understanding Event Boundaries for Egocentric Activity Recognition from
Photo-streams,
EgoApp20(334-347).
Springer DOI
2103
BibRef
Earlier: A1, A4, A3, Only:
Batch-Based Activity Recognition from Egocentric Photo-Streams,
Egocentric17(2347-2354)
IEEE DOI
1802
Activity recognition, Cameras,
Legged locomotion, Streaming media, Training
BibRef
Cartas, A.[Alejandro],
Marín, J.[Juan],
Radeva, P.[Petia],
Dimiccoli, M.[Mariella],
Recognizing Activities of Daily Living from Egocentric Images,
IbPRIA17(87-95).
Springer DOI
1706
BibRef
Bokhari, S.Z.[Syed Zahir],
Kitani, K.M.[Kris M.],
Long-Term Activity Forecasting Using First-Person Vision,
ACCV16(V: 346-360).
Springer DOI
1704
BibRef
Duane, A.[Aaron],
Zhou, J.[Jiang],
Little, S.[Suzanne],
Gurrin, C.[Cathal],
Smeaton, A.F.[Alan F.],
An Annotation System for Egocentric Image Media,
MMMod17(II: 442-445).
Springer DOI
1701
BibRef
Mathews, S.M.[Sherin M.],
Kambhamettu, C.[Chandra],
Barner, K.E.[Kenneth E.],
Maximum Correntropy Based Dictionary Learning Framework for Physical
Activity Recognition Using Wearable Sensors,
ISVC16(II: 123-132).
Springer DOI
1701
BibRef
Zhou, Y.[Yang],
Ni, B.B.[Bing-Bing],
Hong, R.C.[Ri-Chang],
Yang, X.K.[Xiao-Kang],
Tian, Q.[Qi],
Cascaded Interactional Targeting Network for Egocentric Video
Analysis,
CVPR16(1904-1913)
IEEE DOI
1612
BibRef
Zhou, Y.[Yang],
Ni, B.B.[Bing-Bing],
Hong, R.C.[Ri-Chang],
Wang, M.[Meng],
Tian, Q.[Qi],
Interaction part mining:
A mid-level approach for fine-grained action recognition,
CVPR15(3323-3331)
IEEE DOI
1510
BibRef
Wray, M.[Michael],
Moltisanti, D.[Davide],
Mayol-Cuevas, W.W.[Walterio W.],
Damen, D.[Dima],
SEMBED: Semantic Embedding of Egocentric Action Videos,
Egocentric16(I: 532-545).
Springer DOI
1611
BibRef
Al Safadi, E.,
Mohammad, F.,
Iyer, D.,
Smiley, B.J.,
Jain, N.K.,
Generalized activity recognition using accelerometer in wearable
devices for IoT applications,
AVSS16(73-79)
IEEE DOI
1611
Accelerometers
BibRef
Singh, K.K.,
Fatahalian, K.,
Efros, A.A.,
KrishnaCam: Using a longitudinal, single-person, egocentric dataset
for scene understanding tasks,
WACV16(1-9)
IEEE DOI
1606
Cameras
BibRef
Lin, Y.W.[Yue-Wei],
Abdelfatah, K.[Kareem],
Zhou, Y.J.[You-Jie],
Fan, X.C.[Xiao-Chuan],
Yu, H.K.[Hong-Kai],
Qian, H.[Hui],
Wang, S.[Song],
Co-Interest Person Detection from Multiple Wearable Camera Videos,
ICCV15(4426-4434)
IEEE DOI
1602
BibRef
Xiong, B.,
Kim, G.,
Sigal, L.,
Storyline Representation of Egocentric Videos with an Applications to
Story-Based Search,
ICCV15(4525-4533)
IEEE DOI
1602
Search problems; Semantics; TV; Training; Videos; Visualization; YouTube
BibRef
Zhou, Y.,
Berg, T.L.,
Temporal Perception and Prediction in Ego-Centric Video,
ICCV15(4498-4506)
IEEE DOI
1602
Computational modeling
BibRef
Grauman, K.[Kristen],
Action and Attention in First-person Vision,
BMVC15(xx-yy).
DOI Link
1601
BibRef
Misawa, H.[Hiroki],
Obara, T.[Takashi],
Iyatomi, H.[Hitoshi],
Automated Habit Detection System: A Feasibility Study,
ISVC15(II: 16-23).
Springer DOI
1601
BibRef
Betancourt, A.[Alejandro],
Morerio, P.[Pietro],
Marcenaro, L.[Lucio],
Rauterberg, M.[Matthias],
Regazzoni, C.S.[Carlo S.],
Filtering SVM frame-by-frame binary classification in a detection
framework,
ICIP15(2552-2556)
IEEE DOI
1512
Bayesian filtering to deal with image changes.
BibRef
Vaca-Castano, G.[Gonzalo],
Das, S.[Samarjit],
Sousa, J.P.[Joao P.],
Improving egocentric vision of daily activities,
ICIP15(2562-2566)
IEEE DOI
1512
Activities of Daily Living
BibRef
Debarba, H.G.,
Molla, E.,
Herbelin, B.,
Boulic, R.,
Characterizing embodied interaction in First and Third Person
Perspective viewpoints,
3DUI15(67-72)
IEEE DOI
1511
human computer interaction
BibRef
Ryoo, M.S.,
Rothrock, B.[Brandon],
Matthies, L.H.[Larry H.],
Pooled motion features for first-person videos,
CVPR15(896-904)
IEEE DOI
1510
BibRef
Poleg, Y.[Yair],
Ephrat, A.,
Peleg, S.[Shmuel],
Arora, C.[Chetan],
Compact CNN for indexing egocentric videos,
WACV16(1-9)
IEEE DOI
1606
Cameras
BibRef
Poleg, Y.[Yair],
Halperin, T.[Tavi],
Arora, C.[Chetan],
Peleg, S.[Shmuel],
EgoSampling: Fast-forward and stereo for egocentric videos,
CVPR15(4768-4776)
IEEE DOI
1510
BibRef
Rogez, G.[Gregory],
Supancic, J.S.[James S.],
Ramanan, D.[Deva],
First-person pose recognition using egocentric workspaces,
CVPR15(4325-4333)
IEEE DOI
1510
BibRef
Bolaños, M.[Marc],
Garolera, M.[Maite],
Radeva, P.[Petia],
Object Discovery Using CNN Features in Egocentric Videos,
IbPRIA15(67-74).
Springer DOI
1506
BibRef
Oliveira-Barra, G.[Gabriel],
Dimiccoli, M.[Mariella],
Radeva, P.[Petia],
Leveraging Activity Indexing for Egocentric Image Retrieval,
IbPRIA17(295-303).
Springer DOI
1706
BibRef
Talavera, E.[Estefania],
Dimiccoli, M.[Mariella],
Bolaños, M.[Marc],
Aghaei, M.[Maedeh],
Radeva, P.[Petia],
R-Clustering for Egocentric Video Segmentation,
IbPRIA15(327-336).
Springer DOI
1506
BibRef
Soran, B.[Bilge],
Farhadi, A.[Ali],
Shapiro, L.G.[Linda G.],
Generating Notifications for Missing Actions:
Don't Forget to Turn the Lights Off!,
ICCV15(4669-4677)
IEEE DOI
1602
BibRef
Earlier:
Action Recognition in the Presence of One Egocentric and Multiple
Static Cameras,
ACCV14(V: 178-193).
Springer DOI
1504
Cameras
BibRef
Song, S.[Sibo],
Chandrasekhar, V.[Vijay],
Mandal, B.,
Li, L.Y.[Li-Yuan],
Lim, J.H.[Joo-Hwee],
Babu, G.S.,
San, P.P.,
Cheung, N.M.[Ngai-Man],
Multimodal Multi-Stream Deep Learning for Egocentric Activity
Recognition,
Egocentric-C16(378-385)
IEEE DOI
1612
BibRef
Rosenhahn, B.[Bodo],
Multi-sensor Acceleration-Based Action Recognition,
ICIAR14(II: 48-57).
Springer DOI
1410
BibRef
Song, S.[Sibo],
Chandrasekhar, V.[Vijay],
Cheung, N.M.[Ngai-Man],
Narayan, S.[Sanath],
Li, L.Y.[Li-Yuan],
Lim, J.H.[Joo-Hwee],
Activity Recognition in Egocentric Life-Logging Videos,
IMEV14(445-458).
Springer DOI
1504
BibRef
Zheng, K.[Kang],
Lin, Y.W.[Yue-Wei],
Zhou, Y.J.[You-Jie],
Salvi, D.[Dhaval],
Fan, X.C.[Xiao-Chuan],
Guo, D.Z.[Da-Zhou],
Meng, Z.[Zibo],
Wang, S.[Song],
Video-Based Action Detection Using Multiple Wearable Cameras,
ChaLearn14(727-741).
Springer DOI
1504
BibRef
Lin, Y.Z.[Yi-Zhou],
Hua, G.[Gang],
Mordohai, P.[Philippos],
Egocentric Object Recognition Leveraging the 3D Shape of the Grasping
Hand,
ACVR14(746-762).
Springer DOI
1504
BibRef
Yan, Y.[Yan],
Ricci, E.[Elisa],
Rostamzadeh, N.[Negar],
Sebe, N.[Nicu],
It's all about habits: Exploiting multi-task clustering for
activities of daily living analysis,
ICIP14(1071-1075)
IEEE DOI
1502
Algorithm design and analysis
BibRef
Moghimi, M.[Mohammad],
Wu, W.[Wanmin],
Chen, J.[Jacqueline],
Godbole, S.[Suneeta],
Marshall, S.[Simon],
Kerr, J.[Jacqueline],
Belongie, S.J.[Serge J.],
Analyzing sedentary behavior in life-logging images,
ICIP14(1011-1015)
IEEE DOI
1502
Accuracy
BibRef
Zhou, J.[Jiang],
Duane, A.[Aaron],
Albatal, R.[Rami],
Gurrin, C.[Cathal],
Johansen, D.[Dag],
Wearable Cameras for Real-Time Activity Annotation,
MMMod15(II: 319-322).
Springer DOI
1501
Personal (Big) Data Modeling for Information Access & Retrieval
BibRef
Wang, P.[Peng],
Smeaton, A.F.[Alan F.],
Gurrin, C.[Cathal],
Factorizing Time-Aware Multi-way Tensors for Enhancing Semantic
Wearable Sensing,
MMMod15(I: 571-582).
Springer DOI
1501
BibRef
Frinken, V.[Volkmar],
Iwakiri, Y.[Yutaro],
Ishida, R.[Ryosuke],
Fujisaki, K.[Kensho],
Uchida, S.[Seiichi],
Improving Point of View Scene Recognition by Considering Textual Data,
ICPR14(2966-2971)
IEEE DOI
1412
Cameras
BibRef
Xia, L.[Lu],
Gori, I.[Ilaria],
Aggarwal, J.K.,
Ryoo, M.S.,
Robot-centric Activity Recognition from First-Person RGB-D Videos,
WACV15(357-364)
IEEE DOI
1503
Cameras
BibRef
Iwashita, Y.[Yumi],
Takamine, A.[Asamichi],
Kurazume, R.[Ryo],
Ryoo, M.S.,
First-Person Animal Activity Recognition from Egocentric Videos,
ICPR14(4310-4315)
IEEE DOI
1412
Animals
BibRef
Matsuo, K.[Kenji],
Yamada, K.[Kentaro],
Ueno, S.[Satoshi],
Naito, S.[Sei],
An Attention-Based Activity Recognition for Egocentric Video,
Egocentric14(565-570)
IEEE DOI
1409
BibRef
Alletto, S.[Stefano],
Serra, G.[Giuseppe],
Calderara, S.[Simone],
Cucchiara, R.[Rita],
Head Pose Estimation in First-Person Camera Views,
ICPR14(4188-4193)
IEEE DOI
1412
Cameras
BibRef
Alletto, S.[Stefano],
Serra, G.[Giuseppe],
Calderara, S.[Simone],
Solera, F.[Francesco],
Cucchiara, R.[Rita],
From Ego to Nos-Vision:
Detecting Social Relationships in First-Person Views,
Egocentric14(594-599)
IEEE DOI
1409
Ego-vision; f-formation; social interaction
BibRef
Moghimi, M.[Mohammad],
Azagra, P.[Pablo],
Montesano, L.[Luis],
Murillo, A.C.[Ana C.],
Belongie, S.J.[Serge J.],
Experiments on an RGB-D Wearable Vision System for Egocentric
Activity Recognition,
Egocentric14(611-617)
IEEE DOI
1409
BibRef
Narayan, S.[Sanath],
Ramakrishnan, K.R.[Kalpathi R.],
A Cause and Effect Analysis of Motion Trajectories for Modeling
Actions,
CVPR14(2633-2640)
IEEE DOI
1409
Action Recognition; Granger Causality; Interaction
BibRef
Narayan, S.[Sanath],
Kankanhalli, M.S.[Mohan S.],
Ramakrishnan, K.R.[Kalpathi R.],
Action and Interaction Recognition in First-Person Videos,
Egocentric14(526-532)
IEEE DOI
1409
First-person video; Interaction recognition
BibRef
Tan, C.[Cheston],
Goh, H.L.[Han-Lin],
Chandrasekhar, V.[Vijay],
Li, L.Y.[Li-Yuan],
Lim, J.H.[Joo-Hwee],
Understanding the Nature of First-Person Videos:
Characterization and Classification Using Low-Level Features,
Egocentric14(549-556)
IEEE DOI
1409
BibRef
McCandless, T.[Tomas],
Grauman, K.[Kristen],
Object-Centric Spatio-Temporal Pyramids for Egocentric Activity
Recognition,
BMVC13(xx-yy).
DOI Link
1402
BibRef
Fathi, A.[Alireza],
Rehg, J.M.[James M.],
Modeling Actions through State Changes,
CVPR13(2579-2586)
IEEE DOI
1309
Action Recognition; Egocentric; Object; Semi-Supervised Learning; State
BibRef
Fathi, A.[Alireza],
Farhadi, A.[Ali],
Rehg, J.M.[James M.],
Understanding egocentric activities,
ICCV11(407-414).
IEEE DOI
1201
BibRef
Fathi, A.[Alireza],
Ren, X.F.[Xiao-Feng],
Rehg, J.M.[James M.],
Learning to recognize objects in egocentric activities,
CVPR11(3281-3288).
IEEE DOI
1106
BibRef
Fathi, A.[Alireza],
Mori, G.[Greg],
Action recognition by learning mid-level motion features,
CVPR08(1-8).
IEEE DOI
0806
BibRef
Zhan, K.[Kai],
Ramos, F.,
Faux, S.,
Activity recognition from a wearable camera,
ICARCV12(365-370).
IEEE DOI
1304
BibRef
Ma, M.,
Fan, H.,
Kitani, K.M.[Kris M.],
Going Deeper into First-Person Activity Recognition,
CVPR16(1894-1903)
IEEE DOI
1612
BibRef
Ogaki, K.[Keisuke],
Kitani, K.M.[Kris M.],
Sugano, Y.[Yusuke],
Sato, Y.[Yoichi],
Coupling eye-motion and ego-motion features for first-person activity
recognition,
Egocentric12(1-7).
IEEE DOI
1207
BibRef
Behera, A.[Ardhendu],
Cohn, A.[Anthony],
Hogg, D.C.[David C.],
Real-time Activity Recognition by Discerning Qualitative Relationships
Between Randomly Chosen Visual Features,
BMVC14(xx-yy).
HTML Version.
1410
BibRef
Behera, A.[Ardhendu],
Hogg, D.C.[David C.],
Cohn, A.G.[Anthony G.],
Egocentric Activity Monitoring and Recovery,
ACCV12(III:519-532).
Springer DOI
1304
BibRef
Earlier: A1, A3, A2:
Workflow Activity Monitoring Using Dynamics of Pair-Wise Qualitative
Spatial Relations,
MMMod12(196-209).
Springer DOI
1201
BibRef
Pirsiavash, H.[Hamed],
Ramanan, D.[Deva],
Detecting activities of daily living in first-person camera views,
CVPR12(2847-2854).
IEEE DOI
1208
BibRef
Sundaram, S.[Sudeep],
Mayol-Cuevas, W.W.[Walterio W.],
Egocentric Visual Event Classification with Location-Based Priors,
ISVC10(II: 596-605).
Springer DOI
1011
BibRef
Earlier:
High level activity recognition using low resolution wearable vision,
Egocentric09(25-32).
IEEE DOI
0906
BibRef
Spriggs, E.H.[Ekaterina H.],
de la Torre, F.[Fernando],
Hebert, M.[Martial],
Temporal segmentation and activity classification from first-person
sensing,
Egocentric09(17-24).
IEEE DOI
0906
BibRef
Ren, X.F.[Xiao-Feng],
Philipose, M.[Matthai],
Egocentric recognition of handled objects: Benchmark and analysis,
Egocentric09(1-8).
IEEE DOI
0906
BibRef
Hanheide, M.[Marc],
Hofemann, N.[Nils],
Sagerer, G.[Gerhard],
Action Recognition in a Wearable Assistance System,
ICPR06(II: 1254-1258).
IEEE DOI
0609
BibRef
Yang, A.Y.[Allen Y.],
Iyengar, S.[Sameer],
Kuryloski, P.[Philip],
Jafari, R.[Roozbeh],
Distributed segmentation and classification of human actions using a
wearable motion sensor network,
CVPR4HB08(1-8).
IEEE DOI
0806
BibRef
Chapter on Motion -- Human Motion, Surveillance, Tracking, Surveillance, Activities continues in
Human Action Recognition in Still Images, Single Images.