16.7.4.6.4 Egocentric Human Action Recognition, First Person, Wearable Monitoring

Chapter Contents
Action Recognition. Human Actions. Egocentric Recognition. Human Motion. See also Egocentric, Wearable Camera Hand Tracking. See also Lifelog.

Stikic, M.[Maja], Larlus, D.[Diane], Ebert, S.[Sandra], Schiele, B.[Bernt],
Weakly Supervised Recognition of Daily Life Activities with Wearable Sensors,
PAMI(33), No. 12, December 2011, pp. 2521-2537.
IEEE DOI 1110
Learning, reduce annotation effort. BibRef

Guo, D.Y.[Dong-Yan], Tang, J.H.[Jin-Hui], Cui, Y.[Ying], Ding, J.[Jundi], Zhao, C.X.[Chun-Xia],
Saliency-based content-aware lifestyle image mosaics,
JVCIR(26), No. 1, 2015, pp. 192-199.
Elsevier DOI 1502
Image mosaic BibRef

Betancourt, A.[Alejandro], Morerio, P., Regazzoni, C.S., Rauterberg, M.,
The Evolution of First Person Vision Methods: A Survey,
CirSysVideo(25), No. 5, May 2015, pp. 744-760.
IEEE DOI 1505
Survey, Egocentric. Cameras BibRef

Yan, Y.[Yan], Ricci, E.[Elisa], Liu, G.[Gaowen], Sebe, N.[Nicu],
Egocentric Daily Activity Recognition via Multitask Clustering,
IP(24), No. 10, October 2015, pp. 2984-2995.
IEEE DOI 1507
BibRef
Earlier:
Recognizing Daily Activities from First-Person Videos with Multi-task Clustering,
ACCV14(IV: 522-537).
Springer DOI 1504
Algorithm design and analysis. See also Multitask Linear Discriminant Analysis for View Invariant Action Recognition. BibRef

Alletto, S.[Stefano], Serra, G.[Giuseppe], Calderara, S.[Simone], Cucchiara, R.[Rita],
Understanding social relationships in egocentric vision,
PR(48), No. 12, 2015, pp. 4082-4096.
Elsevier DOI 1509
Egocentric vision BibRef

Alletto, S.[Stefano], Serra, G.[Giuseppe], Cucchiara, R.[Rita],
Video registration in egocentric vision under day and night illumination changes,
CVIU(157), No. 1, 2017, pp. 274-283.
Elsevier DOI 1704
Video registration BibRef

Lu, C.[Cewu], Liao, R.[Renjie], Jia, J.Y.[Jia-Ya],
Personal object discovery in first-person videos,
IP(24), No. 12, December 2015, pp. 5789-5799.
IEEE DOI 1512
cameras BibRef

Buso, V.[Vincent], González-Díaz, I.[Iván], Benois-Pineau, J.[Jenny],
Goal-oriented top-down probabilistic visual attention model for recognition of manipulated objects in egocentric videos,
SP:IC(39, Part B), No. 1, 2015, pp. 418-431.
Elsevier DOI 1512
BibRef
And:
Object recognition with top-down visual attention modeling for behavioral studies,
ICIP15(4431-4435)
IEEE DOI 1512
Saliency maps. Egocentric Vision BibRef

González-Díaz, I.[Iván], Buso, V.[Vincent], Benois-Pineau, J.[Jenny],
Perceptual modeling in the problem of active object recognition in visual scenes,
PR(56), No. 1, 2016, pp. 129-141.
Elsevier DOI 1604
Perceptual modeling BibRef

Karaman, S.[Svebor], Benois-Pineau, J.[Jenny], Megret, R.[Remi], Dovgalecs, V.[Vladislavs], Dartigues, J.F.[Jean-Francois], Gaestel, Y.[Yann],
Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases,
ICPR10(4113-4116).
IEEE DOI 1008
BibRef

Pinquier, J.[Julien], Karaman, S.[Svebor], Letoupin, L.[Laetitia], Guyot, P.[Patrice], Megret, R.[Remi], Benois-Pineau, J.[Jenny], Gaestel, Y.[Yann], Dartigues, J.F.[Jean-Francois],
Strategies for multiple feature fusion with Hierarchical HMM: Application to activity recognition from wearable audiovisual sensors,
ICPR12(3192-3195).
WWW Link. 1302
BibRef

Boujut, H.[Hugo], Benois-Pineau, J.[Jenny], Megret, R.[Remi],
Fusion of Multiple Visual Cues for Visual Saliency Extraction from Wearable Camera Settings with Strong Motion,
Concept12(III: 436-445).
Springer DOI 1210
BibRef

Stoian, A.[Andrei], Ferecatu, M.[Marin], Benois-Pineau, J.[Jenny], Crucianu, M.[Michel],
Fast Action Localization in Large-Scale Video Archives,
CirSysVideo(26), No. 10, October 2016, pp. 1917-1930.
IEEE DOI 1610
BibRef
Earlier:
Scalable action localization with kernel-space hashing,
ICIP15(257-261)
IEEE DOI 1512
Histograms. Action localization BibRef

Rituerto, A.[Alejandro],
Modeling the environment with egocentric vision systems,
ELCVIA(14), No. 3, 2015, pp. xx-yy.
DOI Link 1601
Thesis summary. BibRef

Rituerto, A.[Alejandro], Murillo, A.C.[Ana C.], Guerrero, J.J.[José J.],
3D Layout Propagation to Improve Object Recognition in Egocentric Videos,
ACVR14(839-852).
Springer DOI 1504
BibRef

Hong, J.H.[Jin-Hyuk], Ramos, J., Dey, A.K.,
Toward Personalized Activity Recognition Systems With a Semipopulation Approach,
HMS(46), No. 1, February 2016, pp. 101-112.
IEEE DOI 1602
Bayes methods BibRef

Damen, D.[Dima], Leelasawassuk, T.[Teesid], Mayol-Cuevas, W.[Walterio],
You-Do, I-Learn: Egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance,
CVIU(149), No. 1, 2016, pp. 98-112.
Elsevier DOI 1606
Video guidance BibRef

Damen, D.[Dima], Haines, O.[Osian], Leelasawassuk, T.[Teesid], Calway, A.D.[Andrew D.], Mayol-Cuevas, W.W.[Walterio W.],
Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage,
ACVR14(481-492).
Springer DOI 1504
BibRef
And: A1, A3, A2, A4, A5:
You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video,
BMVC14(xx-yy).
HTML Version. 1410
BibRef

Abebe, G.[Girmaw], Cavallaro, A.[Andrea], Parra, X.[Xavier],
Robust multi-dimensional motion features for first-person vision activity recognition,
CVIU(149), No. 1, 2016, pp. 229-248.
Elsevier DOI 1606
Human activity recognition BibRef

Zhu, Z., Satizábal, H.F., Blanke, U., Perez-Uribe, A., Tröster, G.,
Naturalistic Recognition of Activities and Mood Using Wearable Electronics,
AffCom(7), No. 3, July 2016, pp. 272-285.
IEEE DOI 1609
Acceleration BibRef

Gutierrez-Gomez, D.[Daniel], Guerrero, J.J.,
True scaled 6 DoF egocentric localisation with monocular wearable systems,
IVC(52), No. 1, 2016, pp. 178-194.
Elsevier DOI 1609
Monocular SLAM BibRef

Singh, S.[Suriya], Arora, C.[Chetan], Jawahar, C.V.,
Trajectory aligned features for first person action recognition,
PR(62), No. 1, 2017, pp. 45-55.
Elsevier DOI 1705
BibRef
Earlier:
First Person Action Recognition Using Deep Learned Descriptors,
CVPR16(2620-2628)
IEEE DOI 1612
Action and activity recognition BibRef

Vaca-Castano, G.[Gonzalo], Das, S.[Samarjit], Sousa, J.P.[Joao P.], Lobo, N.D.[Niels D.], Shah, M.[Mubarak],
Improved scene identification and object detection on egocentric vision of daily activities,
CVIU(156), No. 1, 2017, pp. 92-103.
Elsevier DOI 1702
Scene classification BibRef

Dimiccoli, M.[Mariella], Bolaños, M.[Marc], Talavera, E.[Estefania], Aghaei, M.[Maedeh], Nikolov, S.G.[Stavri G.], Radeva, P.[Petia],
SR-clustering: Semantic regularized clustering for egocentric photo streams segmentation,
CVIU(155), No. 1, 2017, pp. 55-69.
Elsevier DOI 1702
Temporal segmentation BibRef

del Molino, A.G., Tan, C., Lim, J.H., Tan, A.H.,
Summarization of Egocentric Videos: A Comprehensive Survey,
HMS(47), No. 1, February 2017, pp. 65-76.
IEEE DOI 1702
Survey, Egocentric Analysis. image segmentation BibRef

Brutti, A.[Alessio], Cavallaro, A.[Andrea],
Online Cross-Modal Adaptation for Audio-Visual Person Identification With Wearable Cameras,
HMS(47), No. 1, February 2017, pp. 40-51.
IEEE DOI 1702
audio-visual systems BibRef

Timmons, A.C., Chaspari, T., Han, S.C., Perrone, L., Narayanan, S.S., Margolin, G.,
Using Multimodal Wearable Technology to Detect Conflict among Couples,
Computer(50), No. 3, March 2017, pp. 50-59.
IEEE DOI 1704
Behavioral sciences BibRef

Conti, F.[Francesco], Palossi, D.[Daniele], Andri, R.[Renzo], Magno, M.[Michele], Benini, L.[Luca],
Accelerated Visual Context Classification on a Low-Power Smartwatch,
HMS(47), No. 1, February 2017, pp. 19-30.
IEEE DOI 1702
cameras BibRef

Ortis, A.[Alessandro], Farinella, G.M.[Giovanni M.], d'Amico, V.[Valeria], Addesso, L.[Luca], Torrisi, G.[Giovanni], Battiato, S.[Sebastiano],
Organizing egocentric videos of daily living activities,
PR(72), No. 1, 2017, pp. 207-218.
Elsevier DOI 1708
First person vision BibRef

Lee, J.W.[Jang-Won], Ryoo, M.S.[Michael S.],
Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression,
DeepLearnRV17(472-473)
IEEE DOI 1709
Detectors, Motor drives, Robot control, Robot kinematics, Training, Videos BibRef

Nigam, J., Rameshan, R.M.,
EgoTracker: Pedestrian Tracking with Re-identification in Egocentric Videos,
Odometry17(980-987)
IEEE DOI 1709
Cameras, Optical devices, Optical imaging, Support vector machines, Target tracking, Videos BibRef

Jain, S.[Samriddhi], Rameshan, R.M.[Renu M.], Nigam, A.[Aditya],
Object Triggered Egocentric Video Summarization,
CAIP17(II: 428-439).
Springer DOI 1708
BibRef

Chen, L.F.[Long-Fei], Kondo, K.[Kazuaki], Nakamura, Y.[Yuichi], Damen, D.[Dima], Mayol-Cuevas, W.W.[Walterio W.],
Hotspots detection for machine operation in egocentric vision,
MVA17(223-226)
DOI Link 1708
Cameras, Feature extraction, Magnetic heads, Manuals, Organizations, Shape, Visualization. Where people touch the machines -- buttons, etc. BibRef

Shi, J.,
Connecting the dots: Embodied visual perception from first-person cameras,
MVA17(231-233)
DOI Link 1708
Cameras, Gravity, Semantics, Three-dimensional displays, Trajectory, Visualization BibRef

Zhang, Y.C., Li, Y., Rehg, J.M.[James M.],
First-Person Action Decomposition and Zero-Shot Learning,
WACV17(121-129)
IEEE DOI 1609
Adaptation models, Cameras, Feature extraction, Image recognition, Training, Trajectory, Visualization BibRef

Li, Y.[Yin], Ye, Z.[Zhefan], Rehg, J.M.[James M.],
Delving into egocentric actions,
CVPR15(287-295)
IEEE DOI 1510
BibRef

Cartas, A.[Alejandro], Marín, J.[Juan], Radeva, P.[Petia], Dimiccoli, M.[Mariella],
Recognizing Activities of Daily Living from Egocentric Images,
IbPRIA17(87-95).
Springer DOI 1706
BibRef

Bokhari, S.Z.[Syed Zahir], Kitani, K.M.[Kris M.],
Long-Term Activity Forecasting Using First-Person Vision,
ACCV16(V: 346-360).
Springer DOI 1704
BibRef

Duane, A.[Aaron], Zhou, J.[Jiang], Little, S.[Suzanne], Gurrin, C.[Cathal], Smeaton, A.F.[Alan F.],
An Annotation System for Egocentric Image Media,
MMMod17(II: 442-445).
Springer DOI 1701
BibRef

Mathews, S.M.[Sherin M.], Kambhamettu, C.[Chandra], Barner, K.E.[Kenneth E.],
Maximum Correntropy Based Dictionary Learning Framework for Physical Activity Recognition Using Wearable Sensors,
ISVC16(II: 123-132).
Springer DOI 1701
BibRef

Zhou, Y.[Yang], Ni, B.B.[Bing-Bing], Hong, R.[Richang], Yang, X.K.[Xiao-Kang], Tian, Q.[Qi],
Cascaded Interactional Targeting Network for Egocentric Video Analysis,
CVPR16(1904-1913)
IEEE DOI 1612
BibRef

Zhou, Y.[Yang], Ni, B.B.[Bing-Bing], Hong, R.[Richang], Wang, M.[Meng], Tian, Q.[Qi],
Interaction part mining: A mid-level approach for fine-grained action recognition,
CVPR15(3323-3331)
IEEE DOI 1510
BibRef

Wray, M.[Michael], Moltisanti, D.[Davide], Mayol-Cuevas, W.[Walterio], Damen, D.[Dima],
SEMBED: Semantic Embedding of Egocentric Action Videos,
Egocentric16(I: 532-545).
Springer DOI 1611
BibRef

Al Safadi, E., Mohammad, F., Iyer, D., Smiley, B.J., Jain, N.K.,
Generalized activity recognition using accelerometer in wearable devices for IoT applications,
AVSS16(73-79)
IEEE DOI 1611
Accelerometers BibRef

Bai, C., Reibman, A.R.,
Characterizing distortions in first-person videos,
ICIP16(2440-2444)
IEEE DOI 1610
Cameras BibRef

Singh, K.K., Fatahalian, K., Efros, A.A.,
KrishnaCam: Using a longitudinal, single-person, egocentric dataset for scene understanding tasks,
WACV16(1-9)
IEEE DOI 1606
Cameras BibRef

Lin, Y.W.[Yue-Wei], Abdelfatah, K.[Kareem], Zhou, Y.J.[You-Jie], Fan, X.C.[Xiao-Chuan], Yu, H.K.[Hong-Kai], Qian, H.[Hui], Wang, S.[Song],
Co-Interest Person Detection from Multiple Wearable Camera Videos,
ICCV15(4426-4434)
IEEE DOI 1602
BibRef

Xiong, B., Kim, G., Sigal, L.,
Storyline Representation of Egocentric Videos with an Application to Story-Based Search,
ICCV15(4525-4533)
IEEE DOI 1602
Search problems; Semantics; TV; Training; Videos; Visualization; YouTube BibRef

Zhou, Y., Berg, T.L.,
Temporal Perception and Prediction in Ego-Centric Video,
ICCV15(4498-4506)
IEEE DOI 1602
Computational modeling BibRef

Furnari, A.[Antonino], Farinella, G.M.[Giovanni Maria], Battiato, S.[Sebastiano],
Recognizing Personal Locations From Egocentric Videos,
HMS(47), No. 1, February 2017, pp. 6-18.
IEEE DOI 1702
BibRef
Earlier:
Temporal Segmentation of Egocentric Videos to Highlight Personal Locations of Interest,
Egocentric16(I: 474-489).
Springer DOI 1611
BibRef
Earlier:
Recognizing Personal Contexts from Egocentric Images,
ACVR15(393-401)
IEEE DOI 1602
Biomedical monitoring BibRef

Grauman, K.[Kristen],
Action and Attention in First-person Vision,
BMVC15(xx-yy).
DOI Link 1601
BibRef

Misawa, H.[Hiroki], Obara, T.[Takashi], Iyatomi, H.[Hitoshi],
Automated Habit Detection System: A Feasibility Study,
ISVC15(II: 16-23).
Springer DOI 1601
BibRef

Betancourt, A.[Alejandro], Morerio, P.[Pietro], Marcenaro, L.[Lucio], Rauterberg, M.[Matthias], Regazzoni, C.S.[Carlo S.],
Filtering SVM frame-by-frame binary classification in a detection framework,
ICIP15(2552-2556)
IEEE DOI 1512
Bayesian Filtering, to deal with image changes. BibRef

Vaca-Castano, G.[Gonzalo], Das, S.[Samarjit], Sousa, J.P.[Joao P.],
Improving egocentric vision of daily activities,
ICIP15(2562-2566)
IEEE DOI 1512
Activities of Daily Living BibRef

Debarba, H.G., Molla, E., Herbelin, B., Boulic, R.,
Characterizing embodied interaction in First and Third Person Perspective viewpoints,
3DUI15(67-72)
IEEE DOI 1511
human computer interaction BibRef

Ryoo, M.S., Rothrock, B.[Brandon], Matthies, L.H.[Larry H.],
Pooled motion features for first-person videos,
CVPR15(896-904)
IEEE DOI 1510
BibRef

Rhinehart, N., Kitani, K.M.[Kris M.],
Learning Action Maps of Large Environments via First-Person Vision,
CVPR16(580-588)
IEEE DOI 1612
BibRef

Yonetani, R.[Ryo], Kitani, K.M.[Kris M.], Sato, Y.[Yoichi],
Recognizing Micro-Actions and Reactions from Paired Egocentric Videos,
CVPR16(2629-2638)
IEEE DOI 1612
BibRef
Earlier:
Ego-surfing first person videos,
CVPR15(5445-5454)
IEEE DOI 1510
BibRef

Poleg, Y.[Yair], Ephrat, A., Peleg, S.[Shmuel], Arora, C.[Chetan],
Compact CNN for indexing egocentric videos,
WACV16(1-9)
IEEE DOI 1606
Cameras BibRef

Poleg, Y.[Yair], Halperin, T.[Tavi], Arora, C.[Chetan], Peleg, S.[Shmuel],
EgoSampling: Fast-forward and stereo for egocentric videos,
CVPR15(4768-4776)
IEEE DOI 1510
BibRef

Rogez, G.[Gregory], Supancic, J.S.[James S.], Ramanan, D.[Deva],
First-person pose recognition using egocentric workspaces,
CVPR15(4325-4333)
IEEE DOI 1510
BibRef

Bolaños, M.[Marc], Garolera, M.[Maite], Radeva, P.[Petia],
Object Discovery Using CNN Features in Egocentric Videos,
IbPRIA15(67-74).
Springer DOI 1506
BibRef

Oliveira-Barra, G.[Gabriel], Dimiccoli, M.[Mariella], Radeva, P.[Petia],
Leveraging Activity Indexing for Egocentric Image Retrieval,
IbPRIA17(295-303).
Springer DOI 1706
BibRef

Talavera, E.[Estefania], Dimiccoli, M.[Mariella], Bolaños, M.[Marc], Aghaei, M.[Maedeh], Radeva, P.[Petia],
R-Clustering for Egocentric Video Segmentation,
IbPRIA15(327-336).
Springer DOI 1506
BibRef

Soran, B.[Bilge], Farhadi, A.[Ali], Shapiro, L.G.[Linda G.],
Generating Notifications for Missing Actions: Don't Forget to Turn the Lights Off!,
ICCV15(4669-4677)
IEEE DOI 1602
BibRef
Earlier:
Action Recognition in the Presence of One Egocentric and Multiple Static Cameras,
ACCV14(V: 178-193).
Springer DOI 1504
Cameras BibRef

Song, S.[Sibo], Chandrasekhar, V.[Vijay], Mandal, B., Li, L.Y.[Li-Yuan], Lim, J.H.[Joo-Hwee], Babu, G.S., San, P.P., Cheung, N.M.[Ngai-Man],
Multimodal Multi-Stream Deep Learning for Egocentric Activity Recognition,
Egocentric-C16(378-385)
IEEE DOI 1612
BibRef

Song, S.[Sibo], Chandrasekhar, V.[Vijay], Cheung, N.M.[Ngai-Man], Narayan, S.[Sanath], Li, L.Y.[Li-Yuan], Lim, J.H.[Joo-Hwee],
Activity Recognition in Egocentric Life-Logging Videos,
IMEV14(445-458).
Springer DOI 1504
BibRef

Zheng, K.[Kang], Lin, Y.[Yuewei], Zhou, Y.[Youjie], Salvi, D.[Dhaval], Fan, X.C.[Xiao-Chuan], Guo, D.[Dazhou], Meng, Z.[Zibo], Wang, S.[Song],
Video-Based Action Detection Using Multiple Wearable Cameras,
ChaLearn14(727-741).
Springer DOI 1504
BibRef

Lin, Y.[Yizhou], Hua, G.[Gang], Mordohai, P.[Philippos],
Egocentric Object Recognition Leveraging the 3D Shape of the Grasping Hand,
ACVR14(746-762).
Springer DOI 1504
BibRef

Yan, Y.[Yan], Ricci, E.[Elisa], Rostamzadeh, N.[Negar], Sebe, N.[Nicu],
It's all about habits: Exploiting multi-task clustering for activities of daily living analysis,
ICIP14(1071-1075)
IEEE DOI 1502
Algorithm design and analysis BibRef

Moghimi, M.[Mohammad], Wu, W.[Wanmin], Chen, J.[Jacqueline], Godbole, S.[Suneeta], Marshall, S.[Simon], Kerr, J.[Jacqueline], Belongie, S.J.[Serge J.],
Analyzing sedentary behavior in life-logging images,
ICIP14(1011-1015)
IEEE DOI 1502
Accuracy BibRef

Zhou, J.[Jiang], Duane, A.[Aaron], Albatal, R.[Rami], Gurrin, C.[Cathal], Johansen, D.[Dag],
Wearable Cameras for Real-Time Activity Annotation,
MMMod15(II: 319-322).
Springer DOI 1501
Personal (Big) Data Modeling for Information Access & Retrieval BibRef

Wang, P.[Peng], Smeaton, A.F.[Alan F.], Gurrin, C.[Cathal],
Factorizing Time-Aware Multi-way Tensors for Enhancing Semantic Wearable Sensing,
MMMod15(I: 571-582).
Springer DOI 1501
BibRef

Frinken, V.[Volkmar], Iwakiri, Y.[Yutaro], Ishida, R.[Ryosuke], Fujisaki, K.[Kensho], Uchida, S.[Seiichi],
Improving Point of View Scene Recognition by Considering Textual Data,
ICPR14(2966-2971)
IEEE DOI 1412
Cameras BibRef

Xia, L.[Lu], Gori, I.[Ilaria], Aggarwal, J.K., Ryoo, M.S.,
Robot-centric Activity Recognition from First-Person RGB-D Videos,
WACV15(357-364)
IEEE DOI 1503
Cameras BibRef

Iwashita, Y.[Yumi], Takamine, A.[Asamichi], Kurazume, R.[Ryo], Ryoo, M.S.,
First-Person Animal Activity Recognition from Egocentric Videos,
ICPR14(4310-4315)
IEEE DOI 1412
Animals BibRef

Matsuo, K.[Kenji], Yamada, K.[Kentaro], Ueno, S.[Satoshi], Naito, S.[Sei],
An Attention-Based Activity Recognition for Egocentric Video,
Egocentric14(565-570)
IEEE DOI 1409
BibRef

Alletto, S.[Stefano], Serra, G.[Giuseppe], Calderara, S.[Simone], Cucchiara, R.[Rita],
Head Pose Estimation in First-Person Camera Views,
ICPR14(4188-4193)
IEEE DOI 1412
Cameras BibRef

Alletto, S.[Stefano], Serra, G.[Giuseppe], Calderara, S.[Simone], Solera, F.[Francesco], Cucchiara, R.[Rita],
From Ego to Nos-Vision: Detecting Social Relationships in First-Person Views,
Egocentric14(594-599)
IEEE DOI 1409
Ego-vision; f-formation; social interaction BibRef

Moghimi, M.[Mohammad], Azagra, P.[Pablo], Montesano, L.[Luis], Murillo, A.C.[Ana C.], Belongie, S.J.[Serge J.],
Experiments on an RGB-D Wearable Vision System for Egocentric Activity Recognition,
Egocentric14(611-617)
IEEE DOI 1409
BibRef

Narayan, S.[Sanath], Ramakrishnan, K.R.[Kalpathi R.],
A Cause and Effect Analysis of Motion Trajectories for Modeling Actions,
CVPR14(2633-2640)
IEEE DOI 1409
Action Recognition; Granger Causality; Interaction BibRef

Narayan, S.[Sanath], Kankanhalli, M.S.[Mohan S.], Ramakrishnan, K.R.[Kalpathi R.],
Action and Interaction Recognition in First-Person Videos,
Egocentric14(526-532)
IEEE DOI 1409
First-person video; Interaction recognition BibRef

Tan, C.[Cheston], Goh, H.[Hanlin], Chandrasekhar, V.[Vijay], Li, L.Y.[Li-Yuan], Lim, J.H.[Joo-Hwee],
Understanding the Nature of First-Person Videos: Characterization and Classification Using Low-Level Features,
Egocentric14(549-556)
IEEE DOI 1409
BibRef

McCandless, T.[Tomas], Grauman, K.[Kristen],
Object-Centric Spatio-Temporal Pyramids for Egocentric Activity Recognition,
BMVC13(xx-yy).
DOI Link 1402
BibRef

Fathi, A.[Alireza], Rehg, J.M.[James M.],
Modeling Actions through State Changes,
CVPR13(2579-2586)
IEEE DOI 1309
Action Recognition; Egocentric; Object; Semi-Supervised Learning; State BibRef

Fathi, A.[Alireza], Farhadi, A.[Ali], Rehg, J.M.[James M.],
Understanding egocentric activities,
ICCV11(407-414).
IEEE DOI 1201
BibRef

Fathi, A.[Alireza], Ren, X.F.[Xiao-Feng], Rehg, J.M.[James M.],
Learning to recognize objects in egocentric activities,
CVPR11(3281-3288).
IEEE DOI 1106
BibRef

Fathi, A.[Alireza], Mori, G.[Greg],
Action recognition by learning mid-level motion features,
CVPR08(1-8).
IEEE DOI 0806
BibRef

Zhan, K.[Kai], Ramos, F., Faux, S.,
Activity recognition from a wearable camera,
ICARCV12(365-370).
IEEE DOI 1304
BibRef

Ma, M., Fan, H., Kitani, K.M.[Kris M.],
Going Deeper into First-Person Activity Recognition,
CVPR16(1894-1903)
IEEE DOI 1612
BibRef

Ogaki, K.[Keisuke], Kitani, K.M.[Kris M.], Sugano, Y.[Yusuke], Sato, Y.[Yoichi],
Coupling eye-motion and ego-motion features for first-person activity recognition,
Egocentric12(1-7).
IEEE DOI 1207
BibRef

Behera, A.[Ardhendu], Hogg, D.C.[David C.], Cohn, A.G.[Anthony G.],
Egocentric Activity Monitoring and Recovery,
ACCV12(III: 519-532).
Springer DOI 1304
BibRef
Earlier: A1, A3, A2:
Workflow Activity Monitoring Using Dynamics of Pair-Wise Qualitative Spatial Relations,
MMMod12(196-209).
Springer DOI 1201
BibRef

Sundaram, S.[Sudeep], Mayol-Cuevas, W.W.[Walterio W.],
Egocentric Visual Event Classification with Location-Based Priors,
ISVC10(II: 596-605).
Springer DOI 1011
BibRef
Earlier:
High level activity recognition using low resolution wearable vision,
Egocentric09(25-32).
IEEE DOI 0906
BibRef

Spriggs, E.H.[Ekaterina H.], de la Torre, F.[Fernando], Hebert, M.[Martial],
Temporal segmentation and activity classification from first-person sensing,
Egocentric09(17-24).
IEEE DOI 0906
BibRef

Ren, X.F.[Xiao-Feng], Philipose, M.[Matthai],
Egocentric recognition of handled objects: Benchmark and analysis,
Egocentric09(1-8).
IEEE DOI 0906
BibRef

Hanheide, M.[Marc], Hofemann, N.[Nils], Sagerer, G.[Gerhard],
Action Recognition in a Wearable Assistance System,
ICPR06(II: 1254-1258).
IEEE DOI 0609
BibRef

Yang, A.Y.[Allen Y.], Iyengar, S.[Sameer], Kuryloski, P.[Philip], Jafari, R.[Roozbeh],
Distributed segmentation and classification of human actions using a wearable motion sensor network,
CVPR4HB08(1-8).
IEEE DOI 0806
BibRef

Chapter on Motion -- Feature-Based, Long Range, Motion and Structure Estimates, Tracking, Surveillance, Activities continues in
Human Action Recognition in Still Images, Single Images.


Last update: Nov 11, 2017 at 13:31:57