Nguyen, A.[Anthony],
Chandran, V.[Vinod],
Sridharan, S.[Sridha],
Gaze tracking for region of interest coding in JPEG 2000,
SP:IC(21), No. 5, June 2006, pp. 359-377.
Elsevier DOI
0606
BibRef
Earlier:
Visual attention based ROI maps from gaze tracking data,
ICIP04(V: 3495-3498).
IEEE DOI
0505
Image coding; Importance map; JPEG 2000; Region of interest
BibRef
Smith, K.[Kevin],
Ba, S.O.[Sileye O.],
Odobez, J.M.[Jean-Marc],
Gatica-Perez, D.[Daniel],
Tracking the Visual Focus of Attention for a Varying Number of
Wandering People,
PAMI(30), No. 7, July 2008, pp. 1212-1229.
IEEE DOI
0806
BibRef
Sun, Y.[Yaoru],
Fisher, R.B.[Robert B.],
Object-based visual attention for computer vision,
AI(146), No. 1, May 2003, pp. 77-123.
Elsevier DOI
0306
BibRef
Sun, Y.[Yaoru],
Fisher, R.B.[Robert B.],
Wang, F.[Fang],
Gomes, H.M.[Herman Martins],
A computer vision model for visual-object-based attention and eye
movements,
CVIU(112), No. 2, November 2008, pp. 126-142.
Elsevier DOI
0811
Visual-object-based competition; Space-based attention; Object-based
attention; Group-based attention; Foveated imaging; Attention-driven
eye movements
BibRef
Marat, S.[Sophie],
Phuoc, T.H.[Tien Ho],
Granjon, L.[Lionel],
Guyader, N.[Nathalie],
Pellerin, D.[Denis],
Guérin-Dugué, A.[Anne],
Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short
Videos,
IJCV(82), No. 3, May 2009, pp. xx-yy.
Springer DOI
0903
Predicts eye movement in video viewing, i.e., what is salient to the viewer.
BibRef
Le Meur, O.,
Chevet, J.C.,
Relevance of a Feed-Forward Model of Visual Attention for Goal-Oriented
and Free-Viewing Tasks,
IP(19), No. 11, November 2010, pp. 2801-2813.
IEEE DOI
1011
Analysis of eye-tracking results. Attention.
BibRef
Le Meur, O.[Olivier],
Le Callet, P.[Patrick],
What we see is most likely to be what matters:
Visual attention and applications,
ICIP09(3085-3088).
IEEE DOI
0911
BibRef
Li, Z.C.[Zhi-Cheng],
Qin, S.Y.[Shi-Yin],
Itti, L.[Laurent],
Visual attention guided bit allocation in video compression,
IVC(29), No. 1, January 2011, pp. 1-14.
Elsevier DOI
1011
Visual attention; Video compression; Eye-tracking; Video subjective quality
BibRef
Garcia-Diaz, A.[Antón],
Fdez-Vidal, X.R.[Xosé R.],
Pardo, X.M.[Xosé M.],
Dosil, R.[Raquel],
Saliency from hierarchical adaptation through decorrelation and
variance normalization,
IVC(30), No. 1, January 2012, pp. 51-64.
Elsevier DOI
1202
Saliency; Bottom-up; Eye fixations; Decorrelation; Whitening; Visual attention
BibRef
Bonev, B.,
Chuang, L.L.,
Escolano, F.,
How do image complexity, task demands and looking biases influence
human gaze behavior?,
PRL(34), No. 7, 1 May 2013, pp. 723-730.
Elsevier DOI
1303
Perceptual search; Attention guidance in scenes; Image complexity
BibRef
Kimura, A.[Akisato],
Yonetani, R.[Ryo],
Hirayama, T.[Takatsugu],
Computational Models of Human Visual Attention and Their Implementations:
A Survey,
IEICE(E96-D), No. 3, March 2013, pp. 562-578.
WWW Link.
1303
Survey, Attention.
BibRef
Rajashekar, U.,
van der Linde, I.,
Bovik, A.C.,
Cormack, L.K.,
GAFFE: A Gaze-Attentive Fixation Finding Engine,
IP(17), No. 4, April 2008, pp. 564-573.
IEEE DOI
0803
BibRef
Earlier:
Foveated Analysis and Selection of Visual Fixations in Natural Scenes,
ICIP06(453-456).
IEEE DOI
0610
BibRef
Earlier: A1, A3, A4, Only:
Image features that draw fixations,
ICIP03(III: 313-316).
IEEE DOI
0312
BibRef
Yucel, Z.,
Salah, A.A.,
Mericli, C.,
Mericli, T.,
Valenti, R.,
Gevers, T.,
Joint Attention by Gaze Interpolation and Saliency,
Cyber(43), No. 3, 2013, pp. 829-842.
IEEE DOI
1307
gaze direction interpolation; head pose; gaze following
BibRef
Asteriadis, S.[Stylianos],
Karpouzis, K.[Kostas],
Kollias, S.[Stefanos],
Visual Focus of Attention in Non-calibrated Environments using Gaze
Estimation,
IJCV(107), No. 3, May 2014, pp. 293-316.
Springer DOI
1404
BibRef
Earlier:
Robust validation of Visual Focus of Attention using adaptive fusion of
head and eye gaze patterns,
HCI11(414-421).
IEEE DOI
1201
BibRef
Asteriadis, S.[Stylianos],
Tzouveli, P.[Paraskevi],
Karpouzis, K.[Kostas],
Kollias, S.[Stefanos],
A non-intrusive method for user focus of attention estimation in front
of a computer monitor,
FG08(1-2).
IEEE DOI
0809
BibRef
Apostolakis, K.C.[Konstantinos C.],
Daras, P.[Petros],
A framework for implicit human-centered image tagging inspired by
attributed affect,
VC(30), No. 10, October 2014, pp. 1093-1106.
Springer DOI
1410
Monitors user gaze during annotation work to find what is important.
BibRef
Sun, X.S.[Xiao-Shuai],
Yao, H.X.[Hong-Xun],
Ji, R.R.[Rong-Rong],
Liu, X.M.,
Toward Statistical Modeling of Saccadic Eye-Movement and Visual
Saliency,
IP(23), No. 11, November 2014, pp. 4649-4662.
IEEE DOI
1410
Analytical models
BibRef
Earlier: A1, A2, A3, Only:
What are we looking for: Towards statistical modeling of saccadic eye
movements and visual saliency,
CVPR12(1552-1559).
IEEE DOI
1208
BibRef
He, X.[Xin],
Samuelson, F.[Frank],
Zeng, R.P.[Rong-Ping],
Sahiner, B.[Berkman],
Discovering intrinsic properties of human observers' visual search
and mathematical observers' scanning,
JOSA-A(31), No. 11, November 2014, pp. 2495-2510.
DOI Link
1411
Detection; Vision modeling; Psychophysics
BibRef
Amano, K.[Kinjiro],
Foster, D.H.[David H.],
Influence of local scene color on fixation position in visual search,
JOSA-A(31), No. 4, April 2014, pp. A254-A262.
DOI Link
1404
Color vision; Detection; Vision-eye movements
BibRef
Shen, C.,
Huang, X.,
Zhao, Q.,
Predicting Eye Fixations on Webpage With an Ensemble of Early
Features and High-Level Representations from Deep Network,
MultMed(17), No. 11, November 2015, pp. 2084-2093.
IEEE DOI
1511
Computational modeling
BibRef
Sheikhi, S.[Samira],
Odobez, J.M.[Jean-Marc],
Combining dynamic head pose-gaze mapping with the robot
conversational state for attention recognition in human-robot
interactions,
PRL(66), No. 1, 2015, pp. 81-90.
Elsevier DOI
1511
Attention recognition
BibRef
Kruthiventi, S.S.S.,
Ayush, K.,
Babu, R.V.,
DeepFix:
A Fully Convolutional Neural Network for Predicting Human Eye Fixations,
IP(26), No. 9, September 2017, pp. 4446-4456.
IEEE DOI
1708
learning (artificial intelligence), neural nets,
DeepFix, bottom-up mechanism,
fully convolutional neural network, hand-crafted features,
human eye fixation prediction,
human visual attention mechanism,
location-biased convolutional layer,
location-dependent pattern modelling, network layers,
neuroscience, receptive fields, saliency data sets, saliency map,
saliency prediction, Biological neural networks,
Computational modeling, Convolution, Feature extraction,
Predictive models, Semantics, Visualization, Saliency prediction,
convolutional neural network, deep learning, eye, fixations
BibRef
Wang, K.[Kang],
Ji, Q.A.[Qi-Ang],
3D gaze estimation without explicit personal calibration,
PR(79), 2018, pp. 216-227.
Elsevier DOI
1804
Gaze estimation, Implicit calibration, Natural constraints, Human computer interaction
BibRef
Chen, J.X.[Ji-Xu],
Ji, Q.A.[Qi-Ang],
Probabilistic gaze estimation without active personal calibration,
CVPR11(609-616).
IEEE DOI
1106
BibRef
And:
3D gaze estimation with a single camera without IR illumination,
ICPR08(1-4).
IEEE DOI
0812
BibRef
Maio, W.[William],
Chen, J.X.[Ji-Xu],
Ji, Q.A.[Qi-Ang],
Constraint-based gaze estimation without active calibration,
FG11(627-631).
IEEE DOI
1103
BibRef
Han, J.,
Zhang, D.,
Wen, S.,
Guo, L.,
Liu, T.,
Li, X.,
Two-Stage Learning to Predict Human Eye Fixations via SDAEs,
Cyber(46), No. 2, February 2016, pp. 487-498.
IEEE DOI
1601
Computational modeling
BibRef
Cheon, M.[Manri],
Lee, J.S.[Jong-Seok],
Temporal resolution vs. visual saliency in videos:
Analysis of gaze patterns and evaluation of saliency models,
SP:IC(39, Part B), No. 1, 2015, pp. 405-417.
Elsevier DOI
1512
Temporal scalability
BibRef
Zhang, J.M.[Jian-Ming],
Sclaroff, S.[Stan],
Exploiting Surroundedness for Saliency Detection: A Boolean Map
Approach,
PAMI(38), No. 5, May 2016, pp. 889-902.
IEEE DOI
1604
BibRef
Earlier:
Saliency Detection: A Boolean Map Approach,
ICCV13(153-160).
IEEE DOI
1403
Feature extraction; eye fixation; salient object detection; visual saliency
BibRef
Le Meur, O.,
Coutrot, A.,
Liu, Z.,
Rämä, P.,
Le Roch, A.,
Helo, A.,
Visual Attention Saccadic Models Learn to Emulate Gaze Patterns From
Childhood to Adulthood,
IP(26), No. 10, October 2017, pp. 4777-4789.
IEEE DOI
1708
gaze tracking, image sequences,
age-dependent saccadic model, age-dependent scan paths,
gaze patterns,
visual attention saccadic models, Adaptation models,
Computational modeling, Dispersion, Hidden Markov models,
Observers, Predictive models, Visualization, Saccadic model
BibRef
Qiu, W.,
Gao, X.,
Han, B.,
Eye Fixation Assisted Video Saliency Detection via Total
Variation-Based Pairwise Interaction,
IP(27), No. 10, October 2018, pp. 4724-4739.
IEEE DOI
1808
eye, feature extraction, image colour analysis, image segmentation,
image sequences, learning (artificial intelligence),
pairwise interaction
BibRef
Cornia, M.[Marcella],
Baraldi, L.[Lorenzo],
Serra, G.,
Cucchiara, R.[Rita],
Predicting Human Eye Fixations via an LSTM-Based Saliency Attentive
Model,
IP(27), No. 10, October 2018, pp. 5142-5154.
IEEE DOI
1808
eye, Gaussian processes, neural nets, object detection,
Gaussian functions, neural attentive mechanisms, saliency maps,
deep learning
BibRef
Xu, M.,
Ren, Y.,
Wang, Z.,
Liu, J.,
Tao, X.,
Saliency Detection in Face Videos: A Data-Driven Approach,
MultMed(20), No. 6, June 2018, pp. 1335-1349.
IEEE DOI
1805
Face, Feature extraction, Gaze tracking, Mouth, Object detection,
Videos, Visualization, Face video, Gaussian mixture model, visual attention
BibRef
Zhang, M.M.[Meng-Mi],
Ma, K.T.[Keng Teck],
Lim, J.H.[Joo Hwee],
Zhao, Q.[Qi],
Feng, J.S.[Jia-Shi],
Anticipating Where People will Look Using Adversarial Networks,
PAMI(41), No. 8, August 2019, pp. 1783-1796.
IEEE DOI
1907
BibRef
Earlier:
Deep Future Gaze:
Gaze Anticipation on Egocentric Videos Using Adversarial Networks,
CVPR17(3539-3548).
IEEE DOI
1711
BibRef
Earlier: A1, A2, A3, A4, Only:
Foveated neural network: Gaze prediction on egocentric videos,
ICIP17(3720-3724).
IEEE DOI
1803
Task analysis, Predictive models, Streaming media, Generators,
Visualization, Generative adversarial networks, Training,
Solid modeling, Videos, Convolution, Feature extraction,
Image resolution, Neural networks, Egocentric videos, Fovea,
visual attention.
BibRef
Xia, C.,
Han, J.,
Qi, F.,
Shi, G.,
Predicting Human Saccadic Scanpaths Based on Iterative Representation
Learning,
IP(28), No. 7, July 2019, pp. 3502-3515.
IEEE DOI
1906
Bayes methods, biomechanics, eye, feature extraction,
learning (artificial intelligence), object detection,
representation learning
BibRef
Sun, W.J.[Wan-Jie],
Chen, Z.Z.[Zhen-Zhong],
Wu, F.[Feng],
Visual Scanpath Prediction Using IOR-ROI Recurrent Mixture Density
Network,
PAMI(43), No. 6, June 2021, pp. 2101-2118.
IEEE DOI
2106
Eye movements when scanning the visual field to acquire visual information.
Visualization, Predictive models, Computational modeling,
Feature extraction, Hidden Markov models, Solid modeling,
mixture density network
BibRef
Zhang, L.M.[Lu-Ming],
Liang, R.H.[Rong-Hua],
Yin, J.W.[Jian-Wei],
Zhang, D.X.[Dong-Xiang],
Shao, L.[Ling],
Scene Categorization by Deeply Learning Gaze Behavior in a
Semisupervised Context,
Cyber(51), No. 8, August 2021, pp. 4265-4276.
IEEE DOI
2108
Semantics, Feature extraction, Sparse matrices, Visualization,
Training, Kernel, Image recognition, Deep model, machine learning,
semisupervised
BibRef
Ren, D.[Dakai],
Chen, J.Z.[Jia-Zhong],
Zhong, J.[Jian],
Lu, Z.M.[Zhao-Ming],
Jia, T.[Tao],
Li, Z.Y.[Zong-Yi],
Gaze estimation via bilinear pooling-based attention networks,
JVCIR(81), 2021, pp. 103369.
Elsevier DOI
2112
Gaze tracking, Deep learning, Bilinear pooling, Attention
BibRef
Dai, L.H.[Li-Hong],
Liu, J.G.[Jin-Guo],
Ju, Z.J.[Zhao-Jie],
Binocular Feature Fusion and Spatial Attention Mechanism Based Gaze
Tracking,
HMS(52), No. 2, April 2022, pp. 302-311.
IEEE DOI
2203
Gaze tracking, Feature extraction, Databases, Solid modeling, Faces,
Convolution, Predictive models, Attention mechanism
BibRef
Lai, Q.X.[Qiu-Xia],
Zeng, A.[Ailing],
Wang, Y.[Ye],
Cao, L.H.[Li-Hong],
Li, Y.[Yu],
Xu, Q.[Qiang],
Self-Supervised Video Representation Learning via Capturing Semantic
Changes Indicated by Saccades,
CirSysVideo(34), No. 8, August 2024, pp. 6634-6645.
IEEE DOI
2408
Semantics, Spatiotemporal phenomena, Visualization, Task analysis,
Representation learning, Neuroscience, Unsupervised learning,
bio-inspired
BibRef
Sun, Y.[Yinan],
Min, X.K.[Xiong-Kuo],
Duan, H.Y.[Hui-Yu],
Zhai, G.T.[Guang-Tao],
How is Visual Attention Influenced by Text Guidance? Database and
Model,
IP(33), 2024, pp. 5392-5407.
IEEE DOI Code:
WWW Link.
2410
Visualization, Databases, Predictive models, Visual databases,
Feature extraction, Gaze tracking, Computational modeling, multimodal fusion
BibRef
Song, X.Y.[Xin-Yuan],
Guo, S.X.[Shao-Xiang],
Yu, Z.[Zhenfu],
Dong, J.Y.[Jun-Yu],
An Encoder-Decoder Network with Residual and Attention Blocks for
Full-Face 3D Gaze Estimation,
ICIVC22(713-717).
IEEE DOI
2301
Convolution, Computational modeling, Estimation, Task analysis,
Spatial resolution, Image reconstruction, gaze estimation,
attention blocks
BibRef
Murthy, L.R.D.,
Biswas, P.[Pradipta],
Appearance-based Gaze Estimation using Attention and Difference
Mechanism,
Gaze21(3137-3146).
IEEE DOI
2109
Deep learning, Adaptation models,
Estimation, Lighting, Feature extraction
BibRef
Ralekar, C.[Chetan],
Gandhi, T.K.[Tapan Kumar],
Chaudhury, S.[Santanu],
Collaborative Human Machine Attention Module for Character
Recognition,
ICPR21(9874-9880).
IEEE DOI
2105
Uses eye-tracking results to choose where to attend.
Visualization, Analytical models,
Computational modeling, Machine vision, Collaboration,
eye-tracking
BibRef
Sümer, Ö.,
Gerjets, P.,
Trautwein, U.,
Kasneci, E.,
Attention Flow: End-to-End Joint Attention Estimation,
WACV20(3316-3325).
IEEE DOI
2006
Estimation, Videos, Face, Task analysis, Visualization, Psychology
BibRef
Berga, D.,
Vidal, X.R.F.,
Otazu, X.,
Pardo, X.M.,
SID4VAM: A Benchmark Dataset With Synthetic Images for Visual
Attention Modeling,
ICCV19(8788-8797).
IEEE DOI
2004
Dataset, Gaze Tracking. Learning (artificial intelligence), neural nets,
SID4VAM, visual attention modeling, saliency metrics, Benchmark testing
BibRef
Cordel, II, M.O.[Macario O.],
Fan, S.J.[Shao-Jing],
Shen, Z.Q.[Zhi-Qi],
Kankanhalli, M.S.[Mohan S.],
Emotion-Aware Human Attention Prediction,
CVPR19(4021-4030).
IEEE DOI
2002
BibRef
Chen, J.,
Li, Q.,
Wu, W.,
Ling, H.,
Wu, L.,
Zhang, B.,
Li, P.,
Saliency Detection via Topological Feature Modulated Deep Learning,
ICIP19(1630-1634).
IEEE DOI
1910
Topological feature, Convolutional neural network, Eye fixation,
Saliency detection
BibRef
Huang, Y.F.[Yi-Fei],
Cai, M.J.[Min-Jie],
Li, Z.Q.[Zhen-Qiang],
Sato, Y.[Yoichi],
Predicting Gaze in Egocentric Video by Learning Task-Dependent
Attention Transition,
ECCV18(II: 789-804).
Springer DOI
1810
BibRef
Zhang, R.H.[Ruo-Han],
Liu, Z.D.[Zhuo-De],
Zhang, L.X.[Lu-Xin],
Whritner, J.A.[Jake A.],
Muller, K.S.[Karl S.],
Hayhoe, M.M.[Mary M.],
Ballard, D.H.[Dana H.],
AGIL: Learning Attention from Human for Visuomotor Tasks,
ECCV18(XI: 692-707).
Springer DOI
1810
Where the person is looking, for a robot learning a task from a person.
BibRef
He, X.C.[Xuan-Chao],
Liu, Z.J.[Zhe-Jun],
A Novel Way of Estimating a User's Focus of Attention in a Virtual
Environment,
VAMR18(I: 71-81).
Springer DOI
1807
BibRef
Ngo, T.,
Manjunath, B.S.,
Saccade gaze prediction using a recurrent neural network,
ICIP17(3435-3439).
IEEE DOI
1803
Computational modeling, Feature extraction, Gaze tracking,
Hidden Markov models, Logic gates, Training, Visualization,
scanpath
BibRef
Ren, Y.,
Wang, Z.,
Xu, M.,
Dong, H.,
Li, S.,
Learning Dynamic GMM for Attention Distribution on Single-Face Videos,
MotionRep17(1632-1641).
IEEE DOI
1709
Databases, Face, Gaze tracking, Mouth, Pattern recognition
BibRef
Jensen, R.R.[Rasmus R.],
Stets, J.D.[Jonathan D.],
Suurmets, S.[Seidi],
Clement, J.[Jesper],
Aanæs, H.[Henrik],
Wearable Gaze Trackers: Mapping Visual Attention in 3D,
SCIA17(I: 66-76).
Springer DOI
1706
BibRef
Kruthiventi, S.S.S.,
Gudisa, V.,
Dholakiya, J.H.,
Babu, R.V.,
Saliency Unified: A Deep Architecture for simultaneous Eye Fixation
Prediction and Salient Object Segmentation,
CVPR16(5781-5790).
IEEE DOI
1612
BibRef
Volokitin, A.,
Gygli, M.,
Boix, X.,
Predicting When Saliency Maps are Accurate and Eye Fixations
Consistent,
CVPR16(544-552).
IEEE DOI
1612
BibRef
Kübler, T.[Thomas],
Fuhl, W.[Wolfgang],
Rosenberg, R.[Raphael],
Rosenstiel, W.[Wolfgang],
Kasneci, E.[Enkelejda],
Novel Methods for Analysis and Visualization of Saccade Trajectories,
CVAA16(I: 783-797).
Springer DOI
1611
BibRef
Xu, M.[Mai],
Ren, Y.[Yun],
Wang, Z.[Zulin],
Learning to Predict Saliency on Face Images,
ICCV15(3907-3915).
IEEE DOI
1602
Face. Based on eye tracking data over face images.
BibRef
Chi, H.Y.[Heng-Yu],
Cheng, W.H.[Wen-Huang],
You, C.W.[Chuang-Wen],
Chen, M.S.[Ming-Syan],
What Catches Your Eyes as You Move Around? On the Discovery of
Interesting Regions in the Street,
MMMod16(I: 767-779).
Springer DOI
1601
BibRef
Borji, A.[Ali],
Tavakoli, H.R.[Hamed R.],
Sihite, D.N.[Dicky N.],
Itti, L.[Laurent],
Analysis of Scores, Datasets, and Models in Visual Saliency
Prediction,
ICCV13(921-928).
IEEE DOI
1403
eye movements, model benchmarking; saliency; visual attention
BibRef
Riche, N.[Nicolas],
Duvinage, M.[Matthieu],
Mancas, M.[Matei],
Gosselin, B.[Bernard],
Dutoit, T.[Thierry],
Saliency and Human Fixations:
State-of-the-Art and Study of Comparison Metrics,
ICCV13(1153-1160).
IEEE DOI
1403
Human eye fixations; Metrics; Saliency maps; Validation
BibRef
Martinez, F.,
Carbone, A.,
Pissaloux, E.,
Combining first-person and third-person gaze for attention
recognition,
FG13(1-6).
IEEE DOI
1309
behavioural sciences
BibRef
Fritz, G.[Gerald],
Paletta, L.[Lucas],
Semantic analysis of human visual attention in mobile eye tracking
applications,
ICIP10(4565-4568).
IEEE DOI
1009
BibRef
Liang, Z.[Zhen],
Fu, H.[Hong],
Chi, Z.[Zheru],
Feng, D.D.[David Dagan],
Refining a region based attention model using eye tracking data,
ICIP10(1105-1108).
IEEE DOI
1009
BibRef
Doshi, A.[Anup],
Trivedi, M.M.[Mohan M.],
Attention estimation by simultaneous observation of viewer and view,
CVPR4HB10(21-27).
IEEE DOI
1006
BibRef
Earlier:
Head and gaze dynamics in visual attention and context learning,
VCL-ViSU09(77-84).
IEEE DOI
0906
BibRef
Nataraju, S.[Sunaad],
Balasubramanian, V.[Vineeth],
Panchanathan, S.[Sethuraman],
Learning attention based saliency in videos from human eye movements,
WMVC09(1-6).
IEEE DOI
0912
BibRef
Kawato, S.J.[Shin-Jiro],
Utsumi, A.[Akira],
Abe, S.J.[Shin-Ji],
Gaze Direction Estimation with a Single Camera Based on Four Reference
Points and Three Calibration Images,
ACCV06(I:419-428).
Springer DOI
0601
BibRef
Earlier: A2, A1, A3:
Attention Monitoring Based on Temporal Signal-Behavior Structures,
CVHCI05(100).
Springer DOI
0601
BibRef
Gu, E.[Erdan],
Wang, J.B.[Jing-Bin],
Badler, N.I.,
Generating Sequence of Eye Fixations Using Decision-theoretic Attention
Model,
AttenPerf05(III: 92-92).
IEEE DOI
0507
BibRef
Soliman, M.,
Tavakoli, H.R.,
Laaksonen, J.,
Towards gaze-based video annotation,
IPTA16(1-5).
IEEE DOI
1703
computer vision
BibRef
Liu, H.Y.[Hui-Ying],
Xu, D.[Dong],
Huang, Q.M.[Qing-Ming],
Li, W.[Wen],
Xu, M.[Min],
Lin, S.[Stephen],
Semantically-Based Human Scanpath Estimation with HMMs,
ICCV13(3232-3239).
IEEE DOI
1403
Attention; Gaze shift; Hidden Markov Model; Levy flight; Saliency
BibRef
Park, H.S.[Hyun Soo],
Jain, E.[Eakta],
Sheikh, Y.[Yaser],
Predicting Primary Gaze Behavior Using Social Saliency Fields,
ICCV13(3503-3510).
IEEE DOI
1403
Gaze prediction; Social scene understanding
BibRef
Shi, T.[Tao],
Sugimoto, A.[Akihiro],
Video Saliency Modulation in the HSI Color Space for Drawing Gaze,
PSIVT13(206-219).
Springer DOI
1402
BibRef
Yun, K.[Kiwon],
Peng, Y.F.[Yi-Fan],
Samaras, D.[Dimitris],
Zelinsky, G.J.[Gregory J.],
Berg, T.L.[Tamara L.],
Studying Relationships between Human Gaze, Description, and Computer
Vision,
CVPR13(739-746).
IEEE DOI
1309
BibRef
Leifman, G.,
Rudoy, D.[Dmitry],
Swedish, T.,
Bayro-Corrochano, E.,
Raskar, R.,
Learning Gaze Transitions from Depth to Improve Video Saliency
Estimation,
ICCV17(1707-1716).
IEEE DOI
1802
convolution, gaze tracking, image colour analysis,
learning (artificial intelligence), neural nets,
Visualization
BibRef
Rudoy, D.[Dmitry],
Goldman, D.B.[Dan B.],
Shechtman, E.[Eli],
Zelnik-Manor, L.[Lihi],
Learning Video Saliency from Human Gaze Using Candidate Selection,
CVPR13(1147-1154).
IEEE DOI
1309
BibRef
Chi, C.[Chen],
Qing, L.Y.[Lai-Yun],
Miao, J.[Jun],
Chen, X.L.[Xi-Lin],
Evaluation of the Impetuses of Scan Path in Real Scene Searching,
Gaze10(450-459).
Springer DOI
1109
BibRef
Yamada, K.[Kentaro],
Sugano, Y.[Yusuke],
Okabe, T.[Takahiro],
Sato, Y.[Yoichi],
Sugimoto, A.[Akihiro],
Hiraki, K.[Kazuo],
Attention Prediction in Egocentric Video Using Motion and Visual
Saliency,
PSIVT11(I: 277-288).
Springer DOI
1111
BibRef
Earlier:
Can Saliency Map Models Predict Human Egocentric Visual Attention?,
Gaze10(420-429).
Springer DOI
1109
BibRef
Hoque, M.M.[Mohammed Moshiul],
Onuki, T.[Tomami],
Tsuburaya, E.[Emi],
Kobayashi, Y.[Yoshinori],
Kuno, Y.[Yoshinori],
Sato, T.[Takayuki],
Kodama, S.[Sachiko],
An Empirical Framework to Control Human Attention by Robot,
Gaze10(430-439).
Springer DOI
1109
BibRef
Filip, J.[Jiri],
Haindl, M.[Michal],
Chantler, M.J.[Michael J.],
Gaze-Motivated Compression of Illumination and View Dependent Textures,
ICPR10(862-865).
IEEE DOI
1008
BibRef
Davies, S.J.C.,
Agrafiotis, D.,
Canagarajah, C.N.,
Bull, D.R.,
A gaze prediction technique for open signed video content using a track
before detect algorithm,
ICIP08(705-708).
IEEE DOI
0810
BibRef
Chapter on Active Vision, Camera Calibration, Mobile Robots, Navigation, Road Following continues in
Hand-Eye Coordination.