Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction (2404.14232v3)
Abstract: Visual highlighting can guide user attention in complex interfaces. However, its effectiveness under limited attentional capacities is underexplored. This paper examines the joint impact of visual highlighting (permanent and dynamic) and dual-task-induced cognitive load on gaze behaviour. Our analysis, using eye-movement data from 27 participants viewing 150 unique webpages, reveals that while participants' ability to attend to UI elements decreases with increasing cognitive load, dynamic adaptations (i.e., highlighting) remain attention-grabbing. The presence of these factors significantly alters what people attend to and thus what is salient. Accordingly, we show that state-of-the-art saliency models improve their performance when they account for different levels of cognitive load. Our empirical insights, along with our openly available dataset, enhance our understanding of attentional processes in UIs under varying cognitive (and perceptual) loads and open the door for new models that can predict user attention while multitasking.
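The abstract's claim that saliency models improve when conditioned on cognitive load implies evaluating model predictions against condition-specific fixation data. As a minimal illustrative sketch (not the paper's actual protocol), one standard metric is Normalized Scanpath Saliency (NSS): z-score the predicted saliency map, then average it at fixated pixels. The toy maps below are assumptions for illustration only.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_mask: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(s[fixation_mask.astype(bool)].mean())

# Toy example: fixations recorded under one (hypothetical) load condition.
fixations = np.zeros((10, 10), dtype=bool)
fixations[4, 4] = True  # a single fixated pixel

peaked = np.zeros((10, 10))
peaked[4, 4] = 1.0          # prediction concentrated on the fixated location
flat = np.ones((10, 10))    # uninformative, uniform prediction

print(nss(peaked, fixations))  # high positive score
print(nss(flat, fixations))    # ~0: no information about fixations
```

A load-aware model would, in effect, produce a different saliency map per cognitive-load condition and be scored against that condition's fixation masks with metrics like NSS or AUC.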