GazePrompt: Enhancing Low Vision People's Reading Experience with Gaze-Aware Augmentations (2402.12772v2)

Published 20 Feb 2024 in cs.HC

Abstract: Reading is a challenging task for low vision people. While conventional low vision aids (e.g., magnification) offer certain support, they cannot fully address the difficulties faced by low vision users, such as locating the next line and distinguishing similar words. To fill this gap, we present GazePrompt, a gaze-aware reading aid that provides timely and targeted visual and audio augmentations based on users' gaze behaviors. GazePrompt includes two key features: (1) a Line-Switching support that highlights the line a reader intends to read; and (2) a Difficult-Word support that magnifies or reads aloud a word that the reader hesitates with. Through a study with 13 low vision participants who performed well-controlled reading-aloud tasks with and without GazePrompt, we found that GazePrompt significantly reduced participants' line switching time, reduced word recognition errors, and improved their subjective reading experiences. A follow-up silent-reading study showed that GazePrompt can enhance users' concentration and perceived comprehension of the reading contents. We further derive design considerations for future gaze-based low vision aids.

Authors (7)
  1. Ru Wang (23 papers)
  2. Zach Potter (1 paper)
  3. Yun Ho (3 papers)
  4. Daniel Killough (4 papers)
  5. Linxiu Zeng (2 papers)
  6. Sanbrita Mondal (4 papers)
  7. Yuhang Zhao (58 papers)
Citations (1)
