Gaze-contingent computer screen magnification control for people with low vision
Project Summary
This application describes proposed research with the goal of facilitating the use of a computer screen magnifier by people
with low vision. Screen magnification is a well-established, popular technology for accessing onscreen content. Its
main shortcoming is that the user must continuously control, with the mouse or trackpad, the location of the
focus of magnification in order to keep the magnified content of interest within the screen viewport. This
tedious process can be time-consuming and ineffective. For example, the simple task of reading the news on a
website requires continuous horizontal scrolling, which degrades the experience of using this otherwise very beneficial
technology and may discourage its use, especially by those with poor manual coordination.
We propose to develop a software system that enables hands-free control of a screen magnifier. This system will
rely on the user's eye gaze, measured by a standard IR-based tracker or estimated from images captured by a camera
embedded in the screen, to update the location of the focus of magnification as desired. This research is inspired by
preliminary work that showed promising results with two simple gaze-based control algorithms, tested on three
individuals with low vision.
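For illustration only, the sketch below shows one plausible gaze-contingent control scheme, in which the focus of magnification follows the gaze estimate through a smoothed update with a small dead zone to absorb gaze jitter. The update rule, parameter names, and constants are assumptions made for this sketch; they are not the algorithms evaluated in the preliminary work.

```python
# Illustrative sketch only: a simple gaze-contingent controller that keeps
# the focus of magnification near the user's (noisy) gaze estimate.
# Smoothing weight, dead zone, and names are assumptions, not the
# algorithms from the preliminary work.

from dataclasses import dataclass


@dataclass
class MagnifierViewport:
    cx: float        # current focus of magnification, x (screen pixels)
    cy: float        # current focus of magnification, y (screen pixels)
    zoom: float      # magnification factor
    screen_w: int
    screen_h: int


def clamp(v, lo, hi):
    return max(lo, min(hi, v))


def update_focus(vp, gaze_x, gaze_y, alpha=0.2, dead_zone_px=30):
    """Move the focus of magnification toward the current gaze point.

    alpha        -- exponential smoothing weight (0 < alpha <= 1)
    dead_zone_px -- ignore gaze jitter within this radius so the
                    magnified viewport does not shift constantly
    """
    dx, dy = gaze_x - vp.cx, gaze_y - vp.cy
    if (dx * dx + dy * dy) ** 0.5 < dead_zone_px:
        return vp  # gaze still within the dead zone: keep viewport steady
    vp.cx = clamp(vp.cx + alpha * dx, 0, vp.screen_w)
    vp.cy = clamp(vp.cy + alpha * dy, 0, vp.screen_h)
    return vp
```

In a scheme of this kind, the smoothing weight and dead zone would need to be tuned per user, in the spirit of the individually tunable controller described under Aim 3 below.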
This project will be a collaboration between the Department of Computer Science and Engineering at UC Santa
Cruz (PI: Manduchi, Co-I: Prado) and the School of Optometry at UC Berkeley (PI: Chung). Dr. Legge from the
Department of Psychology at U. Minnesota will participate as a consultant. Two human subjects studies are planned.
In Study 1, with 80 low vision subjects from four different categories of visual impairment, we will investigate the failure
rate of a commercial gaze tracker (Aim 1), and will record mouse tracks, gaze tracks, and camera images of the subjects
while they perform a number of tasks using two modalities of screen magnification (Aim 2). In Study 2, with the same
number of subjects, we will repeat the Study 1 experiment, this time using a gaze-based controller trained on the data
collected in Study 1 and individually tunable for best performance (Aim 3). In addition, we will experiment with an
appearance-based gaze tracker that uses images from the screen camera, thereby removing the need for specialized
gaze tracking hardware, as well as with a computer tablet form factor (Aim 4). We expect that reading speed and
error rate with our gaze-based controller will be no worse than with mouse-based control. If successful, this study
will show that the convenience of hands-free control offered by the proposed system comes at no additional cost in
terms of individual performance on the considered tasks.
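As a rough illustration of how an appearance-based tracker (Aim 4) could map camera-derived features to on-screen coordinates after a short calibration, the sketch below fits a ridge regressor from eye-appearance feature vectors to known calibration-target positions. The feature extraction step is left abstract, and all function names, dimensions, and parameters are illustrative assumptions rather than the project's actual method.

```python
# Illustrative sketch only: calibration-based appearance-to-gaze mapping.
# Feature extraction from the screen-camera image (eye crops, landmarks,
# or a learned embedding) is abstracted away; the regression step uses
# scikit-learn and is an assumption, not the project's actual method.

import numpy as np
from sklearn.linear_model import Ridge


def fit_gaze_mapper(features, targets, reg=1.0):
    """Fit a linear map from appearance features to screen coordinates.

    features -- (n_samples, n_dims) per-frame eye-appearance features
    targets  -- (n_samples, 2) known calibration-dot positions in pixels
    """
    model = Ridge(alpha=reg)
    model.fit(features, targets)  # multi-output regression: predicts (x, y)
    return model


# Hypothetical usage with synthetic calibration data:
rng = np.random.default_rng(0)
calib_features = rng.normal(size=(50, 128))      # stand-in for eye embeddings
calib_targets = rng.uniform(0, 1920, size=(50, 2))
mapper = fit_gaze_mapper(calib_features, calib_targets)
gaze_xy = mapper.predict(calib_features[:1])     # estimated on-screen gaze point
```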