Project Summary
Every day we rely on our vision to judge the absolute distances of objects around us to plan and guide our
actions, such as walking and driving. This wayfinding process, in which one ascertains one’s position and plans possible routes of action, cannot be accomplished without reliable perception of visual space in the intermediate distance range (~2-25m from the observer). Thus, the broad, long-term objective of this project is to uncover the mechanisms underlying intermediate distance space perception that support distance judgment.
Yet less is known about the mechanisms underlying intermediate distance space perception than about those of near space perception (<2m). Moreover, extant knowledge is predominantly obtained from testing static
observers, making it difficult to generalize to the more common situation where observers plan and execute self-
motion. The latter situation is more complex because self-motion is accompanied by retinal image motion of
static objects in the surrounding environment, potentially requiring the visual system to simultaneously track the
locations of all objects in the environment. The visual system also requires more processing capacity because it must simultaneously compute the visual space representation, explore the environment, and implement motor control, among other tasks. Clearly, both challenges (coding complexity and capacity limitation) could threaten our ability to efficiently judge absolute distances and implement actions. We hypothesize that the visual
system overcomes both challenges by: (a) spatially updating the moving observer’s position using an allocentric,
world-centered spatial coordinate system for representing visual space, and (b) using spatial working memory
(spatial-image) during spatial updating. We will investigate both hypotheses in three specific aims.
Aim 1: Investigate the implementation of the allocentric, world-centered spatial coordinate system
Aim 2: Investigate the factors affecting the spatial updating of visual space
Aim 3: Investigate the role of spatial-image memory in visual space perception
Our psychophysical experiments will measure human behavioral responses in the real 3D environment. This
approach allows us to understand how our natural ecological niche, namely, the ground surface, both constrains
and supports space perception and action in the real world. We will test human observers’ ability to judge target
locations in impoverished visual environments while they plan and/or execute self-motion (walking), under conditions that manipulate the observers’ cognitive load (attention and memory) or the available visual and idiothetic (vestibular and proprioceptive) information. The outcomes of this research will advance the space perception literature, bridge theoretical knowledge of visual space perception and memory-directed navigation (cognitive maps), and reveal the influence of vestibular and somatosensory signals on visual space perception. In turn, these theoretical advances will provide insights into how eye and visual impairments affect intermediate distance space perception and mobility in the real 3D environment.