Visual sensory substitution devices, in which images from a video camera are converted to a cross-modal signal
that is then presented to the user in place of direct visual information, offer an alternative, non-surgical
approach for the blind to appreciate aspects of their immediate environment. These non-invasive, low-cost
technologies can also offer higher visual acuity and wider fields of view than currently available retinal and
cortical implants. However, sensory substitution devices are rarely adopted by blind individuals for daily use in
interpreting and translating complex, natural environments, partly because of their impracticality. In addition, little is
known about the perceptual and behavioral consequences of delivering new patterns of information through an
alternative sense when the visual system is damaged at different ages. The metabolic and functional processes
that allow the deprived human visual cortex to enhance cross-modal plasticity have not been clearly defined
either. These research gaps must be filled before sensory substitution can be exploited as a method of vision
rehabilitation for improving function and independence while reducing the associated costs of blindness. The
goal of this project is to develop and refine sensory substitution technologies and to identify the determinants of
cross-modal plasticity in brains deprived of visual input in order to facilitate sensory substitution. To achieve this
goal, we will incorporate artificial intelligence (AI) to simplify images of complex, natural environments into
isolated objects for sensory substitution. We will then use magnetic resonance imaging (MRI) and spectroscopy
(MRS), combined with behavioral assessments, to examine the neural substrates of sensory substitution in early
and late blind subjects. We will leverage tactile and auditory substitution stimuli to determine how changes in
neurochemicals reflect the cross-modal activity of visually deprived brains. We will also test how training with AI
can help blind individuals interpret visual environmental cues in a more meaningful way. Feedback from the
behavioral and neurobiological results will help refine our sensory substitution devices. Aim 1: To improve
sensory substitution toward practical use, we will use AI and 3D coding to convert complex everyday scenarios
into simplified and aesthetically engaging versions that aid both immediate task performance and sensory
substitution training. Aim 2: To determine how visual processing pathways in the brain adapt to vision loss, we
will employ advanced MRI and MRS to unveil the structural, metabolic, and functional brain properties in
participants with different onset ages and durations of blindness before training. Aim 3: To determine the effects
of sensory substitution training on visual processing pathways and performance of the blind, we will use
advanced MRI and MRS to determine if cross-modal perceptual learning alters the deprived visual cortex toward
a more plastic state by modulating its excitatory-inhibitory balance and choline levels. We will also examine the
neurobehavioral changes that occur when participants identify and locate objects of interest using auditory or tactile
stimuli with and without AI-assisted image deconstruction.
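
For readers unfamiliar with visual-to-auditory substitution, the sketch below illustrates the general principle in a minimal form: an image in which an upstream AI model has already isolated a single object of interest is swept column by column into a soundscape, with pixel height mapped to pitch and brightness to loudness. The segmentation mask, parameter choices, and encoding scheme here are illustrative assumptions only; they do not represent the project's actual devices, algorithms, or training paradigm.

```python
"""Minimal sketch of a visual-to-auditory substitution mapping.

Assumptions (not from the proposal): the object mask comes from an upstream
AI segmentation model, and the encoding is a simple left-to-right sweep in
which row position sets tone frequency and brightness sets loudness.
"""
import numpy as np


def isolate_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the (assumed, AI-derived) object mask."""
    return np.where(mask > 0, image, 0.0)


def image_to_soundscape(image: np.ndarray,
                        sweep_seconds: float = 1.0,
                        sample_rate: int = 44_100,
                        f_low: float = 500.0,
                        f_high: float = 5_000.0) -> np.ndarray:
    """Convert a grayscale image (rows x cols, values 0..1) to a mono audio sweep.

    Columns are scanned left to right over `sweep_seconds`; each row contributes
    a sine tone whose frequency rises from f_low (bottom row) to f_high (top row)
    and whose amplitude follows pixel brightness.
    """
    rows, cols = image.shape
    samples_per_col = int(sweep_seconds * sample_rate / cols)
    freqs = np.geomspace(f_high, f_low, rows)          # top row -> highest pitch
    t = np.arange(samples_per_col) / sample_rate

    audio = []
    for c in range(cols):
        column = image[:, c][:, None]                   # (rows, 1) brightness weights
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # (rows, samples) row tones
        audio.append((column * tones).sum(axis=0))      # weighted sum for this column
    signal = np.concatenate(audio)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal


if __name__ == "__main__":
    # Toy example: a bright diagonal "object" on a dark background.
    img = np.zeros((64, 64))
    np.fill_diagonal(img, 1.0)
    mask = img > 0                                      # stand-in for an AI segmentation mask
    sweep = image_to_soundscape(isolate_object(img, mask))
    print(f"{sweep.size} audio samples generated")
```

In this toy mapping, isolating the object before sonification removes background clutter from the soundscape, which is the intuition behind the AI-assisted image deconstruction described in Aims 1 and 3; the actual encodings, devices, and training procedures under study may differ substantially.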