PROJECT SUMMARY
When we interact with objects using our hands, we can easily distinguish our keys from our phone, even without visual cues. This ability to sense the three-dimensional structure of an object
through haptic exploration alone is termed stereognosis and relies on the integration of two distinct streams of
sensory information: tactile signals from the fingertips contacting the object relay information about local features
(e.g., edge location, curvature, texture), and proprioceptive signals from the muscles relay information about
the overall shape and size of the object. While the integration of tactile and proprioceptive signals has been
observed at higher order stages of somatosensory processing (Brodmann’s area 2, secondary somatosensory
cortex, parietal ventral area), the neural mechanisms underlying this integration remain largely unknown. Given
that the hand is a highly deformable sensory sheet, there are likely neural processing mechanisms unique to the somatosensory system that underlie this integration, and a new framework will be necessary to understand how stereognosis arises. The goal of the present study is to better understand the
principles of multimodal integration that give rise to stereognosis by characterizing the responses of multimodal
neurons in area 2 during grasping (Aim 1) and by developing computational models of how tactile and
proprioceptive signals are integrated to give rise to object representations that are independent of how objects
are grasped (Aim 2). We anticipate that the computational models will inform the interpretation of our
neurophysiological results and yield deep, novel insights into the neural mechanisms of stereognosis.
Not only will the results of the study contribute to basic science, but they will also have implications for
translational research and clinical applications. Our study of neural coding along the primate neuraxis informs
our work toward more dexterous brain-controlled prostheses, which involves not only inferring motor intent but also
restoring sensory feedback. Indeed, our ability to dexterously interact with objects, even without vision, depends
on neural representations of objects. We anticipate that a deeper understanding of object representations in
higher order somatosensory cortices, including area 2, will allow us to leverage these representations to improve
the informativeness of intracortical microstimulation-based somatosensory feedback, thereby conferring greater
dexterity to brain-controlled bionic hands.