The heterogeneity of the more than 1.3% of Americans who suffer from severe physical impairments (SPIs)
precludes the use of common augmentative and alternative communication (AAC) solutions such as manual
signs, gestures, or dexterous interaction with a touchscreen. While efforts to develop alternative access methods
through eye or head tracking have provided some communication advances for these individuals, all current
technologies suffer from the same fundamental limitation: existing AAC devices require patients to conform to
generic communication access methods and interfaces rather than the device conforming to the user.
Consequently, AAC users are forced to settle for interventions that demand excessive training and cognitive
workload yet deliver extremely slow information transfer rates (ITRs) and recurrent communication errors,
ultimately depriving them of the fundamental human right of communication. To meet this health need, we
propose the first smart-AAC system designed with individually adaptive access methods and AAC interfaces
that accommodate the unique manifestation of motor impairment specific to each user.
Preliminary research by our team of speech researchers at Madonna Rehabilitation Hospital (Communication
Center Lab) and Boston University (STEPP Lab), using wearable sensors developed by our group (Altec, Inc.),
has already demonstrated that metrics derived from surface electromyographic (sEMG) and accelerometer
measures of muscle activity and movement for head-mediated control can be combined with optimizable AAC
interfaces to improve ITRs relative to traditional, unoptimized AAC devices.
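For context, ITR in access-method research is often quantified with Wolpaw's formula, which converts vocabulary size, selection accuracy, and selection rate into bits per minute; the sketch below is illustrative only, as this abstract does not commit to a specific ITR metric.

    # Illustrative sketch only: the abstract does not specify its ITR metric.
    # Wolpaw's definition is a common choice in access-method research.
    import math

    def wolpaw_itr_bits_per_min(n_targets: int, accuracy: float,
                                selections_per_min: float) -> float:
        """ITR in bits/min: n_targets equally likely items, accuracy in (0, 1]."""
        if accuracy >= 1.0:
            bits = math.log2(n_targets)
        else:
            bits = (math.log2(n_targets)
                    + accuracy * math.log2(accuracy)
                    + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
        return bits * selections_per_min

    # Example: a 26-item keyboard at 90% accuracy and 10 selections/min.
    print(round(wolpaw_itr_bits_per_min(26, 0.90, 10.0), 1))  # ~37.7 bits/min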
Leveraging this pilot work, our team now proposes a Phase I project to demonstrate proof-of-concept that a
single hybrid sEMG/inertial measurement unit (IMU) sensor worn on the forehead can improve ITR and
communication accuracy when integrated with an AAC interface that is optimized through machine learning
algorithms.
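To make the access-method concept concrete, the following sketch shows one plausible way a single forehead-worn sEMG/IMU sensor could yield both a discrete "select" signal (a brief muscle-activity burst in the sEMG envelope) and a continuous pointing signal (head pitch and roll from the accelerometer). All thresholds, window lengths, and sampling rates here are assumptions for illustration, not specifications from the pilot work.

    # Hypothetical sketch of one way a forehead sEMG/IMU sensor could drive
    # an AAC interface; all parameters below are assumptions, not details
    # from the pilot work.
    import numpy as np

    ENVELOPE_WIN = 100     # assumed 100-sample moving-average window
    CLICK_THRESHOLD = 3.0  # envelope threshold, in multiples of resting level

    def emg_envelope(emg: np.ndarray) -> np.ndarray:
        """Rectify the raw sEMG and smooth it with a moving average."""
        rectified = np.abs(emg - emg.mean())          # remove DC offset, rectify
        kernel = np.ones(ENVELOPE_WIN) / ENVELOPE_WIN
        return np.convolve(rectified, kernel, mode="same")

    def detect_click(envelope: np.ndarray, rest_level: float) -> bool:
        """Treat a brief forehead-muscle burst as a discrete 'select' event."""
        return envelope.max() > CLICK_THRESHOLD * rest_level

    def head_pointing(accel: np.ndarray) -> tuple[float, float]:
        """Estimate head pitch/roll (radians) from gravity in an (N, 3) window."""
        ax, ay, az = accel.mean(axis=0)               # average over the window
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        roll = np.arctan2(ay, az)
        return pitch, roll

In practice, thresholds and pointing gains would be calibrated per user; the point of the sketch is only that the continuous (IMU) and discrete (sEMG) channels carry complementary control information.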
The prototype system will be tested and compared with a conventional (non-adaptable) interface in
subjects with SPI at a collaborating clinical site. Guidance from our speech-language and AAC-expert
collaborators will ensure that all phases of technology development are patient-centric and usable in the context
of clinical care.
In Phase II, we will build upon this proof-of-concept to design a smart-AAC system with
automated optimization software that learns dynamically, adapting both to intra-individual changes in function
arising from disease progression or training and to inter-individual differences in motor impairment across a
diverse set of users with spinal cord injury, traumatic brain injury, cerebral palsy, amyotrophic lateral sclerosis
(ALS), and other SPIs.
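This abstract does not specify the Phase II optimization algorithms; as a minimal illustration of what interface adaptation can mean, the sketch below reorders a row-column scanning grid so that a user's most frequently selected items occupy the cells that are cheapest to reach, and it can be re-run as usage statistics evolve.

    # Hypothetical illustration of interface adaptation; the actual Phase II
    # optimization software is unspecified here. Frequent items migrate to
    # the cells that cost the fewest scan steps in row-column scanning.
    from collections import Counter

    def scan_cost(index: int, n_cols: int) -> int:
        """Scan steps to reach a cell: rows are scanned first, then columns."""
        row, col = divmod(index, n_cols)
        return (row + 1) + (col + 1)

    def adapt_layout(items: list[str], usage: Counter, n_cols: int) -> list[str]:
        """Greedy adaptation: assign the most-used items to the cheapest cells."""
        cells = sorted(range(len(items)), key=lambda i: scan_cost(i, n_cols))
        ranked = sorted(items, key=lambda it: -usage[it])
        layout = [""] * len(items)
        for cell, item in zip(cells, ranked):
            layout[cell] = item
        return layout

    # Example: after logging selections, "yes"/"no" move to the cheapest cells.
    usage = Counter({"yes": 40, "no": 35, "help": 10, "pain": 8, "water": 5, "tv": 2})
    print(adapt_layout(["help", "water", "tv", "pain", "yes", "no"], usage, n_cols=3))

A learned model would generalize this idea beyond raw frequency counts (e.g., to context-dependent usage), but the sketch captures the core principle of the interface conforming to the user.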
The innovation is the first and only AAC technology that combines advances in wearable-sensor
access with interfaces that are autonomously optimized to the user, thereby reducing the resources and training
needed to achieve effective, person-centric communication in SPI through improved human-machine interface
(HMI) performance and reduced workload.