Adaptive Testing of Cognitive Function Based on Multidimensional Item Response Theory - Project Abstract

With the aging of the American population, the number of older adults at risk for developing cognitive impairment is staggering. Recent research points to age-related change in cognitive performance beginning as early as age 30, highlighting the potential for early interventions. Cognitive function has long been assessed using standardized cognitive tasks administered via neuropsychological evaluation. However, traditional assessment of cognitive ability is time-consuming, requires trained personnel and an office visit, and identifying decline among younger adults is particularly challenging because it can be masked by item redundancy effects. Here we propose developing a new computerized adaptive test (CAT) to assess cognitive function, either in clinic or remotely, based on recent advances in multidimensional item response theory (MIRT). We call it the CAT-COG. The CAT-COG will assess global cognitive ability as a primary domain as well as five cognitive subdomains: episodic memory, language/semantic memory, processing speed, attentional control/working memory, and flexible cognition/reasoning. Our approach will revolutionize computer-based cognitive testing (ultimately in a platform-independent way), providing precise estimation of an individual's ability on these domains with minimal respondent burden, using a sufficiently large bank of items so that the same individual's cognitive ability can be assessed repeatedly without reusing items or stimuli. This project (resubmission) brings together an accomplished interdisciplinary team of researchers and builds on the unique resources of the Rush Alzheimer's Disease Center (RADC).
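The core CAT idea — administer the most informative remaining item at the examinee's current ability estimate, update the estimate, and repeat — can be sketched in a few lines. This is a minimal illustration only, assuming a unidimensional two-parameter logistic (2PL) model with simulated item parameters; it is a simplification of the multidimensional models the project proposes, not the CAT-COG algorithm itself.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_theta(responses, a, b, grid=np.linspace(-4.0, 4.0, 161)):
    """Expected-a-posteriori ability estimate under a N(0, 1) prior."""
    posterior = np.exp(-grid ** 2 / 2.0)  # unnormalized standard-normal prior
    for r, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        posterior *= p if r else 1.0 - p
    return float(np.sum(grid * posterior) / np.sum(posterior))

# Simulated 15-item adaptive session against a 200-item bank.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 200)    # simulated discriminations
b = rng.uniform(-2.5, 2.5, 200)   # simulated difficulties
true_theta, theta = 1.0, 0.0
used, responses = [], []
for _ in range(15):
    info = item_information(theta, a, b)
    info[used] = -np.inf              # never readminister an item
    j = int(np.argmax(info))          # most informative remaining item
    used.append(j)
    responses.append(int(rng.random() < p_correct(true_theta, a[j], b[j])))
    theta = eap_theta(responses, a[used], b[used])
```

Because each item is removed from the pool once administered, a sufficiently large calibrated bank lets the same person be retested repeatedly without stimulus reuse, which is the design rationale stated above.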
A portion of the original Aim 1 of the grant, which involved development of a new 500-item bank of cognitive tasks, data collection, bifactor model calibration, and simulated adaptive testing, has now been independently funded by an NIA R56 award (2 years of what we proposed to complete in 3 years). These are the key remaining project steps: (1) We will expand our MIRT-based model calibration to include a wide variety of alternative IRT/MIRT models (bifactor, unrestricted MIRT, domain-specific IRT, testlet, and trifactor models) and select the best-fitting model(s) for the final CAT-COG. Note that different models may be used for the global and domain-specific tests. (2) We will validate the CAT-COG among returning RADC participants who will also receive traditional neuropsychological testing. (3) We will study short-term variability of the CAT-COG to determine learning effects, develop a testing protocol that is immune to such effects, and assess test-retest reliability. (4) We will harmonize the CAT-COG with the RADC standard test battery so that existing data can be linked to newly collected CAT-COG assessments. (5) We will assess differential item functioning to detect possible bias as a function of age, race, sex, and education.
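Step (5), the differential item functioning (DIF) screen, asks whether examinees of equal ability but different group membership (e.g., by sex or education) have different odds of answering an item correctly. One standard approach is the Mantel-Haenszel procedure, sketched below for a single dichotomous item; this is an illustration of the general technique, not necessarily the specific DIF method the project will adopt, and the toy data are fabricated for the example.

```python
import numpy as np

def mantel_haenszel_dif(correct, group, stratum):
    """Mantel-Haenszel common odds ratio for one studied item.

    correct: 0/1 item responses; group: 0 = reference, 1 = focal;
    stratum: matching variable (e.g., total test score).
    Values near 1.0 suggest no uniform DIF; values far from 1.0 flag
    items that favor one group at matched ability levels.
    """
    num = den = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        a = np.sum((correct == 1) & (group == 0) & m)  # reference, correct
        b = np.sum((correct == 0) & (group == 0) & m)  # reference, incorrect
        c = np.sum((correct == 1) & (group == 1) & m)  # focal, correct
        d = np.sum((correct == 0) & (group == 1) & m)  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")

# Toy data, one score stratum: reference odds of success are 30/10 = 3,
# focal odds are 15/15 = 1, so the common odds ratio is exactly 3.0.
correct = np.array([1] * 30 + [0] * 10 + [1] * 15 + [0] * 15)
group = np.array([0] * 40 + [1] * 30)
alpha = mantel_haenszel_dif(correct, group, np.zeros(70))
```

In practice the stratum variable would be the rest-score or a model-based ability estimate, and the screen would be repeated per item and per grouping variable (age, race, sex, education), as step (5) describes.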