Project Summary
As part of the Research Domain Criteria (RDoC) initiative, the NIMH seeks to improve measures of neuronal
and psychological targets for use in intervention research. RDoC tools must precisely measure cognitive and
neuronal systems to produce reliable findings. Unfortunately, many RDoC tasks have not been psychometrically evaluated or refined, and they are susceptible to confounds that can lead to inaccurate claims of differential deficit, weakened effect sizes, and contradictory brain imaging findings. As highlighted by FOA PAR-18-930, there is a critical need for modern psychometric methods and tools designed to support cognitive and
clinical neuroscience research. This project responds to this FOA by evaluating and refining a methodology
designed to administer statistically robust variants of RDoC tasks, especially in the context of brain imaging.
We have created a quantitative methodology designed to administer computerized adaptive tests (CATs) in the
context of cognitive and clinical neuroscience research. CATs manipulate stimulus properties in real time in
order to improve measurement precision, avoid ceiling and floor effects, and maximize effect size, even for
individuals and groups with highly discrepant levels of cognitive functioning. CATs also perform psychometric
adjustments to cognitive tasks so that brain functioning abnormalities can be interpreted independently of
performance deficits. In pilot work, we have used this approach with an RDoC working memory task, the N-
back, to show that the methodology improves the reliability of both cognitive and brain imaging data. We will
evaluate the generalizability and impact of adaptive testing beyond the N-back task for use in translational and experimental research. Patients with schizophrenia and controls will be administered both adaptive and non-adaptive versions of four RDoC paradigms used to assess working memory: delayed match-to-sample,
Sternberg, self-ordered pointing, and N-back. Additionally, participants will be administered the 5-Choice
Continuous Performance Task and the Probabilistic Learning Task, translational measures of control and learning, respectively. Both groups will undergo functional neuroimaging and respond to adaptive versions of these
tasks. The specific aims are to: (1) Determine whether adaptive testing improves the precision and effect size
estimates of performance differences produced by RDoC tasks; and (2) Determine whether adaptive testing
improves the reliability and effect size estimates of brain activation differences produced by RDoC tasks. While
these aims are designed to evaluate a methodology and to address critical concerns about the use of RDoC working memory tasks as neural probes, mediators, and outcomes in psychosis research, the results also have broad implications across populations, brain regions and networks, and cognitive domains. By addressing
concerns of poor reliability, weak effect size, and brain activation confounds, this project will show that adaptive
testing broadly improves cognitive neuroscience tasks. To facilitate rapid deployment of adaptive RDoC tasks,
we will develop freely available versions of the paradigms used and a companion R package ‘catCog’.
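The adaptive principle underlying CATs can be sketched in miniature: task difficulty is adjusted after each response so that performance hovers away from ceiling and floor. The sketch below is an illustrative up/down staircase on a hypothetical N-back load parameter; it is not the catCog algorithm, whose item-selection procedure is not specified here.

```python
# Illustrative sketch of real-time adaptivity: difficulty (a hypothetical
# N-back load) rises after a correct response and falls after an error,
# keeping performance off ceiling and floor. This is a simple 1-up/1-down
# staircase for exposition only, not the catCog procedure.

def update_load(load, correct, min_load=1, max_load=6):
    """Increase load after a correct response, decrease after an error."""
    step = 1 if correct else -1
    return max(min_load, min(max_load, load + step))

# Example: a run of responses drives the load up and down in real time.
responses = [True, True, False, True, False, False, True]
load = 2
trajectory = []
for correct in responses:
    load = update_load(load, correct)
    trajectory.append(load)
```

In practice, CATs replace the fixed step rule with a psychometric model that selects the most informative stimulus given the current ability estimate, but the real-time feedback loop is the same.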