CRCNS Research Proposal: Collaborative Research: Multimodal
Dynamic Causal Learning for Neuroimaging
A. Project Description
A.1 Introduction
Many analyses of fMRI and other neuroimaging data aim to discover the underlying causal or communication structures that generated that activity [1,2]. An accurate characterization of these brain structures is important for understanding neural circuits, systems-level neuroscience, and the neural bases of various cognitive phenomena and mental disorders. Brain networks learned from neuroimaging data also provide a powerful diagnostic tool for predicting everything from the concepts currently in one’s mind [3–5] to whether one suffers from various mental disorders [6–10].
Given the importance of such brain connectivity networks, it is unsurprising that a wide variety of learning algorithms have been developed for different neuroimaging modalities. In particular, many of these methods aim to infer the underlying causal or connectivity networks directly from data (in contrast with model comparison methods such as dynamic causal modeling (DCM) [11]). These network discovery methods have achieved some notable successes [2,6,12–21], but they have largely failed to address two issues that impede our ability to understand the full, working brain. Our project will develop, validate, and apply methods that solve both of these challenges.
First, existing brain connectivity inference methods can be roughly divided into two groups: static methods that do not treat the brain as a dynamic system (e.g., IMaGES [22] and most of the approaches tested by Smith et al. [23]); and dynamic methods that explicitly measure and model the dynamics of the brain. Static methods obviously fail to use all of the available information. Dynamic methods, in contrast, use the full structure of the measurements, but essentially all such methods [24–28] infer causal and connectivity structures at the timescale of the measurement modality, rather than at the brain’s causal timescale. However, the networks learned at the measurement timescale and at the brain’s timescale can be quite different, even given solutions to all of the other statistical and measurement problems facing neuroimaging analysis [29]. Moreover, the scientifically important facts about causal or connectivity structure frequently concern which brain regions communicate directly with which others, and answering that question requires a focus on the brain’s timescale, not the measurement modality’s. It is thus scientifically critical that we have methods that can determine the causal connections that exist at the timescale of the underlying neural systems, not just those found at the timescale of our particular neuroimaging measurements.
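The timescale mismatch described above can be illustrated with a small simulation (a sketch with made-up coefficients, not a method from this proposal): a system whose fast-timescale causal structure is a chain X1 → X2 → X3 yields an apparent direct X1 → X3 connection once the data are subsampled, as a slow measurement modality such as fMRI effectively does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth structure at the brain's (fast) timescale:
# a chain X1 -> X2 -> X3 with self-connections, as a VAR(1) matrix.
# There is NO direct X1 -> X3 edge (A[2, 0] == 0).
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.5]])

T = 100_000
X = np.zeros((T, 3))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(3)

def lag1_coeffs(X):
    """OLS estimate of the transition matrix B in X[t] ~ B @ X[t-1]."""
    past, future = X[:-1], X[1:]
    C, *_ = np.linalg.lstsq(past, future, rcond=None)
    return C.T

B_fast = lag1_coeffs(X)       # recovers ~A: no X1 -> X3 edge
B_slow = lag1_coeffs(X[::2])  # subsampled 2x: recovers ~A @ A instead

print(f"X1->X3 at fast timescale:      {B_fast[2, 0]:+.3f}")  # near zero
print(f"X1->X3 after 2x subsampling:   {B_slow[2, 0]:+.3f}")  # ~A[2,1]*A[1,0]
```

Because the subsampled series is approximately a VAR(1) process with transition matrix A², the mediated path X1 → X2 → X3 is compressed into an apparent direct X1 → X3 edge at the measurement timescale, exactly the kind of artifact that motivates inference at the brain's causal timescale.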
Second, there are multiple neuroimaging measurement modalities, each with its own strengths and weaknesses. There are obvious and widely recognized advantages to multimodal information fusion: 1) access to multiple, richer datasets, larger sample sizes, and improved estimation quality; 2) improved spatial coverage of the brain compared to fast dynamic modalities alone; 3) improved dynamic coverage of the range of signals that are informative about interactions of brain networks; and 4) enhanced estimation quality and reduced modality-specific deficiencies, due to the complementary strengths of different modalities. These advantages are heavily exploited for feature and representation learning, a field in which our group has been highly active [10,30–37]. However, to our knowledge, no methods have been developed and validated that can learn causal information (effective connectivity) from data acquired with multiple modalities. There exist machine learning methods (developed by members of our group) that can combine causal information from disparate datasets [38–41], but these methods have never been applied to multimodal neuroimaging data. Moreover, using these methods to combine spatially precise (e.g., fMRI) and dynamically precise (e.g., EEG, MEG) modalities requires a theory of the differences in