Habit learning supported by efficiently controlled network dynamics in naive macaque monkeys
Karol P. Szymula, Fabio Pasqualetti, Ann M. Graybiel, Theresa M. Desrochers, Danielle S. Bassett
Karol P. Szymula a, Fabio Pasqualetti b, Ann M. Graybiel c, Theresa M. Desrochers d,*, and Danielle S. Bassett a,e,f,g,h,i,*

a Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA; b Department of Mechanical Engineering, University of California, Riverside, CA 92521 USA; c Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; d Department of Neuroscience, Department of Psychiatry and Human Behavior, Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI 02912 USA; e Department of Electrical & Systems Engineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA; f Department of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, PA 19104 USA; g Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104 USA; h Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104 USA; i Santa Fe Institute, Santa Fe, NM 87501 USA

* These two authors contributed equally. This manuscript was compiled on June 26, 2020.
Primates display a marked ability to learn habits in uncertain and dynamic environments. The perceptions and actions associated with such habits engage distributed neural circuits. Yet, precisely how such circuits support the computations necessary for habit learning remains far from understood. Here we construct a formal theory of network energetics to account for how changes in brain state produce changes in sequential behavior. We exercise the theory in the context of multi-unit recordings spanning the caudate nucleus, prefrontal cortex, and frontal eye fields of female macaque monkeys engaged in 60-180 sessions of a free scan task that induces motor habits. The theory relies on the determination of effective connectivity between recording channels, and on the stipulation that a brain state is taken to be the trial-specific firing rate across those channels. The theory then predicts how much energy will be required to transition from one state into another, given the constraint that activity can spread solely through effective connections. Consistent with the theory's predictions, we observed smaller energy requirements for transitions between more similar and more complex trial saccade patterns, and for sessions characterized by less entropic selection of saccade patterns. Using a virtual lesioning approach, we demonstrate the resilience of the observed relationships between minimum control energy and behavior to significant disruptions in the inferred effective connectivity. Our theoretically principled approach to the study of habit learning paves the way for future efforts examining how behavior arises from changing patterns of activity in distributed neural circuitry.

network control theory; habits; network neuroscience; learning
Introduction
In a complex, ever-changing environment, both human and non-human primates survive by learning to balance the need to gather new knowledge with the utilization of existing knowledge (1, 2). The formation of habits can be viewed as a natural consequence of locally optimal trade-offs between exploration and exploitation (3). The underlying cognitive processes may follow reinforcement learning algorithms (4), in which the sampling of actions and the uncertainty of their outcomes inform decisions regarding exploration of new actions or exploitation of old ones (5). The brain mechanisms supporting such processes engage a distributed set of regions spanning the caudate nucleus, associated with repetitive and stereotyped actions (6); the ventral striatum and amygdala, associated with reward and motivation (1); and the prefrontal cortex, associated with cognitive control (2).

Yet, precisely how this constellation of brain regions supports the computations necessary for habits to emerge remains far from understood. Recent efforts suggest that network approaches (7) provide useful explanations for how cognitive processes arise from interacting brain regions (8). From intelligence to cognitive control, and from motivation to learning, disparate circuits are engaged that allow coordinated information processing and transmission (9–11). The study of circuit engagement and function can be formalized in the language of network science (12), and initial evidence suggests that individual differences in the pattern of inter-regional interactions track individual differences in exploratory behaviors and decision-making (13), plasticity (14), reinforcement learning (15), and skill learning (16). Although network approaches manifest striking face validity, the level of explanation has thus far been largely correlative (17).
Continued progress will require a formal model positing and validating the network mechanisms of brain-behavior relations in habit formation. Here we address this challenge by building upon and extending emerging work in the field of network control theory (18–20). In the context of neural systems, the approach defines a state of the network to be the vector of regional (or cellular) activation. The theory then posits that the sequence of states is constrained by the energy required to transmute one state into another, allowing activity to spread solely through known inter-regional links (21). The theoretical background is particularly well-developed for linear systems (22), or linearizations of nonlinear systems (18, 21). In addition to predicting the effects of exogenous control signals such as electrical stimulation (23), network control theory has also proven useful in accounting for the intrinsic capacity for cognitive control (24) and the contribution of single neurons to large-scale behaviors (25). We extend the approach in two ways. First, prior studies stipulated that activity could only flow along known structural links between regions; here, we instead allow inter-regional links to reflect effective connections (26), which have recently been associated with short-term plasticity and learning (27, 28). Second, prior studies estimated the energy of brain state transitions independently from behavior; here, we instead explicitly posit (and validate) the notion that low energy state transitions characterize processes that are less cognitively demanding, as well as their associated behaviors (29). We evaluate the theory in the context of multi-unit recordings spanning the caudate nucleus and prefrontal cortex of two
macaque monkeys as they engage in 60-180 sessions of task performance inducing motor habits in the form of saccadic patterns (3). Acknowledging the pivotal nature of sequence-level strategies, we examine a task (Figure 1) in which the monkey must saccade among nine green dots (referred to as targets) on a rectangular grid in search of a baited target, which varied from trial to trial, was selected on a pseudorandom schedule, and was visually indistinguishable from the other targets during a task trial (6); the ideal habit would be a sequence of saccades that spanned all nine target dots in a minimal time period. We define a brain state to be a vector of firing rates across recording channels. Further, we construct a neural network whose nodes are channels and whose edges are the strength of effective connectivity between channel firing rate time series; we represent the network as a weighted directed adjacency matrix.

Our primary hypothesis is that pairwise differences in sequential behaviors during habit formation can be explained by the energy requirements of the accompanying neural state transitions. To interpret behavior, we represent saccade patterns as graphs that can be decomposed into 1-D time series (Figure 2A), whose shape can be studied and whose complexity can be quantified. We observe that preferred saccade patterns change as a function of learning. Using network control theory, we compute the minimum control energy required to transition between chronologically ordered trial brain states and observe that energy decreases with learning. Finally, we show that the energy of state transitions predicts behavior in three distinct ways: (i) transitioning between more similar saccade patterns requires less energy, (ii) transitioning between more complex saccade patterns requires less energy, and (iii) a more organized, less entropic selection of patterns during sessions requires less energy.
This pattern of results is markedly consistent with the principles of maximum entropy, which have previously been shown to explain other features of neural and behavioral dynamics (30–34). Taken together, our work represents a theoretically principled study of habit learning that accurately predicts transitions in behavior from the energetics of transitions in neural states.

Results
The data analyzed in this work consist of behavioral measurements and neural recordings from two female macaque monkeys, Monkey G (MG) and Monkey Y (MY), while performing a free-view scanning task. All data were previously collected and reported in (3, 6). As depicted in Figure 1, during the task the monkey is shown a 3 × 3 grid of targets.

Classification of Trial Representative Saccade Patterns.
Behavioral measurements were analyzed in the form of trial-specific chronological saccade sequences performed by a monkey during the task. We use the phrase individual saccade to refer to a rapid eye movement from one target to another on the
task grid. The phrase saccade sequence then refers to a series of individual saccades that are performed one after another. Direct qualitative or quantitative comparison between the trial-specific saccade sequences is difficult due to variability in trial length and, as a result, the number of saccades per trial. Therefore, we first set out to arrange the list of individual saccades into a format that allows for interpretable comparison between trials. We began by converting each trial's saccade sequence into an adjacency matrix of a directed and weighted graph, G(N, E), where N is the number of nodes (one for each green grid target) and E is the set of all edges that exist between nodes (Figure 2A). An edge between two nodes exists if a saccade was observed between the two specific grid targets. Furthermore, the weight of each edge is the total number of times the specific saccade is performed during the trial. We refer to this representation of the trial saccades as the saccade network.

Fig. 1. The Free Scan Task. A visual depiction of a single trial from the free scan task. The trial begins (Start) with a grid of small grey dots all equally sized and spaced. A 3 × 3 grid of green targets then appears (Targets On). The monkey is allowed to freely scan the space of green targets. At a variable time unknown to the monkey, a target is baited (Target Baited). For visualization purposes in the context of this exposition, the baited target is depicted surrounded by a red dashed circle; note that the outline is not present during the actual task. When the monkey's gaze enters the baited target, the grid of green targets disappears and is replaced by all gray targets (Targets Off). After a brief variable delay, a liquid reward is given to the monkey (Reward Period), after which the trial is considered finished (End).

Saccade patterns that create loops (or sequences that start and end on the same target) are the most effective strategies since they allow for an efficient and organized approach to scanning the 3 × 3 grid. We identify the cycle in each trial's saccade network with the greatest sum of edge weights along its path as the trial representative saccade pattern (TRSP); see Supplementary Figure 1 for a graphical depiction of TRSP identification.

Methods for computing the similarity between 1-D signals are numerous, easy to implement computationally, and provide simple intuitive understanding. Therefore, to make the comparison between trial representative saccade patterns as simple as possible, we converted each pattern into a one-dimensional saccade waveform (Figure 2A). This dimensionality reduction step was made possible by representing the identified trial representative saccade pattern as a series of (x, y) points in the space of the task grid and calculating each point's Euclidean distance from the centroid of all points.

Fig. 2. Classification of Trial Representative Saccade Patterns. (a) Saccade information in the form of identified saccadic movements during a trial is collectively represented as an adjacency matrix, which in turn encodes a directed and weighted network. A total of nine nodes exist: one for every green target on the task grid. Edge weights are calculated as the number of times that an individual saccade is made from one node to another. The network is converted into a trial representative saccade pattern (TRSP) by identifying the network cycle with the greatest sum of edge weights along its path. Each TRSP is treated as a 2-D polygon in the task grid space consisting of a set of (x, y) points. The saccade waveform is taken to be the vector of Euclidean distances between the polygon centroid and all of its points. A 1-D interpolation is performed to reduce each saccade waveform to 600 values. (b) A dissimilarity matrix is constructed utilizing the saccade waveforms from all observed trials. Each element in the dissimilarity matrix is the Euclidean distance between two saccade waveforms. The dissimilarity matrix is of size T × T, where T is the total number of trials for a given monkey. Agglomerative clustering with a threshold inconsistency coefficient of 0.95 was performed using the dissimilarity matrix to cluster all trial saccade waveforms. For Monkey G, we identified a total of 136 clusters, whereas for Monkey Y, we identified a total of 346 clusters. (c,d) The five most prevalent cluster saccade patterns across all sessions are shown for each monkey. The saccade pattern shown is the one which is most similar to all other saccade patterns in the same cluster.

We take the similarity between any two trial saccade patterns to be a metric based on the Euclidean distance between the trial saccade waveforms (see Methods). We use this metric to group all the trials into clusters of saccade patterns based on their similarity to each other. Since the exact number of clusters present in the data was not known, the agglomerative clustering algorithm was used to group trial representative saccade patterns (Figure 2B). This algorithm starts by treating each object (i.e., saccade pattern) as a single cluster and uses an iterative process to merge pairs of objects into clusters until all objects are grouped into one large cluster. The output of the algorithm is a dendrogram (cluster tree), which depicts the order in which objects should be grouped during clustering. In order to capture more natural divisions of our data during clustering, we used the inconsistency coefficient, a useful metric in agglomerative clustering that compares
the height of a link in a cluster tree to the heights of all the other links underneath it in the tree. A small coefficient denotes little difference between the objects being grouped together, thereby suggesting that the clustering solution is a good fit to the data. Setting a threshold on the inconsistency coefficient during clustering enables the identified groupings to more closely represent the natural divisions found in the data. Furthermore, tuning the inconsistency coefficient threshold allows for optimization of the clusters without arbitrarily selecting a range of the maximum number of clusters to test. Using the elbow method and the average within-cluster sum-of-squares from a range of inconsistency coefficient thresholds, we selected an inconsistency coefficient of 0.95 to be the optimal threshold criterion for clustering. As a result, a total of 136 clusters were identified for Monkey G and 346 for Monkey Y; see Supplementary Figures 2 and 3 for a full tabulation.

In Figure 2C, the five most prominent trial representative saccade patterns for each monkey, as well as their appearance frequency distribution across all sessions, are shown. These patterns and their dynamics closely resemble those previously reported in Ref. (3). Both monkeys demonstrate non-uniform distributions of cluster appearance frequencies across sessions. Each of the main cluster types is acquired, preferentially performed, and dropped at varying time windows throughout the task.
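The waveform-and-clustering pipeline described above can be sketched with SciPy's hierarchical clustering tools. This is an illustrative reconstruction, not the authors' code: the synthetic "trials" and the average-linkage method are assumptions, while the 600-sample interpolation and the 0.95 inconsistency threshold come from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def saccade_waveform(points, n_samples=600):
    """Ordered (x, y) pattern vertices -> 1-D waveform of distances from
    the pattern centroid, resampled to a common length (as in Fig. 2A)."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return np.interp(np.linspace(0, 1, n_samples),
                     np.linspace(0, 1, len(dists)), dists)

# Two template TRSPs on a 3 x 3 grid, plus positional jitter per "trial"
rng = np.random.default_rng(0)
square = np.array([(0, 0), (2, 0), (2, 2), (0, 2)], float)
zigzag = np.array([(0, 0), (1, 2), (2, 0), (0, 1), (2, 1)], float)
trials = [square + rng.normal(0, 0.02, square.shape) for _ in range(20)] + \
         [zigzag + rng.normal(0, 0.02, zigzag.shape) for _ in range(20)]
waveforms = np.vstack([saccade_waveform(t) for t in trials])

# Condensed Euclidean dissimilarity between waveforms, then agglomerative
# clustering cut wherever the inconsistency coefficient exceeds 0.95
Z = linkage(pdist(waveforms), method='average')
labels = fcluster(Z, t=0.95, criterion='inconsistent')
```

Because the two templates produce clearly different waveforms, trials generated from different templates end up in different clusters, while jittered copies of the same template stay together (up to the fragmentation that a strict inconsistency cut can introduce).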
Inferring Effective Connectivity.
After quantitatively characterizing behavior, our next aim was to demonstrate that pairwise differences in sequential saccade patterns during habit formation can be explained by the energy requirements of the accompanying neural state transitions. We approached the problem by using and extending recent advances in network control theory (18–20). Fundamental to any control energy analysis is knowledge of the network structure and dynamics. Thus, as a first step we extract a network of interactions between the observed regions from available channels (Figure 3A). In both monkeys, more than half of the present channels were associated with the caudate nucleus (CN), and recordings from Brodmann area 8 (BA-8) were available in both. Although the anatomical location of each channel was known, no information regarding their anatomical connectivity was available, and we therefore turned to alternative inference approaches (26). Specifically, we inferred the effective connectivity of the regions using their neural activity (see Methods). For each session, the activity of the available channels was calculated as the average firing rate during each individual trial. This set of trial firing rates was used to calculate the transfer entropy (35) between all pairs of available channels, which provides basic structural information about the effective connectivity between them (Figure 3B). The effective connectivity matrices for Monkey G and Monkey Y are displayed in Figure 3C and 3D.
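The transfer-entropy step can be illustrated with a minimal plug-in estimator over binned firing rates with a history length of one. The quantile binning, history length, and plug-in counting are simplifying assumptions for this sketch; the estimator and parameters actually used are those of Ref. (35) and the paper's Methods.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, n_bins=4):
    """Plug-in estimate of transfer entropy TE(source -> target) in bits,
    with history length 1, after quantile-binning each rate series."""
    def discretize(x):
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(x, edges)
    x = discretize(np.asarray(source, dtype=float))
    y = discretize(np.asarray(target, dtype=float))
    yt1, yt, xt = y[1:], y[:-1], x[:-1]
    n = len(yt1)
    c_xyy = Counter(zip(yt1, yt, xt))   # joint (y_{t+1}, y_t, x_t)
    c_yy = Counter(zip(yt1, yt))        # joint (y_{t+1}, y_t)
    c_yx = Counter(zip(yt, xt))         # joint (y_t, x_t)
    c_y = Counter(yt)                   # marginal y_t
    te = 0.0
    for (a, b, c), k in c_xyy.items():
        p_joint = k / n
        p_cond_full = k / c_yx[(b, c)]          # p(y_{t+1} | y_t, x_t)
        p_cond_part = c_yy[(a, b)] / c_y[b]     # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_part)
    return te

# Example: x drives y with a one-step lag, so TE(x -> y) >> TE(y -> x)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.concatenate(([0.0], x[:-1])) + 0.1 * rng.normal(size=500)
te_xy = transfer_entropy(x, y)  # coupled direction: large
te_yx = transfer_entropy(y, x)  # reverse direction: small (bias only)
```

Computing this quantity for every ordered channel pair yields a directed, weighted effective connectivity matrix of the kind shown in Figure 3C,D.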
Assessing the Control Energy Required for Neural State Transitions.
In applying and extending network control theory to understand habit formation, our next step is to use the effective connectivity networks to estimate the energy requirement of neural state transitions. We use the concept of minimum control energy from control theory, which represents the minimum amount of input energy necessary to cause a network to transition from a specific initial state of activity to a specific final state of activity (Figure 4A) (23). Intuitively, the more energy a transition requires, the more difficult it is to reach the final state.

In prior work, minimum control energy has been defined for mechanical and technological systems, or abstract mathematical models. To use the approach here, we must first identify a relevant dynamical model for the considered network process. This model consists of (i) a network state, which we define as the trial firing rates (Figure 4B), (ii) a transition map for the state, which we define as the inferred effective connectivity matrix, and (iii) a set of driver nodes, which include all the nodes in our study (see Methods). With these variables defined, we then estimate the energy required to transmute one state into another, allowing activity to spread solely through effective connections. Specifically, we calculated the average minimum control energy (ACE) theoretically required for the observed trial-to-trial state transitions within each session (see Methods). The ACE dynamics of both monkeys follow a similar downward trend throughout the entire experiment (Figure 4C, 4D). A simple linear regression confirmed a statistically significant negative effect of session on average minimum control energy (Monkey G: β = -.1141 (-.1703, -.0578), R = .30, p < 10−; Monkey Y: β = -.0157 (-.0235, -.0080), R = .13, p < 10−). See Supplementary Figure 6 for robustness of ACE estimates to variation in model parameters.

Fig. 3. Inferring Effective Connectivity from Neural Activity. (a) Summary of channel recording regions for both monkeys. Percentages denote the percent of all available channels which record from a given region across all trials. CN = caudate nucleus; BA 8, 9, 13-14, 24-25, 45-46 = Brodmann areas 8 (frontal eye fields), 9 (dorsolateral and medial prefrontal cortex), 13-14 (insula and ventromedial prefrontal cortex), 24-25 (anterior and subgenual cingulate), and 45-46 (pars triangularis and middle frontal area). (b) Spike trains from all channels for a given session were converted into an average trial firing rate matrix. The matrix is of size C × T, where T is the number of trials for a session and C is the number of available channels during the session. We used transfer entropy (35) to estimate the effective connectivity between session channels. (c,d) The overall combined effective connectivity matrices for both monkeys are shown, where channels are organized according to their respective hemispheres (LH = Left; RH = Right). Both matrices are individually normalized by dividing all elements by the magnitude of the largest magnitude element.

Fig. 4. Estimating the Minimum Control Energy to Transition between Neural States. (a) Visual depiction of control energy analysis. Given a network, the goal is to identify a set of time-dependent inputs (e.g., u1(t), u2(t), and u3(t)) into network nodes that drives the system from an initial state to a target state in a fixed period of time. A state is a 1 × N vector x(t) whose elements represent the activity of each of N nodes in the network at some time t. The calculation of minimum control energy estimates the energy required to transmute one state into another, allowing activity to spread solely through known inter-regional links. The greater the minimum control energy, the more costly and hard-to-reach that target state is said to be. (b) For a given session, we model the network of channels as a linear time-independent system and compute the minimum control energy required to transition between chronologically ordered trial states. A trial state is taken to be the firing rate of each channel during a trial. The topology of the network is defined by the session effective connectivity matrix. (c,d) The average minimum control energy (ACE) calculated across all pairwise trial state transitions of a particular session. ACE dynamics for both monkeys across their respective sessions are shown. Filled boundary areas represent +/- 1 standard deviation.
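The minimum control energy calculation can be sketched through the finite-horizon controllability Gramian of a linear system. The continuous-time form, the identity input matrix B = I (all nodes are drivers, as in the paper), and the unit horizon T = 1 are assumptions of this illustration; the paper's exact formulation is given in its Methods.

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, x0, xf, T=1.0, n_steps=500):
    """Minimum input energy to drive dx/dt = A x + u from x0 to xf in
    time T, with every node a driver (B = I). The finite-horizon
    controllability Gramian W = int_0^T e^{At} e^{A't} dt is computed
    by simple numerical quadrature."""
    n = A.shape[0]
    ts = np.linspace(0.0, T, n_steps)
    dt = ts[1] - ts[0]
    W = np.zeros((n, n))
    for t in ts:
        eAt = expm(A * t)
        W += eAt @ eAt.T * dt
    # Energy of the optimal input: (xf - e^{AT} x0)' W^{-1} (xf - e^{AT} x0)
    v = xf - expm(A * T) @ x0
    return float(v @ np.linalg.solve(W, v))

# Toy "effective connectivity," made stable by shifting the spectrum
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)) / 5.0
A -= np.eye(5) * (np.abs(np.linalg.eigvals(A)).max() + 1.0)

x0 = np.zeros(5)
e_near = min_control_energy(A, x0, 0.1 * np.ones(5))  # small state change
e_far = min_control_energy(A, x0, np.ones(5))         # large state change
```

Because the energy is a quadratic form in the state displacement, a target state ten times farther from the origin costs one hundred times more energy, matching the intuition in Figure 4A that distant target states are costly.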
Relating the Control Energy to Saccades.
We next sought to quantitatively characterize how the monkeys' approaches to the free-scanning task changed over time. For this purpose, we defined three metrics: the similarity factor, the complexity factor, and the cluster label entropy. We will discuss each in turn.
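Minimal stand-ins for the three metrics can be sketched as follows. The distance-to-similarity mapping and the box-counting fractal-dimension estimator are illustrative assumptions (the paper's exact formulas are in its Methods); the Shannon entropy of the label sequence follows the paper's definition directly.

```python
import numpy as np
from collections import Counter

def similarity_factor(w1, w2):
    """Similarity between two saccade waveforms, mapped from their
    Euclidean distance into (0, 1]; the exact mapping here is an
    illustrative placeholder."""
    d = float(np.linalg.norm(np.asarray(w1, float) - np.asarray(w2, float)))
    return 1.0 / (1.0 + d)

def complexity_factor(points, scales=(4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a 2-D pattern,
    a simple stand-in for the complexity factor."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0).max()
    counts = [len(set(map(tuple, np.floor(pts * s).astype(int))))
              for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return float(slope)

def cluster_label_entropy(labels):
    """Shannon entropy (bits) of a session's chronological cluster labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A straight scanning path has dimension near 1 ("too simple"), and an
# even split over two clusters gives exactly 1 bit of label entropy.
line = np.column_stack([np.linspace(0, 1, 1000)] * 2)
cf_line = complexity_factor(line)
cle = cluster_label_entropy([0, 0, 1, 1])
```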
Similarity Factor.
We refer to the first metric as the similarity factor (SF), which represents the similarity between two trial representative saccade patterns performed one after another during the same session (see Methods). This metric can be used to answer the question, "Is the monkey performing increasingly similar patterns the longer she engages in the task?" The higher the value of this metric, the more similar the saccade patterns between trials. It is important to note that this metric was designed to be orientation-independent and reflection-independent. Accordingly, the similarity factor renders two instances of the same pattern as identical even if one was rotated, the patterns were exact mirror images of each other, or rotations of exact mirror images (see Supplementary Figure 4). This feature of the similarity metric is shown in Figure 5A, where patterns 3 and 5 differ only in their orientation but result in a similarity factor of approximately 1. See Supplementary Figure 5a for the average similarity factor as a function of session for both monkeys.

Although the range of similarity values across task sessions for both monkeys was the same (0.55 to 0.80), the average similarity factor distribution of Monkey Y is skewed towards higher values (Figure 5D). A two-way Kolmogorov-Smirnov test confirmed that the average similarity factor distributions of the two monkeys were significantly different from each other (D = 0., p = 7.×10−). Furthermore, the Pearson correlation between the average similarity factor and average minimum control energy (for Monkey G, C = −., p = 3.×10−; for Monkey Y, C = −., p < 10−) was significantly negative in both monkeys (Figure 5E, 5F). Permutation tests were performed independently for each monkey, to ensure that the observed associations between average similarity factor and average minimum control energy were due to the observed neural circuit architecture (see Methods). The observed correlations between average similarity factor and average minimum control energy for both monkeys proved to be significantly more negative than expected under their respective permutation null distributions (for Monkey G, p < 10−; for Monkey Y, p < 10−).

Fig. 5. Relating Control Energy to Saccades. (a) The similarity factor (SF) captures information about the similarity between chronologically ordered trial saccade patterns. The similarity between two trial representative saccade patterns is calculated as a metric based on the Euclidean distance between two saccade waveforms. Here we show a visual depiction of 5 example patterns performed by the monkeys, their respective saccade waveform representations, and their similarity to one another. (b) The complexity factor (CF) is calculated as the fractal dimension of the saccade pattern. A range of observed patterns and their complexity factors are shown. Both extremely low and extremely high complexity patterns result in poor scanning efficiency during the task. (c) The cluster label entropy (CLE) metric captures information about the monkey's preference towards exploration of various patterns or exploitation of only a select few during a task session. CLE is calculated as the Shannon entropy of the vector of identified trial cluster labels from a single session. Higher values indicate preference for constantly exploring a variety of clusters with minimal exploitation of any single cluster. (d) Side-by-side comparison of saccade metric distributions across all sessions for both monkeys. The average similarity and average complexity factors were calculated on a per-session basis. (e,f) Pearson correlations between all average saccade metrics and the average minimum control energy (ACE) for Monkey G (e) and Monkey Y (f). Red dashed lines are present as a visual aid.

Complexity Factor.
We will refer to the second metric that we defined as the complexity factor (CF), which is a quantitative measure of the complexity of an individual trial representative saccade pattern in a given session. We define complexity as the fractal dimension of the identified pattern (see Methods). The metric can be used to answer the question, "Is the monkey approaching the task in a strategic way, or is it simply saccading at random?" Both extremely low and extremely high complexity values are not optimal strategies for scanning the task grid efficiently. A pattern with a low complexity (≈ 1) is often too simple and does not cover all the targets in the task. In contrast, a saccade pattern of high complexity (i.e., greater than 1.3) is extremely disordered, tortuous, and seemingly random without any strategy (Figure 5B). Patterns that strike a balance between organization and complexity offer the most efficient approach to scanning the 3 × 3 grid. See Supplementary Figure 5b for the average complexity factor as a function of session for both monkeys.

Both monkeys performed saccade patterns of similar complexity throughout their respective trials, with most sessions averaging to values between 1.24 and 1.28 (Figure 5D). However, the full range of complexity that Monkey Y exhibited was larger than that exhibited by Monkey G, as MY spent several sessions performing markedly simple patterns. A two-way Kolmogorov-Smirnov test confirmed that the average complexity factor distributions of the two monkeys were significantly different from each other (D = 0., p = 3.×10−). Furthermore, the Pearson correlation between the average complexity factor and minimum control energy (for Monkey G, C = −., p = 7.×10−; for Monkey Y, C = −., p = 1.×10−) was significantly negative in both monkeys (Figure 5E, 5F). Permutation tests were performed independently for each monkey, to ensure that the observed associations between the average complexity factor and average minimum control energy were due to the observed neural circuit architecture (see Methods). The observed correlations between the average complexity factor and average minimum control energy for both monkeys proved to be significantly more negative than expected under their respective permutation null distributions (for Monkey G, p < 10−; for Monkey Y, p < 10−).

Cluster Label Entropy.
We will refer to the third metric that we defined as the cluster label entropy (CLE), which is a quantitative estimate of a monkey's preference towards pattern exploration or exploitation during a task session. It is a direct calculation of Shannon's information entropy of the vector of chronologically ordered trial cluster labels in an individual session. The higher the cluster label entropy of a session, the less ordered the behavior and the more prone the monkey was to explore a variety of different saccade patterns coming from multiple identified clusters. The metric can be used to answer the question, "Is the monkey choosing to explore many different saccade patterns across trials, or does it continuously exploit a select few?" See Supplementary Figure 5c for the saccade cluster entropy as a function of session for both monkeys.

The distribution of cluster label entropy for Monkey Y across sessions shows that she exhibited moments of both extreme exploitation (CLE = 2-3) and extreme exploration (CLE = 4-5.5). In contrast, Monkey G exhibited a preference for mid-range values of cluster label entropy, with a majority of the task sessions falling in the range of 3-3.5, a balance between exploitation and exploration (Figure 5D). A two-way Kolmogorov-Smirnov test confirmed that the cluster label entropy distributions of the two monkeys were not significantly different from each other (D = 0., p = 0.). The Pearson correlation between cluster label entropy and average minimum control energy (for Monkey G, C = 0., p = 0.; for Monkey Y, C = 0., p < 10−) was significantly positive in Monkey Y only (Figure 5E, 5F). Permutation tests were performed independently for each monkey, to ensure that the observed associations between cluster entropy and average control energy were due to the observed neural circuit architecture (see Methods). The significant correlation between cluster label entropy and average minimum control energy found in Monkey Y also proved to be significantly more positive than expected from the respective permutation null distribution (for Monkey G, p = 1; for Monkey Y, p < 10−).

Identifying Neural Substrates Particularly Key to the Relation Between Control Energy and Saccades.
In a final step, we seek to determine which part(s) of the inferred network of brain regions significantly contribute to the observed relationships between control energy and behavior. We do so by performing a virtual lesion analysis consisting of a series of inter- or intra-region edge knockouts in the inferred effective connectivity matrix (see Figure 6A and
Supplementary Figure 7 for a schematic depiction of this approach). An edge knockout refers to setting the weights of edges in the effective connectivity matrix to a value of zero, thereby virtually removing connections in the network. Edges whose knockout serves to remove the correlation (resulting in a p-value greater than the significance threshold α) were identified for both the average similarity factor and cluster label entropy.

Szymula et al. PNAS | June 26, 2020 | vol. XXX | no. XX

Fig. 6. Virtual Region-Specific Lesion Analysis. (a) Visual depiction of the lesion analysis workflow. The lesion knockout consists of performing a series of edge knockouts where (i) all edges between two regions are set to zero in the effective connectivity matrix (inter-region), or where (ii) all edges connecting one region to itself are set to zero (intra-region). The average minimum control energy is then calculated using the lesioned effective connectivity matrix, and correlations are computed between the average minimum control energy and all saccade metrics. (b-c) Results of the lesion analysis across all possible region combinations for Monkey G (b) and Monkey Y (c). All post-lesion correlations were compared to an equivalent null distribution constructed from random edge lesions. Matrix elements represent the absolute value of the difference between post-lesion and pre-lesion correlations. All grayed-out edges represent lesions which did not result in a significant change in correlation value when compared to their respective null distribution.

Although significant changes were found in both monkeys, none were drastic enough to fully disrupt the observed correlations between behavior and control energy. The observed behavior-energy correlations exhibited resilience to disruptions in the inferred effective connectivity (see
Supplementary Figure 8). In order to successfully disrupt the observed correlations in Monkey G, a minimum of 84% of all edges from its connectivity matrix have to be removed. This minimum threshold increases for Monkey Y, where at least 95% of all edges have to be set to zero in order to significantly disrupt the correlations. These findings suggest that the energetic constraints on neural state transitions relevant for behavior are only partially localized (Figure 6B & Figure 6C), but may be more accurately described as being broadly distributed across the circuit.
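The edge-knockout operation itself is simple to sketch. The snippet below is an illustrative Python sketch (the authors' analyses used their own MATLAB pipeline); the matrix size, channel indices, and region labels here are hypothetical stand-ins.

```python
import numpy as np

def knockout(A, src, dst):
    """Return a copy of an effective connectivity matrix with all edges
    from channels in `src` to channels in `dst` set to zero."""
    A_lesioned = A.copy()
    A_lesioned[np.ix_(src, dst)] = 0.0
    return A_lesioned

# Toy 6-channel network; channels 0-2 stand in for one region ("CN"),
# channels 3-5 for another ("BA-8") -- the grouping is hypothetical.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
cn, ba8 = [0, 1, 2], [3, 4, 5]

# Inter-region knockout: remove all edges between the two regions.
inter = knockout(knockout(A, cn, ba8), ba8, cn)
# Intra-region knockout: remove all edges connecting a region to itself.
intra = knockout(A, cn, cn)
```

One would then recompute the average minimum control energy on the lesioned matrix and compare the post-lesion correlations against a null distribution of random edge lesions, as in Fig. 6.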
Discussion
Learning commonly requires the development of strategies to increase reward in the face of uncertainty (36). Such strategies can manifest in sequential behaviors that serve to continuously gather information about the environment (3). Yet precisely what rules guide the formation of sequential behaviors remains poorly understood. Although recent work highlights the relevance of distributed circuitry (37), progress has been hampered by the lack of a formal theory linking activity in such circuitry to habitual (or non-habitual) behavior. Here we address this challenge by positing a network control theory of how sequences of behaviors arise from the energy requirements of sequences of neural states occurring atop a complex network structure. Combining behavioral measurements and neural recordings from two female macaque monkeys performing a free-view scanning task over 60–180 sessions (3, 6), we find that our theory predicts smaller energy requirements for transitions between trials in sessions with a high degree of similarity between complex saccade patterns, and in sessions characterized by an emphasis on the repetition of a small subset of patterns rather than exploration of a more diverse set of distinct patterns. Moreover, we employ a virtual lesioning approach to demonstrate that the derived relationships between control energy and behavior are highly resilient to small, local disruptions in the network, suggesting these observations are associated with the network as a whole rather than a small subset of its nodes. Our study advances a theoretically principled approach to the study of habit formation, provides empirical support for those theoretical principles, and offers a blueprint for future studies seeking to explain how behavior arises from changing patterns of activity in distributed neural circuitry.
Neural circuits as networks.
The study of habit learning, like the study of many other cognitive functions, has benefited immensely from lesion studies (38–41), and from focused recordings in single brain regions (6, 42–44). Yet, the field has long appreciated that single regions do not act in isolation, but instead form key components of wider circuits relevant for perception (45, 46), action (47), and reward (48), among others. With recent concerted funding support (49), many new technologies are now available for large-scale recording of neural ensembles, including methods for high-density multi-region recordings (50, 51) and associated novel electrode technologies (52). These advances support a wider goal of gathering ever more detailed measurements of activity across the whole brain (53). Here we use multi-channel, multi-area recordings to better understand the distributed nature of neural circuitry underlying habit formation. The channels span Brodmann areas 8 (frontal eye fields), 9 (dorsolateral and medial prefrontal cortex), 13-14 (insula and ventromedial prefrontal cortex), 24-25 (anterior and subgenual cingulate), and 45-46 (pars triangularis and middle frontal area), allowing us to probe multi-area activity and inter-areal interactions that track habitual behavior. Our data naturally motivate the question of how circuit activity supports behavior. This question is certainly not new, and not even specific to neural systems. In fact, the recent rapid expansion of work in artificial neural networks has highlighted the fundamental fact that the architecture of a network is germane to the system's function (54). Liquid state machines (55), convolutional neural networks (56), and Boltzmann machines (57) all perform distinct tasks defined by their architectures. Similarly, in biological neural systems, empirical and computational evidence links the architecture of projections with the nature of memory retrieval (58), flexible memory encoding (59), sequence learning (60), and visuomotor transformation (61).
Intuition can be drawn from simple small architectures or network motifs (62), which have markedly distinct computational and control properties (63). For example, a chain is conducive to sequential processing, whereas a grid is more conducive to parallel processing. Unfortunately, the architecture of multi-area circuits in the primate brain is not quite so simple, thus hampering basic intuitions and straightforward predictions. To address this challenge, we use the mathematical language of network science (7). The network approach allows us to embrace the distributed nature of neural circuit activity and quantitatively describe the empirically observed architecture, while also formalizing questions regarding how that architecture supports circuit function (12).

Activation and effective connection.
Current efforts in computational and systems neuroscience are divided by a focus either on patterns of neural activity or on patterns of neural connectivity. At the small scale, this divide separates studies of the firing rates of neurons from studies of noise correlations (64, 65). At the large scale, this divide separates studies using general linear models or multivoxel pattern analysis in fMRI (66, 67) from studies using graph theoretical or network approaches (12). A key challenge facing the field is the need to span this divide, both in experimental and in theoretical investigations (68). Indeed, taking the next step in understanding behavior requires the development of computational models of cognitive processes that conceptually or mathematically combine activity and connectivity (11). Here we summarize firing rate activity across channels as a brain state, and we probe how such states can change given the effective connectivity between channels. The fact that network architecture can constrain the manner in which activation patterns change (and vice versa (69)) is supported by empirical evidence in large-scale human imaging (70), and by a long history of computational modeling studies in human and non-human species (68, 71). Here we inform our modeling choice by noting a particular characteristic of that constraint, which exists in the following form: state x is more likely than state y given network A. Specifically, we acknowledge that neural units that are densely connected are more likely to share the same activity profile than neural units that are sparsely connected, for example due to being located in a distant area. This phenomenon naturally arises in many dynamical systems (see, for example, (72–74)), and its study has recently been further formalized in the emerging field of graph signal processing (75, 76), which offers quantitative measures to evaluate the statistical relations between a pattern of activity and an underlying graph.

Network control theory.
Beyond positing a probabilistic relationship between activity and connectivity, we define an energetic relation between them. Our formalization utilizes the nascent field of network control theory (18–20), which develops associated theory, statistical metrics, and computational models for the control of networked systems, and then seeks to validate them empirically. The broad utility of the approach is nicely exemplified in recent efforts that address such disparate questions as how to control the spread of infectious disease in sub-Saharan Africa (77), of current in power grids (78), or of pathology in Alzheimer's disease (79). In the context of neural systems, the theory requires three components: (i) a definition of the system's state, such as population-level activity in large-scale cortical areas (23) or cellular activity in microscale circuits (25), (ii) a measurement of the connections between system components, such as white matter tracts (80) or synapses (81), and (iii) a form for the dynamics of state changes given a network, such as full nonlinear forms (82) or linearization around the current operating point (83).
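The three components above combine in the canonical discrete-time linear model x(t+1) = A x(t) + B u(t), where the minimum energy to steer x₀ to x_f in T steps is dᵀW⁻¹d, with W the finite-horizon controllability Gramian and d the gap between the target and the uncontrolled evolution. The sketch below is a generic numpy illustration, not the authors' exact formulation; the toy stable A, the choice B = I, and the horizon T are assumptions.

```python
import numpy as np

def min_control_energy(A, B, x0, xf, T):
    """Minimum energy to drive x(t+1) = A x(t) + B u(t) from x0 to xf
    in T steps, via the finite-horizon controllability Gramian."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Apow = np.eye(n)
    for _ in range(T):
        W += Apow @ B @ B.T @ Apow.T  # accumulate A^tau B B' (A^tau)'
        Apow = A @ Apow               # after the loop, Apow is the T-th power of A
    d = xf - Apow @ x0                # deviation from the free (uncontrolled) evolution
    return float(d @ np.linalg.solve(W, d))

# Toy example: a stable 3-channel system with direct control of every channel.
A = 0.9 * np.eye(3)
B = np.eye(3)
x0 = np.array([1.0, 0.0, 0.0])

free_endpoint = np.linalg.matrix_power(A, 5) @ x0
e_free = min_control_energy(A, B, x0, free_endpoint, T=5)  # essentially zero
e_push = min_control_energy(A, B, x0, np.array([0.0, 1.0, 0.0]), T=5)
```

States the system would reach on its own cost essentially no energy, while pushing the state elsewhere costs positive energy; in the present framework, x₀ and x_f are the firing-rate states of consecutive trials and A is the inferred effective connectivity.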
Here we let the state reflect the firing rate activity across channels, the network reflect effective connectivity between channels, and the dynamics take the form of a linearization around the current operating point. After formalizing the theory for a given system, one can use longstanding analytical results to estimate the energy required to move the system from one state to another (84, 85). We focus specifically on the problem of identifying the minimum control energy, which is a common subform of the more general problem of identifying the control energy required in the optimal trajectory between state i and state j (22). Outside of neuroscience, the approach has proven useful, for example, in increasing energy efficiency in induction machines (86), enhancing performance of transient manufacturing processes (87), and managing energy usage in electric vehicles (88), among others. In the context of neural systems, the study of such trajectories has been used to address questions of how the brain moves from its resting or spontaneous state to states of task-relevant or evoked activity, how the brain's network architecture determines which sets of states require little energy to reach, and how electrical stimulation induces changes in brain state (23). Here, we use the approach to estimate the amount of energy that is theoretically needed to push the circuit from the state reflecting firing rate activity in one trial to the state reflecting firing rate activity in the next trial. By performing the calculation for all pairs of temporally adjacent trials, we are able to examine changes in energy over the course of learning. More importantly, structuring the investigation in this way allows us to determine how such energy relates to pairwise differences in sequential behaviors during habit formation.

Energetics of habit formation.
In an expansion upon prior work applying network control theory to neural systems, we posit that low energy state transitions characterize processes (and their associated behaviors) that are less cognitively demanding. Informally, the underlying notion harks back across at least two centuries in the history of neuroscience (89). More formally, we can draw on the theory of maximum entropy (34) to posit that transitions between low entropy saccade patterns require greater effort, and therefore energy, than transitions between high entropy saccade patterns. Note that a high entropy saccade pattern is one that spans many targets in a disordered manner, while a low entropy saccade pattern is one that spans few targets in an organized, structured manner. Concretely, we operationalize the entropy of a trial representative saccade pattern as its fractal dimension, which we refer to as its complexity. Consistent with our hypothesis, we find that the saccade complexity is negatively correlated with the energy theoretically required to move the neural circuit from the firing rate state of one trial to the firing rate state of the next trial. Broadly, our data join those acquired in other model systems, anatomical locations, and species in providing evidence that maximum entropy models explain key features of neural dynamics (30–32). Moving beyond the assessment of a saccade pattern's entropy, we next consider the role of habits in modulating the cognitive demands elicited by a task (90, 91). We hypothesize that transitioning between the same (or similar) saccade patterns will require less cognitive effort, and therefore less energy, than transitioning between different (or dissimilar) saccade patterns. Consistent with our hypothesis, we find that transitioning between more similar saccades is associated with smaller predicted energy, and that sessions with a larger number of distinct saccade patterns are associated with greater predicted energy.
Collectively, these data comprise a formal link between the energetics of neural circuit transitions and sequential behaviors.

The role of localized vs. distributed computations.
By definition, the theoretically predicted energy is a function of both the neural states and the underlying network of effective connectivity, and therefore reflects contributions from all channels and from all inter-channel relations. Nevertheless, it is still of interest to determine whether some channels, or some inter-channel relations, contribute relatively more or less than others (23). Using a virtual lesioning approach, we found that removal of connections in the caudate nucleus, and of connections between BA-8, BA-9/45/46, and BA-13/14, resulted in predicted energies that caused small but significant decreases in the magnitude of the correlation with saccade metrics. The critical role of the caudate nucleus in habit formation is consistent with prior lesion studies (38–41) and recording studies (6, 42–44). Moreover, the role of prefrontal cortex is consistent with transcranial magnetic stimulation studies showing its necessity for higher-level sequential behavior (92), and its involvement in uncertainty-driven exploration (93). Here we extend these prior studies by demonstrating the relevance of effective connections in these same areas. Our subsequent random lesioning results further extend our understanding of the neuroanatomical support for these behaviors by suggesting that the energetic constraints on neural state transitions are broadly distributed across the circuit. Another feature of our findings that we find perhaps particularly striking is their specificity to the two monkeys. Monkey G performed more dissimilar saccade patterns from trial to trial, consistent with the goal-directed exploration supported by the prefrontal connections identified in our lesioning analysis. In contrast, Monkey Y performed more similar saccade patterns from trial to trial during sessions, consistent with lower-level habit formation supported by the caudate nucleus identified in our lesioning analysis.
While our study is underpowered to formally probe individual differences, these preliminary observations motivate future work examining variation in energy-behavior relations in healthy cohorts and disease models.
Methodological Considerations.
Several methodological considerations are particularly pertinent to this work, and here we mention the three that are most salient. First, we note that in understanding the manner in which neural units communicate, one might wish to have full knowledge of the structural wiring between those neural units (94, 95). Despite recent advances in technology at the cellular scale (96–98), such information is challenging to acquire in vivo in large animals, and currently not possible at all in primates. A reasonable alternative is to use the empirical measurements of activity to infer the effective relationships between neural units (26, 99, 100). Here we take precisely this tack, thereby distilling a weighted connectivity matrix summarizing the degree to which each channel statistically affects another. A marked benefit of effective over structural connectivity between large-scale brain areas is that only the former can be used to study temporal variation on the time scale at which learning occurs (101). A second important consideration pertinent to our work is that we utilize analytical results from the study of linear systems (22) to inform our network control theory approach (19, 23). It is well known that neural dynamics – measured in distinct species and across several imaging modalities – are in fact nonlinear (71). Linear models of nonlinear systems are most useful in predicting behavior in the vicinity of the system's current operating point (21), or for explaining coarse time-scale population-average activity (e.g., see (102)). For the study of other sorts of behavior or signals, future work could consider extending our simulations to include appropriate nonlinearities (18). A third important consideration pertinent to our work is the fact that animal behaviors in general – and saccades in particular – are complex and difficult to describe cleanly (103, 104). Here we address this difficulty by developing a novel algorithmic approach to the extraction of representative saccade patterns.
Our method capitalizes on a graphical representation of saccades, which in turn allows us to use previously developed tools for the characterization of graphs (7). Our effort follows a growing literature using network models to parsimoniously represent and study animal behavior (105, 106). Publicly available MATLAB code implementing the algorithm may prove useful in the context of similar data, and can be found at https://github.com/kpszym/SaccadePatternExtraction.git.
Conclusion.
Systematically canvassing uncertain environments for reward induces habitual behaviors and engages distributed neural circuits. Here we offer a formal theory based on the principles of network control to account for how pairwise differences in sequential behaviors during habit formation can be explained by the energetic requirements of the accompanying neural state transitions. In doing so, the study frames the concept of cognitive computations within a formal theory of network energetics. Our findings further support the notion that free energy or maximum entropy are useful explanatory principles of behavior. While outside the scope of this study, many relevant questions remain unasked. Future work could usefully expand upon our observations by increasing the number of recorded areas or altering the task to include different sorts of environmental uncertainties. Incorporating additional computational capabilities into the theory, and exercising agent-based simulations to determine optimal cost functions and associated learning rules for artificial neural systems placed in similar environments, could also be used to expand upon this work.

Methods
The data consist of behavioral measurements and neural recordings from two female macaque monkeys: Monkey G (MG) and Monkey Y (MY). Full descriptions are provided in Refs. (3, 6), and here we briefly summarize. Both monkeys were individually monitored, and data were recorded while the monkeys performed a free-viewing scan task. The task was performed across multiple days (sessions), with each session consisting of multiple back-to-back trials. Eye movements were recorded utilizing an infrared tracking system and converted into a sequence of saccades, or rapid eye movements from one point to another. All measurements of neural activity were obtained from individual chronically implanted electrode arrays recording bilaterally from various points in the caudate nucleus (CN), frontal eye fields (FEF), and prefrontal cortex (PFC).
Task Structure.
The task begins when a grid of small gray circles is presented on a screen in front of the monkey, whose head is fixed in place. After a variable period of time, the inner gray circles of the grid are replaced with a grid of green targets (Targets On). The monkey's gaze may not leave the space defined by the perimeter of the green dots, or the trial will be marked as unsuccessful and the screen will revert back to a grid of only gray dots. After a variable time, one of the green targets is baited (Target Baited) such that if the monkey's gaze falls onto the baited target, the trial is rewarded. The monkeys were not given information about when the target was baited or which target was baited. At this point in the task, if the monkey's gaze crossed into or over the baited target, the grid of green dots was replaced by the original gray dots (Targets Off). After a variable time, the monkey was presented with a short reward to indicate success; acknowledging their preferences, juice was offered to Monkey G, and an alternative reward mixture was offered to Monkey Y.
Data Quality Assurance and Cleaning.
Due to the inherent complexity of the task and behavioral responses thereto, it is critical to apply data quality standards that ensure statistical analyses are appropriate and well-powered. Accordingly, all analysis involved in inferring effective connectivity was limited to task trials where the monkeys were presented with the 3 × 3 grid of green targets.

Classification of Saccade Sequences.
Conversion to a Saccade Network.
Measurements of monkey eye movements during the free-scanning task comprise a list of saccades performed in the scanning window of a given trial. Each saccade is represented as a vector of two numbers, which denote the start and end grid targets of the saccade. The entire sequence of saccades performed during a given trial can be written as an m × 2 matrix, SL. Each row in SL is a single saccade, with the first and second column representing the start and end targets, respectively. This representation can be thought of as a list of directed connections between points. It is then possible to convert a trial-specific sequence of saccades into a directed and weighted graph, which we will refer to as the saccade network. In graph theory (7), a generic network is made up of N nodes that are connected pairwise by E edges. Here, a saccade network consists of nine nodes (N = 9), one for each grid target, and edges defined by the saccade sequence. Specifically, an edge between two nodes in a saccade network exists if a saccade was performed between those two targets. The weight of the edge is given by the number of times that specific saccade was performed. Therefore, the saccade network can be written as the directed and weighted N × N adjacency matrix, S, whose element S_ij denotes the weight of the edge between node i and node j. Edges with zero weight signify no connection.

Identifying Trial Representative Saccade Sequences.
We seek to identify unique saccade patterns across trials, which will in turn allow us to investigate how performance strategies might evolve throughout the task (3). We refer to the unique saccade pattern performed during a given trial as the trial representative saccade pattern (TRSP). To identify the TRSP, we utilize the saccade network of a given trial and identify all the cycles in the network. In graph theory, a cycle is defined as a series of edges that allows for a node to be reachable from itself. Therefore, a cycle in the saccade network is a loop that starts and ends on the same target. This cycle can be represented
as a binary cycle matrix, L, of size N × N; a given element of L is set to one if the corresponding edge is a part of the cycle. We take the dot product of the cycle matrix L and the saccade network S, and refer to the resulting matrix as L_S. For a given saccade network, multiple cycles can exist, and therefore also multiple L_S matrices. We define the trial representative saccade pattern to be the cycle with the greatest elementwise sum of all weights in its L_S, which intuitively is the cycle composed of the most common point-to-point saccades. For an intuitive graphical depiction of this process, see Supplemental Figure 1.

Characterizing Similarity between Trial Saccade Sequences.
Measuring the similarity between two saccade patterns is difficult in part because many statistics have been developed for the comparison of two graphs, but it is often unclear when one statistic is more or less appropriate than another (107). To circumvent this issue, it is useful to consider the cyclic path between nodes in a single trial representative saccade pattern to be a 2-D polygon drawn on the 3 × 3 grid. We then represent the polygon as a set of n points, P:

P = {(x_i, y_i) | (x_i, y_i) ∈ R²}, for i = 1, …, n, [1]

which finely samples the lines composing the saccade polygon. While a set of points is a simpler representation than a graph, it remains difficult to compare these point sets in a manner that accounts for the original geometry. To address this difficulty, we first calculate the centroid of the saccade polygon, C, and then we calculate the Euclidean distance between C and every point in P:

D_i = √((P(x_i) − C(x_i))² + (P(y_i) − C(y_i))²) [2]

Then D is a 1-D saccade waveform that parsimoniously represents the saccade polygon while maintaining geometric information. To ensure comparability across polygons, we interpolate each D to I = 600 points. The fact that the saccade waveform can be thought of as a time series representation of a saccade pattern informs our measure of similarity. Importantly, we wish our measure to be invariant to rotation and reflection, such that two polygons rotated by 90 degrees from one another, or two polygons which are direct mirror images of each other, are correctly determined to be the same. Therefore, rather than simply calculating the Euclidean distance between two saccade waveforms, we instead calculated the Euclidean distances between one saccade waveform and a series of circularly shifted versions of the second saccade waveform. A circular shift is a mathematical operation where a vector is rearranged such that the last element is moved to the first position and all other elements are shifted forward by one.
By performing this operation l times, it is possible to shift the last l values to the front of the vector and all other values forward by l positions. Accordingly, we create a set of circularly shifted saccade waveforms that represent rotations of the original saccade sequence by different angles, as well as rotations of the mirror image of the original saccade sequence. The mirror image of the saccade sequence can be represented by flipping the saccade waveform from left to right (see Supplementary Figure 4). We write the angle of rotation as α = 360(l/I). For each pair of saccade sequences, a two-step approach is used to quantify their dissimilarity. First, for each pair we calculated the Euclidean distances between one saccade waveform and the circular shifts of a second saccade waveform (including its mirror image representation) in intervals of α = 6 degrees, such that l ≈ 10. From this set of calculations we identify the circular shift which resulted in the smallest Euclidean distance and denote its index as i_min. Next, we repeated the above process but now performed shifts of size l = 1, such that α ≈ 0.6 degrees. These fine-grained secondary calculations included only circular shifts between i_min − 10 and i_min + 10. The dissimilarity factor (DF) between the two saccades was taken to be the minimum of the calculated distances in the second step; saccades with large distances between them are more dissimilar.

The Average Similarity Factor.
To summarize the saccade pattern similarity between two adjacent trials within a task session, we defined the similarity factor (SF). For each session s containing T_s trials, the saccade dissimilarity was calculated between saccade patterns from pairs of consecutive trials as described in the previous section. Each value was then converted into a measure of similarity as follows: SF = 1 − DF/DF_max, where DF_max is the maximum dissimilarity factor observed out of all trials from both monkeys. For each session, the average similarity factor was then given by the average of all similarity factors calculated for that session. See Supplementary Figure 5a for the average similarity factor as a function of session for both monkeys.
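The dissimilarity and similarity factors can be sketched as follows. This Python sketch (the authors' implementation is MATLAB) uses an exhaustive single-step shift search rather than the two-step coarse-to-fine search described above, and the toy waveform is hypothetical.

```python
import numpy as np

def dissimilarity_factor(w1, w2):
    """Rotation- and reflection-invariant distance between two saccade
    waveforms: the minimum Euclidean distance over all circular shifts of
    the second waveform and of its left-right flipped (mirror) version."""
    best = np.inf
    for cand in (w2, w2[::-1]):  # original and mirror image
        for l in range(len(cand)):
            best = min(best, float(np.linalg.norm(w1 - np.roll(cand, l))))
    return best

def similarity_factor(df, df_max):
    """SF = 1 - DF / DF_max, as defined above."""
    return 1.0 - df / df_max

# A rotated copy of the same pattern should be judged identical (DF ~ 0).
t = np.linspace(0, 2 * np.pi, 600, endpoint=False)
w = 1.0 + 0.3 * np.sin(3 * t)                 # toy saccade waveform
df = dissimilarity_factor(w, np.roll(w, 50))  # essentially zero
```

Because a circular shift of the waveform corresponds to a rotation of the underlying polygon, the minimum over shifts makes the comparison rotation-invariant, and including the flipped waveform makes it reflection-invariant.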
The Complexity Factor.
To characterize the complexity of each saccade pattern, we defined the complexity factor. We operationalized the notion of complexity as the fractal dimension (108, 109), which has proven useful in the study of many other biological (110, 111) and network systems (112). For each trial representative saccade pattern, we first constructed a binary image of its polygon representation, where the background was black and the saccade pattern outline was white. We then applied the box-counting method to this image to compute the fractal dimension (113). Due to the nature of this method, two identical patterns rotated by 90 degrees from one another would result in different fractal dimension values. Therefore, the final fractal dimension value for each trial pattern was taken to be the minimum calculated from the set of all possible rotations of the pattern and its mirror image at intervals of 90 degrees. For each session, s, containing T_s trials, the average complexity factor was given by the average fractal dimension of all trial representative saccade patterns in that session. See Supplementary Figure 5b for the average complexity factor as a function of session for both monkeys.
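The box-counting estimate of fractal dimension can be sketched as below. This is a generic dyadic-scale implementation (assuming a square, power-of-two-sized, non-empty binary image), not the authors' exact code, and it omits the minimization over 90-degree rotations and the mirror image described above.

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal dimension of a binary image by counting the
    occupied boxes at dyadic box sizes and fitting log(count) against
    log(1/size)."""
    n = img.shape[0]  # assumes a square image with power-of-two side length
    sizes, counts = [], []
    k = n
    while k >= 2:
        blocks = img.reshape(n // k, k, n // k, k)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))  # occupied k x k boxes
        sizes.append(k)
        k //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# Sanity checks: a straight line has dimension ~1, a filled square ~2.
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
filled = np.ones((64, 64), dtype=bool)
```

A sparse, structured saccade polygon fills few boxes at fine scales (dimension near 1), while a disordered pattern spanning many targets fills more (dimension approaching 2), matching the entropy interpretation in the Discussion.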
The Cluster Label Entropy.
To quantify the extent to which the monkey selects saccade patterns from trial to trial in an ordered fashion, we defined the cluster label entropy metric. More precisely, our goal was to determine whether the monkey was randomly performing various patterns or selectively repeating only a few unique ones. We began by using the MATLAB function linkage() to perform agglomerative clustering on the saccade waveforms from all the trials of an individual monkey (114). The algorithm outputs a hierarchical, binary cluster tree, also known as a dendrogram, based on an input of a T × T distance matrix, where the ij-th element gives the dissimilarity factor DF between the saccade pattern of trial i and the saccade pattern of trial j. The height of a link between two objects in the dendrogram directly denotes the distance between those two objects in the data. It is important to note that the dendrogram itself does not indicate the optimal number of clusters that the data should be split into; rather, it demonstrates the order in which objects should be clustered. However, there is a way to identify the natural divisions of the data into distinct clusters using the derived cluster tree. The inconsistency coefficient metric is used to compare the height of a link in a cluster tree to the heights of all the other links underneath it in the tree. If the difference is dramatic, it signifies that the newly formed group consists of linking two highly distinct objects. Imposing a threshold on the inconsistency coefficient during clustering captures more natural divisions in the data rather than arbitrarily setting a maximum number of possible clusters. Accordingly, using the MATLAB function cluster() we constructed clusters from the hierarchical cluster tree using a range of inconsistency coefficient values (0.1-1.5 in intervals of 0.05) as a threshold criterion, and then calculated the average within-cluster sum-of-squares.
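SciPy's hierarchy module mirrors the MATLAB linkage()/cluster() pair used here. The sketch below illustrates the same inconsistency-thresholded clustering on toy data; the toy distances, the 'average' linkage method, and the direct reuse of the 0.95 threshold are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy T x T dissimilarity matrix: ten "trials" falling into two tight groups.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.05, (5, 2)), rng.normal(3.0, 0.05, (5, 2))])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

Z = linkage(squareform(D), method="average")            # hierarchical cluster tree
labels = fcluster(Z, t=0.95, criterion="inconsistent")  # cut by inconsistency coefficient
```

Because the final merge joining the two groups is far taller than the links beneath it, its inconsistency coefficient exceeds the threshold and the groups are assigned to separate clusters.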
Using the elbow method and the calculated within-cluster sum-of-squares, we selected an inconsistency coefficient of 0.95 to be the optimal threshold criterion for clustering. This choice resulted in a total of 136 clusters being identified for Monkey G and 346 for Monkey Y. See Supplementary Figures 2 and 3 for a detailed listing of cluster patterns of both monkeys.

After coarse-graining the data by identifying saccade clusters across trials and sessions, we next turned to assessing whether the monkey transitioned between saccades of the same cluster or of different clusters, and to what degree. We began by identifying the representative saccade sequence of each discovered cluster by finding the trial representative saccade pattern that had the minimum sum of the dissimilarity factor when compared to all other within-cluster patterns. For each session, we then calculated the cluster label entropy as Shannon's information entropy of a given session's vector of trial cluster labels. See Supplementary Figure 5c for the saccade cluster entropy as a function of session for both monkeys.
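The clustering-and-entropy pipeline above can be sketched in Python with scipy (the paper uses MATLAB's linkage() and cluster(); the 'average' linkage method and base-2 entropy here are our assumptions for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_label_entropy(distance_matrix, inconsistency_threshold=0.95):
    """Cluster trials from a pairwise dissimilarity matrix and return
    per-trial cluster labels plus the Shannon entropy (bits) of the labels."""
    # condense the symmetric T x T matrix for scipy's linkage
    condensed = squareform(distance_matrix, checks=False)
    tree = linkage(condensed, method='average')  # agglomerative cluster tree
    # cut the tree wherever the inconsistency coefficient exceeds the threshold
    labels = fcluster(tree, t=inconsistency_threshold,
                      criterion='inconsistent')
    # Shannon entropy of the session's vector of cluster labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-np.sum(p * np.log2(p)))
    return labels, entropy
```

A session that repeats one pattern yields entropy near zero; a session that samples many clusters uniformly yields high entropy.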
Channel Firing Rates.
Information about neuronal activity was not available for all channels during each session; some channels, which we refer to as non-viable channels, showed no activity across all task trials. All non-viable channels were discarded prior to data analysis. MG contained a total of 59 viable channels and MY contained a total of 64 viable channels. On average, a given free-scanning task session contained 23 active channels for MG and 11 active channels for MY. While biologically expected, this variation in channel availability across sessions can adversely affect estimates of effective connectivity (see next section). For example, if we were to compute the effective connectivity from the activity of all channels across all trials at the same time, we could obtain spurious results due to the incomplete sampling across trials and time.

To mitigate potential biases due to variable channel availability, we estimate the effective connectivity in each session separately. Every session contains N_CH channels from which a signal is available. The signal from each channel is in the form of a spike train, or a vector of ones signifying neuronal activation, and the time at which each activation occurred. All spikes from the full duration of a trial were used when inferring effective connectivity. For analyses concerned with identifying the relationship between behavior and control energy, we focused solely on spikes that occurred after the task grid was presented to the monkeys (t_targets-on) and before the task grid disappeared signifying success (t_targets-off). This window of time is referred to as the scanning period (t_sp).

The firing rate r of a single channel is given by the number of spikes per second. Calculating the firing rate of all channels during an individual trial then results in a 1 × N_CH vector that represents the activation state of the channel network during that trial. We will refer to this vector as the neural state x_t:

x_t = [r_1, r_2, ..., r_{N_CH}], where t = 1, ..., T_s. [3]

State vectors x_t are calculated for each trial, across all sessions, and for each monkey.

Inferring Effective Connectivity.
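The estimation described in this section uses the calc_te() function from the RTransferEntropy package. Purely as an illustrative stand-in for that call, here is a minimal plug-in transfer-entropy estimator in Python (history length 1 and rank-based equiprobable binning are simplifying assumptions, not the package's settings):

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=3):
    """Plug-in estimate of transfer entropy TE(source -> target) in bits,
    with history length 1: I(y_{t+1}; x_t | y_t).

    Using joint entropies,
    TE = H(y_{t+1}, y_t) + H(y_t, x_t) - H(y_t) - H(y_{t+1}, y_t, x_t).
    """
    def discretize(v):
        # rank-based binning into roughly equiprobable symbols
        ranks = np.argsort(np.argsort(np.asarray(v, dtype=float)))
        return (ranks * bins) // len(v)

    def entropy(*series):
        # Shannon entropy of the empirical joint distribution
        joint = Counter(zip(*series))
        p = np.array(list(joint.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    x, y = discretize(source), discretize(target)
    y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
    return (entropy(y_next, y_now) + entropy(y_now, x_now)
            - entropy(y_now) - entropy(y_next, y_now, x_now))
```

A source that deterministically drives the target yields TE near log2(bins); independent signals yield TE near zero (up to a small positive plug-in bias).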
Effective connectivity provides information that differs from both functional connectivity and structural connectivity (26). For nearly three decades, effective connectivity has been "understood as the experiment and time-dependent, simplest possible circuit diagram that would replicate the observed timing relationships between the recorded neurons" (115). The pattern of effective connectivity among many units can be usefully represented as a network, composed of nodes (neural units) and edges (effective connections) derived from node activity. Here, we construct such a network for the set of electrode channels used to record neuronal activity during trials of the free-scanning task. We chose transfer entropy as the method to estimate effective connectivity (35), although we acknowledge that other methods exist and could similarly prove useful in the study of habit learning. For each session, we computed the transfer entropy between all pairs of viable channels from the set of z-scored state vectors x_t of size 1 × N_CH to obtain an N_CH × N_CH directed, unsymmetrical effective connectivity matrix whose diagonal elements are set to zero. All calculations of transfer entropy were performed using the calc_te() function from the RTransferEntropy package in R.

For completeness, we gather all session effective connectivity matrices into the 3-D matrix M whose element M_{i,j,k} represents the effective connectivity between channels j and k, derived from the i-th session. From the individual session effective connectivity matrices, we calculated the overall effective connectivity matrix, M, whose element M_{i,j} is given by the average of the set of the non-zero connection strengths between channels i and j derived from only the sessions in which both channels were available. Accordingly, M is a square, directed, and unsymmetrical matrix of size N_TC × N_TC, where N_TC is the total number of available channels across all sessions for an individual monkey.

Network Control Theory.
To build an intuition for how we use network control theory to probe relations between neural circuit activity and behavior, we begin with a few preliminaries. We consider a nonlinear dynamical system and linearize those dynamics about the system's current operating point (22). The dynamics of the resultant linear time-invariant (LTI) system can be written as

ẋ(t) = A x(t) + B u(t), [4]

where N is the number of nodes, A is the N × N adjacency matrix, x(t) = [x_1(t), x_2(t), ..., x_N(t)] is the state of all network nodes at time t, and u(t) = [u_1(t), u_2(t), ..., u_K(t)] gives the external control input for the K driver nodes, which receive external input in order to drive the state change of the network. In this work, all network nodes are set to be driver nodes (K = N) and as such B is the N × N identity matrix.

Average Minimum Control Energy.
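The closed-form minimum-energy expression used in this section can be sketched in a few lines of Python (B = I and a time horizon of 1, as in the analysis; composite trapezoidal quadrature for the Gramian is our implementation choice):

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, x0, xf, T=1.0, n_steps=1001):
    """Minimum-energy control of x'(t) = A x(t) + B u(t) with B = I.

    Implements E_min = (e^{AT} x0 - xf)^T W_c(T)^{-1} (e^{AT} x0 - xf),
    where the controllability Gramian
        W_c(T) = integral_0^T e^{At} B B^T e^{A^T t} dt
    is approximated by composite trapezoidal quadrature.
    """
    N = A.shape[0]
    ts = np.linspace(0.0, T, n_steps)
    dt = ts[1] - ts[0]
    Wc = np.zeros((N, N))
    for i, t in enumerate(ts):
        Mt = expm(A * t)  # e^{At}; with B = I the integrand is e^{At} e^{A^T t}
        w = 0.5 if i in (0, n_steps - 1) else 1.0
        Wc += w * (Mt @ Mt.T) * dt
    # difference between the target and the state reached by free evolution
    err = expm(A * T) @ x0 - xf
    return float(err @ np.linalg.solve(Wc, err))
```

For A = 0 the Gramian reduces to T times the identity, so driving the state a Euclidean distance d in unit time costs energy d squared; a nonzero A reshapes that cost along the network's dynamics.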
The minimum control energy is defined to be the minimum amount of energy that a controller requires to drive an LTI system from some initial state to a target final state in a specified amount of time (22). There exists an input vector u(t) that can move the defined network from its initial state, x_o, to a state x_f in time t_f with the minimum energy expenditure,

E_min(t_f) = ∫_0^{t_f} ||u(τ)||^2 dτ.

Practically, we can calculate the minimum control energy to reach the target network state (x_f) from an initial state (x_o) as

E_min(t_f) = (e^{A t_f} x_o − x_f)^T W_c^{−1}(t_f) (e^{A t_f} x_o − x_f), [5]

where t_f is the time horizon and W_c(t_f) is the controllability Gramian of the system,

W_c(t_f) = ∫_0^{t_f} e^{A τ} B B^T e^{A^T τ} dτ. [6]

The time horizon is set to a value of 1 for all calculations in this analysis.

Here we use this framework to compute the average minimum control energy required to move the neural circuit from firing rate state x_t to firing rate state x_{t+1}, given the effective connectivity A_s = M_{i,j} for all channels in that session. Note that x_t and x_{t+1} are the firing rate states of two consecutive trials within a given session. Thus, we obtain an E_min value for every consecutive trial pair. The average minimum control energy (ACE) metric is then the average of E_min across all state transitions in that session.

Control Energy & Saccade Characteristics.
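The correlation-with-permutation-null procedure used in this section can be sketched generically as follows (variable names are placeholders; the covariate regressed out in the paper is the session channel count):

```python
import numpy as np

def residualize(v, covariate):
    """Regress `covariate` (plus an intercept) out of `v`; return residuals."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

def permutation_corr_test(energy, behavior, covariate, n_perm=1000, seed=0):
    """Pearson correlation between residualized energy and behavior, with a
    one-tailed p-value from permuting the energy vector."""
    rng = np.random.default_rng(seed)
    e = residualize(np.asarray(energy, float), np.asarray(covariate, float))
    b = residualize(np.asarray(behavior, float), np.asarray(covariate, float))
    r_obs = np.corrcoef(e, b)[0, 1]
    # null distribution: correlations after shuffling the energy values
    null = np.array([np.corrcoef(rng.permutation(e), b)[0, 1]
                     for _ in range(n_perm)])
    # one-tailed test in the direction of the observed correlation
    p = float(np.mean(null >= r_obs) if r_obs >= 0 else np.mean(null <= r_obs))
    return float(r_obs), p
```

Because the permutation breaks only the energy-behavior pairing, a small p indicates that the observed correlation is specific to the ordered control-energy values rather than to their marginal distribution.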
We calculated Pearson correlation coefficients between the three saccade characteristic metrics (similarity factor, complexity factor, cluster label entropy) and the average minimum control energy. The number of channels (variable across sessions) was regressed out of all variables included in correlations. To ensure that all correlations were specific to the control energy dynamics derived from the inferred overall effective connectivity matrix, M, we permuted the vector of average control energy values (1000 times) and recalculated all Pearson correlations per permutation. A one-tailed test was then used to determine the significance of the Pearson correlation coefficients calculated with the original average control energy dynamics against their respective null distributions, at a significance level of α = 0.05.

Network Region Lesion Analysis.
For both monkeys, electrode channels recorded bilaterally from regions of the caudate nucleus, frontal eye fields, and prefrontal cortex. In order to examine the extent to which specific nodes or edges contribute to the inferred overall effective connectivity matrix, we performed a virtual lesion analysis. Here a lesion is operationalized by setting specific elements M_{i,j} of the effective connectivity matrix to zero, thereby effectively eliminating the connection between the i-th and j-th nodes. Each monkey had a unique set of regions, R = {R_1, R_2, ..., R_N}, from which the N_TC channels were recording, such that each channel was assigned only one region across the entire duration of the task.

For each monkey, we performed two types of lesions. First, we lesioned all the edges between any two regions, R_i and R_j; we refer to this method as the inter-region edge knockout. Second, we lesioned all edges belonging to the same region; we refer to this method as the intra-region edge knockout. Each lesion results in a knockout effective connectivity matrix which we denote as M_KO. Therefore, if performing an inter-region edge knockout between the caudate nucleus (R_i) and Brodmann Area 8 (R_j), then M_KO would be the same as the original effective connectivity matrix, M, except that all elements that represent connections between R_i and R_j are set to zero. In the same way, if performing an intra-region edge knockout lesion of the caudate nucleus, then M_KO would be the same as M but have all elements that represent connections of caudate nucleus nodes to other caudate nucleus nodes set to zero.

To determine the relevance of a connection for an energy-behavior correlation, we used two criteria. The first criterion was that the lesion resulted in a correlation value that was not significantly different (p > 0.05) from that obtained using the original permutation null model (random permutations of the original average minimum control energy vector values). The obtained p-value is referred to as p_general (see Supplementary Figure 7b and 7d). To assess this criterion, we calculate the knockout ACE metric, ACE_KO, for each lesion and use it to recompute the Pearson correlation values between ACE_KO and the average saccade metrics. We test the significance of each knockout correlation value using a one-tailed test against the null distribution derived from calculations involving the original permutation null model. A knockout correlation that fails to prove significant in the above one-tailed test signifies that the inter- (or intra-) region edges knocked out are important to the inferred effective connectivity matrix and its relationship to task behavior.

The second criterion for determining the relevance of a connection for an energy-behavior correlation was that the lesion-induced disruption of the observed correlation was specific to the lesion chosen, and not expected by lesioning the same number of randomly chosen edges. To assess this criterion, every knockout correlation value is compared to the original correlation value by calculating the absolute value of the difference between them, resulting in a correlation difference metric for each lesion. To ensure that the difference in correlations is truly related to the knocking out of the lesion-specific edges, the results are tested using the following null model. We state the null hypothesis that for a given lesion consisting of knocking out n specific edges, E_KO = {e_1, e_2, ..., e_n}, the resulting correlation difference value is no different than the correlation difference value derived from knocking out a set of n randomly selected edges, E_null = {e_i | e_i ∉ E_KO}. Therefore, for every lesion a null distribution of 1000 correlation values was created by randomly knocking out the same number of edges as the original lesion, with no one edge being the same as any knocked out in the original lesion. All values were tested against their respective null distributions using a one-tailed test with a significance level of α = 0.05. The obtained p-value is referred to as p_lesion (see Supplementary Figure 7c and 7e). Gray boxes shown in Figure 6B-C indicate edges which, when lesioned, did not result in a significant change in behavior-energy correlation when compared to the respective null distribution derived from randomly lesioning edges.

Due to the small magnitude of the changes in behavior-energy correlations caused by inter-/intra-region virtual lesioning, we next sought to quantify the number of edge lesions required to completely disrupt an observed behavior-energy correlation. Accordingly, for each monkey the resilience of each behavior-control correlation was tested by performing increasingly large lesions and re-calculating the correlation values. The analysis started with 1-edge lesions and ended with lesions involving up to 95% of all available edges. Each lesion was performed 100 times with random edges, and the average changes in behavior-energy correlation are shown in Supplementary Figure 8 for Monkey G and Monkey Y.
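The knockout construction described above amounts to zeroing blocks of the effective connectivity matrix; a minimal sketch (illustrative helpers, with region labels as placeholders):

```python
import numpy as np

def inter_region_knockout(M, regions, r1, r2):
    """Zero all effective connections between channels of regions r1 and r2
    (both directions), returning the knockout matrix M_KO."""
    M_ko = M.copy()
    idx1 = np.flatnonzero(regions == r1)
    idx2 = np.flatnonzero(regions == r2)
    M_ko[np.ix_(idx1, idx2)] = 0.0
    M_ko[np.ix_(idx2, idx1)] = 0.0
    return M_ko

def intra_region_knockout(M, regions, r):
    """Zero all effective connections among channels within region r."""
    M_ko = M.copy()
    idx = np.flatnonzero(regions == r)
    M_ko[np.ix_(idx, idx)] = 0.0
    return M_ko
```

Recomputing the average minimum control energy with the knockout matrix in place of M, and re-testing the behavior-energy correlation, then yields the p_general and p_lesion criteria above.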
Acknowledgments
We thank Jason Z. Kim, Eli J. Cornblath, Pragya Srivastava, Christopher W. Lynn, Harang Ju, Sophia David, Jennifer A. Stiso, William Qian, and David M. Lydon-Staley for helpful comments on earlier versions of this manuscript. The work was primarily supported by an ARO MURI awarded to Bassett & Graybiel (Grafton-W911NF-16-1-0474). The work was further supported by the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the ISI Foundation, the Paul Allen Foundation, the Army Research Laboratory (W911NF-10-2-0022), the National Institute of Mental Health (2-R01-DC-009209-11, R01-MH112847, R01-MH107235, R21-MH-106799), the National Institute of Child Health and Human Development (1R01HD086888-01), the National Institute of Neurological Disorders and Stroke (R01 NS099348), and the National Science Foundation (NSF PHY-1554488, BCS-1631550, and IIS-1926757). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.
Citation Diversity Statement
Recent work in several fields of science has identified a bias in citation practices such that papers from women and other minorities are under-cited relative to the number of such papers in the field (116–120). Here we sought to proactively consider choosing references that reflect the diversity of the field in thought, form of contribution, gender, and other factors. We obtained the predicted gender of the first and last author of each reference by using databases that store the probability of a name being carried by a woman (120, 121). By this measure (and excluding self-citations to the first and last authors of our current paper), our references contain 10.6% woman(first)/woman(last), 5.8% man/woman, 19.2% woman/man, 57.7% man/man, and 6.73% unknown categorization. This method is limited in that (a) names, pronouns, and social media profiles used to construct the databases may not, in every case, be indicative of gender identity, and (b) it cannot account for intersex, non-binary, or transgender people. We look forward to future work that could help us to better understand how to support equitable practices in science.
References.
1. Vincent D. Costa, Andrew R. Mitz, and Bruno B. Averbeck. Subcortical substrates of explore-exploit decisions in primates.
Neuron , 103(3):533–545.e5, 2019.2. Becket R. Ebitz, Eddy Albarran, and Tirin Moore. Exploration disrupts choice-predictivesignals and alters dynamics in prefrontal cortex.
Neuron , 97:450–461.e9, 2018.3. Theresa M. Desrochers, Dezhe Z. Jin, Noah D. Goodman, and Ann M Graybiel. Optimalhabits can develop spontaneously through sensitivity to local cost.
Proceedings of the Na-tional Academy of Sciences , 107(47):20512–20517, 2010.4. Leslie P. Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: Asurvey.
Journal of Artificial Intelligence Research. , 4:237, 1996.5. Samuel J. Gershman. Deconstructing the human algorithms for exploration.
Cognition , 173:34–42, 2018.6. Theresa M. Desrochers, Ken-ichi Amemori, and Ann M. Graybiel. Habit learning by naivemacaques is marked by response sharpening of striatal neurons representing the cost andoutcome of acquired action sequences.
Neuron , 87(4):853–868, 2015.7. Melanie Mitchell.
Complexity: A Guided Tour 1st Edition . Oxford University Press, 2011.8. Monica D Rosenberg, Dustin Scheinost, Abigail S Greene, Emily W Avery, Young H Kwon,Emily S Finn, Ramachandran R Ramani, Maolin Qiu, R Todd Constable, and Marvin MChun. Functional connectivity predicts changes in attention observed across minutes, days,and months.
Proc Natl Acad Sci U S A , 117(7):3797–3807, 2020.9. Aron K. Barbey. Network neuroscience theory of human intelligence.
Trends in CognitiveSciences , 22(1):8–20, 2018.10. Manesh Girn, Caitlin Mills, and Kalina Christoff. Linking brain network reconfiguration andintelligence: Are we there yet?
Trends in Neuroscience and Education , 15:62–70, 2019.11. Danielle S. Bassett and Marcelo G. Mattar. A network neuroscience of human learning:Potential to inform quantitative theories of brain and behavior.
Trends in Cognitive Sciences ,21(4):250–264, 2017.12. Danielle S. Bassett, Perry Zurn, and Joshua I. Gold. On the nature and use of models innetwork neuroscience.
Nature Reviews Neuroscience , 19(9):566–578, 2018.13. Chang-Hao Kao, Ankit N. Khambhati, Danielle S. Bassett, Matthew R. Nassar, Joseph T.McGuire, Joshua I. Gold, and Joseph W. Kable. Functional brain network reconfigurationduring learning in a dynamic environment.
Nature Communications , 800284, 2019.14. Courtney L. Gallen and Mark D’Esposito. Brain modularity: A biomarker of intervention-related plasticity.
Trends in Cognitive Science , 23(4):293–304, 2019.15. Raphael T. Gerraty, Juliet Y. Davidow, Karin Foerde, Adriana Galvan, Danielle S. Bassett,and Daphna Shohamy. Dynamic flexibility in striatal-cortical circuits supports reinforcementlearning.
Journal of Neuroscience , 38(10):2442–2453, 2018.16. Danielle S. Bassett, Muzhi Yang, Nicholas F. Wymbs, and Scott T. Grafton. Learning-inducedautonomy of sensorimotor systems.
Nature Neuroscience , 18(5):744–751, 2015.17. Maxwell A. Bertolero and Danielle S. Bassett. On the nature of explanations offered bynetwork science: A perspective from and for practicing neuroscientists.
Topics , Epub Aheadof Print, 2019.18. Adilson E. Motter. Networkcontrology.
Chaos , 25(9):097621, 2015.19. Fabio Pasqualetti, Sandro Zampieri, and Francesco Bullo. Controllability metrics, limitationsand algorithms for complex networks.
IEEE Transactions on Control of Network Systems , 1(1):40–52, 2014.20. Yang Y. Liu, Jean J. Slotine, and Albert L. Barabási. Controllability of complex networks.
Nature , 473(7346):167–173, 2011.21. Jason Z. Kim and Danielle S. Bassett. Linear dynamics and control of brain networks. InBin He, editor,
Neuroengineering . Springer, 2020.22. Thomas Kailath.
Linear Systems. Prentice-Hall, 1980. 23. Jennifer Stiso, Ankit N. Khambhati, Tommaso Menara, Ari E. Kahn, Joel M. Stein, Sandhitsu R. Das, Richard Gorniak, Joseph Tracy, Brian Litt, Kathryn A. Davis, Fabio Pasqualetti, Timothy H. Lucas, and Danielle S. Bassett. White matter network architecture guides direct electrical stimulation through optimal state transitions. Cell Reports, 28(10):2554–2566.e7, 2019. 24. Eli J. Cornblath, Evelyn Tang, Graham L. Baum, Tyler M. Moore, Azeez Adebimpe, David R. Roalf, Ruben C. Gur, Raquel E. Gur, Fabio Pasqualetti, Theodore D. Satterthwaite, and Danielle S. Bassett. Sex differences in network controllability as a predictor of executive function in youth.
Neuroimage , 188:122–134, 2019.25. Gang Yan, Petra E. Vértes, Emma K. Towlson, Yee L. Chew, Denise S. Walker, William R.Schafer, and Albert L Barabási. Network control principles predict neuron function in theCaenorhabditis elegans connectome.
Nature , 550(7677):519–523, 2017.26. Karl J. Friston. Functional and effective connectivity: A review.
Brain Connectivity , 1(1):13–36, 2011.27. Danai Dima, Karl J. Friston, Klass E. Stephan, and Sophia Frangou. Neuroticism and con-scientiousness respectively constrain and facilitate short-term plasticity within the workingmemory neural network.
Human Brain Mapping , 36(10):4158–4163, 2015.28. Christian Büchel, J T Coull, and Karl J. Friston. The predictive value of changes in effectiveconnectivity for human learning.
Science , 283(5407):1538–1541, 1999.29. Urs Braun, Anais Harneit, Giulio Pergola, Tommaso Menara, Axel Schaefer, Richard F. Bet-zel, Zhenxiang Zang, Janina I. Schweiger, Kristina Schwarz, Junfang Chen, Giuseppe Blasi,Alessandro Bertolino, Daniel Durstewitz, Fabio Pasqualetti, Emanuel Schwarz, AndreasMeyer-Lindenberg, Danielle S. Bassett, and Heike Tost. Brain state stability during work-ing memory is explained by network control theory, modulated by dopamine D1/D2 receptorfunction, and diminished in schizophrenia. arXiv , 1906:09290, 2020.30. Christina Savin and Gasper Tkaˇcik. Maximum entropy models as a tool for building preciseneural controls.
Current Opinion in Neurobiology , 46:120–126, 2017.31. Einat Granot-Atedgi, Gasper Tkaˇcik, R. Segev, and Elad Schneidman. Stimulus-dependentmaximum entropy models of neural population codes.
PLoS Computational Biology , 9(3):e1002922, 2013.32. Leenoy Meshulam, J L Gauthier, Carlos D. Brody, David W. Tank, and William Bialek. Col-lective behavior of place and non-place neurons in the hippocampal network.
Neuron , 96(5):1178–1191.e4, 2017.33. Karl Friston, James Kilner, and Lee Harrison. A free energy principle for the brain.
Journalof Physiology - Paris , 100:70–87, 2006.34. Pedro A. Ortega and Daniel A. Braun. Thermodynamics as a theory of decision-making withinformation-processing costs.
Proceedings of the Royal Society A , 469(2012):0683, 2013.35. Raul Vicente, Michael Wibral, Michael Lindner, and Gordon Pipa. Transfer entropy—amodel-free measure of effective connectivity for the neurosciences.
Journal of Computa-tional Neuroscience , 30(1):45–67, 2011.36. Jacqueline Gottlieb and Pierre Y. Oudeyer. Towards a neuroscience of active sampling andcuriosity.
Nature Reviews Neuroscience , 19(12):758–770, 2018.37. Kyle S. Smith and Ann M. Graybiel. A dual operator view of habitual behavior reflectingcortical and striatal dynamics.
Neuron , 79(2):361–374, 2013.38. Edmond Teng, Lisa Stefanacci, Larry R. Squire, and Stuart M. Zola. Contrasting effects ondiscrimination learning after hippocampal lesions and conjoint hippocampal-caudate lesionsin monkeys.
The Journal of Neuroscience , 20(10):3853–3863, 2000.39. Cathy J. Price, Maria LuisaGorno-Tempini, Kim S. Graham, Nora Biggio, Andrea Mechelli,Karalyn Patterson, and Uta Noppeney. Normal and pathological reading: converging datafrom lesion and imaging studies.
Neuroimage , 20(Supplement 1):S30–S41, 2003.40. Alicia Izquierdo and Elisabeth A. Murray. Combined unilateral lesions of the amygdala andorbital prefrontal cortex impair affective processing in rhesus monkeys.
Journal of Neuro-physiology , 94(5):2023–2039, 2004.41. Terrell A. Jenrette, Jordan B. Logue, and Kristen A. Horner. Lesions of the patch com-partment of dorsolateral striatum disrupt stimulus-response learning.
Neuroscience , 415:161–172, 2019.42. Lina Yassin, Brett L. Benedetti, Jean-Sébastien Jouhanneau, Jing A. Wen, James F.A.Poulet, and Alison L. Barth. An embedded subnetwork of highly active neurons in the neo-cortex.
Neuron , 68(6):1043–1050, 2010.43. Marianna Yanike and Vincent P. Ferrera. Representation of outcome risk and action in theanterior caudate nucleus.
Journal of Neuroscience , 34(9):3279–3290, 2014.44. Hyoung F. Kim, Ali Ghazizadeh, and Okihide Hikosaka. Dopamine neurons encoding long-term memory of object value for habitual behavior.
Cell , 163(5):1165–1175, 2015.45. Clarissa J. Whitmire and Garrett B. Stanley. Rapid sensory adaptation redux: A circuitperspective.
Neuron , 92(2):298–315, 2016.46. Mona Garvert, Karl J. Friston, Raymond J. Dolan, and Marta I. Garrido. Subcortical amyg-dala pathways enable rapid face processing.
Neuroimage , 102(2):309–316, 2014.47. Hiroshi Makino, Eun J. Hwang, Nathan G. Hedrick, and Takaki Komiyama. Circuit mecha-nisms of sensorimotor learning.
Neuron , 92(4):705–721, 2016.48. Julia Cox and Ilana B. Witten. Striatal circuits for reward learning and decision-making.
Nature Reviews Neuroscience , 20(8):482–494, 2019.49. Elizabeth Litvina, Amy Adams, Alison Barth, Marcel Bruchez, James Carson, Jason E.Chung, Kristin B. Dupre, Loren M Frank, Kathleen M. Gates, Kristen M. Harris, Hannah Joo,Jeff William Lichtman, Khara M. Ramos, Terrence Sejnowski, James S. Trimmer, SamanthaWhite, and Walter Koroshetz. BRAIN Initiative: Cutting-edge tools and resources for thecommunity.
Journal of Neuroscience , 39(42):8275–8284, 2019.50. Joseph Feingold, Theresa M. Desrochers, Naotaka Fujii, Ray Harlan, Patrick L. Tierney,Hideki Shimazu, Ken-Ichi Amemori, and Ann M. Graybiel. A system for recording neuralactivity chronically and simultaneously from multiple cortical and subcortical regions in non-human primates.
Journal of Neurophysiology , 107:1979–1995, 2012.51. Jason E Chung, Hannah R. Joo, Jiang L. Fan, Daniel F. Liu, Alex H. Barnett, Supin Chen,Charlotte Geaghan-Breiner, Mattias P. Karlsson, Magnus Karlsson, Kye Y. Lee, Hexin Liang,Jeremy F. Magland, Jeanine A. Pebbles, Angela C. Tooker, Leslie F. Greengard, Vanessa M.Tolosa, and Loren M. Frank. High-density, long-lasting, and multi-region electrophysiologicalrecordings using polymer electrode arrays.
Neuron , 101(1):21–31.e5, 2019.52. Guosong Hong and Charles M. Lieber. Novel electrode technologies for neural recordings.
Nature Reviews Neuroscience , 20(6):330–345, 2019.53. David Kleinfeld, Lan Luan, Partha P. Mitra, Jacob T. Robinson, Rahul Sarpeshkar, KennethShepard, Chong Xie, and Timothy D. Harris. Can one concurrently record electrical spikesfrom every neuron in a mammalian brain?
Neuron
Neural Computation, 14(11):2531–2560, 2002. 56. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE , 86(11):2278–2324, 1998.57. Geoffrey E. Hinton and Terrence J Sejnowski. Learning and relearning in Boltzmann ma-chines.
Parallel distributed processing: Explorations in the microstructure of cognition , 1:282–317, 1986.58. Priyamvada Rajasethupathy, Sethuraman Sankaran, James H. Marshel, Christina K. Kim,Emily Ferenczi, Soo Y. Lee, Andre Berndt, Charu Ramakrishnan, Anna Jaffe, Maisie Lo,Conor Liston, and Karl Deisseroth. Projections from neocortex mediate top-down control ofmemory retrieval.
Nature , 526(7575):653–659, 2015.59. Carino Curto, Anda Degeratu, and Vladimir Itskov. Flexible memory networks.
Bulletin ofMathematical Biology , 74(3):590–614, 2012.60. Kanaka Rajan, Christopher D. Harvey, and David W. Tank. Recurrent network models ofsequence generation and memory.
Neuron , 90(1):128–142, 2016.61. Scott T. Murdison, Guillaume Leclercq, Philippe Lefèvre, and Gunnar Blohm. Computationsunderlying the visuomotor transformation for smooth pursuit eye movements.
Journal ofNeurophysiology , 113(5):1377–1399, 2015.62. Nadav Kashtan and Uri Alon. Spontaneous evolution of modularity and network motifs.
Proceedings of the National Academy of Sciences , 102(39):13773–13778, 2005.63. Andrew J. Whalen, Sean N. Brennan, Timothy D. Sauer, and Steven J. Schiff. Observabil-ity and controllability of nonlinear networks: The role of symmetry.
Physical Review X , 5:011005, 2015.64. Brent Doiron, Ashok Litwin-Kumar, Robert Rosenbaum, Gabriel K Ocker, and Krešimir Josi´c.The mechanics of state-dependent neural correlations.
Nature Neuroscience , 19(3):383–393, 2016.65. Adam Kohn, Ruben Coen-Cagli, Ingmar Kanitscheider, and Alexandre Pouget. Correlationsand neuronal population information.
Annual Review of Neuroscience , 39:237–256, 2016.66. Karl J Friston. Models of brain function in neuroimaging.
Annual Review of Psychology , 56:57–87, 2005.67. Abdelhak Mahmoudi, Sylvain Takerkart, Fakhita Regragui, Driss Boussaoud, and AndreaBrovelli. Multivoxel pattern analysis for FMRI data: a review.
Computational and Mathemat-ical Methods in Medicine , 2012:961257, 2012.68. Gabriel K. Ocker, Krešimir Josi´c, Eric Shea-Brown, and Michael A. Buice. Linking structureand activity in nonlinear spiking networks.
PLoS Computational Biology , 13(6):e1005583,2017.69. Jannis Schuecker, Maximilian Schmidt, Sacha J. van Albada, Markus Diesmann, and MoritzHelias. Fundamental activity constraints lead to specific interpretations of the connectome.
PLoS Computational Biology , 13(2):e1005179, 2017.70. Takuya Ito, Luke Hearne, Ravi Mill, Carrisa Cocuzza, and Michael W. Cole. Discoveringthe computational relevance of brain network organization.
Trends in Cognitive Sciences ,S1364–6613(19):30240–2, 2019.71. Michael Breakspear. Dynamic models of large-scale brain activity.
Nature Neuroscience ,20(3):340–352, 2017.72. Huawei Fan, Yafeng Wang, Kai Yang, and Xingang Wang. Enhancing network synchroniz-ability by strengthening a single node.
Physical Review E , 99(4–1):042305, 2019.73. Francesco Sorrentino, Louis M. Pecora, Aaron M. Hagerstrom, Thomas E. Murphy, andRajarshi Roy. Complete characterization of the stability of cluster synchronization in complexdynamical networks.
Science Advances , 2(4):e1501737, 2016.74. Tommaso Menara, Giacomo Baggio, Danielle S. Bassett, and Fabio Pasqualetti. Stabilityconditions for cluster synchronization in networks of heterogeneous Kuramoto oscillators.
IEEE Transactions on Control of Network Systems , 7(1):302–314, 2020.75. Antonio Ortega, Pascal Frossard, Jelena Kovaˇcevi´c, José M. F. Moura, and Pierre Van-dergheynst. Graph signal processing: Overview, challenges, and applications.
Proceedingsof the IEEE , 106(5):808 – 828, 2018.76. Weiyu Huang, Thomas A. W. Bolton, John D. Medaglia, Danielle S. Bassett, AlejandroRibeiro, and Dimitri Van De Ville. A graph signal processing perspective on functional brainimaging.
Proceedings of the IEEE , 106(5):868–885, 2018.77. Sandip Roy, Terry F. McElwain, and Yan Wan. A network control theory approach to model-ing and optimal control of zoonoses: case study of brucellosis transmission in sub-SaharanAfrica.
PLoS Neglected Tropical Diseases , 5(10):e1259, 2011.78. Ernest Barany, Steve Schaffer, Kevin Wedeward, and Steven Ball. Nonlinear controllabilityof singularly perturbed models of power flow networks.
IEEE Conference on Decision andControl , 5:4826–4832, 2004.79. Michael X. Henderson, Eli J. Cornblath, Adam Darwich, Bin Zhang, Hannah Brown,Ronald J. Gathagan, Raizel M. Sandler, Danielle S. Bassett, John Q. Trojanowski, andVirginia M. Y. Lee. Spread of α -synuclein pathology through the brain connectome is modu-lated by selective vulnerability and predicted by network analysis. Nature Neuroscience , 22(8):1248–1257, 2019.80. Boris C. Bernhardt, Fatemeh Fadaie, Min Liu, Benoit Caldairou, Shi Gu, Elizabeth Jefferies,Jonathan Smallwood, Danielle S. Bassett, Andrea Bernasconi, and Neda Bernasconi. Tem-poral lobe epilepsy: Hippocampal pathology modulates connectome topology and controlla-bility.
Neurology , 92(19):e2209–e2220, 2019.81. Emma K. Towlson, Petra E. Vertes, Gang Yan, Yee L. Chew, Denise S. Walker, William R.Schafer, and Albert L. Barabasi. Caenorhabditis elegans and the network control framework-FAQs.
Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373(1758), 2018. 82. Steven Schiff.
Neural Control Engineering. MIT Press, 2011. 83. Jayson Jeganathan, Alistair Perry, Danielle S. Bassett, Gloria Roberts, Philip B. Mitchell, and Michael Breakspear. Fronto-limbic dysconnectivity leads to impaired brain network controllability in young people with bipolar disorder and those at high genetic risk.
NeuroimageClinical , 19:71–81, 2018.84. João P. Hespanha.
Linear systems theory . Princeton University Press, 2018.85. Vladimir G. Boltyanskii, Revaz V. Gamkrelidze, and Lev S. Pontryagin. The theory of optimalprocesses. I. The maximum principle, 1960.86. Siby J. Plathottam and Hossein Salehfar. Transient loss minimization in induction machinedrives using optimal control theory.
IEEE International Electric Machines & Drives Confer-ence , pages 1744–1780, 2015.87. Ali M. Sahlodin and Paul I. Barton. Efficient control discretization based on turnpike theoryfor dynamic optimization.
Processes , 5:85, 2017.88. Thomas J Boehme, Florian Held, Matthias Schultalbers, and Bernhard Lampe. Trip-basedenergy management for electric vehicles: An optimal control approach.
American ControlConference , 2013:5978–5983, 2013.89. Theodore L. Sourkes. On the energy cost of mental effort.
Journal of the History of theNeurosciences , 15(1):31–47, 2006.90. Adrian M. Heith and John W. Krakauer. The multiple effects of practice: skill, habit andreduced cognitive load.
Current Opinion in Behavioral Sciences , 20:196–201, 2018.91. Agnes Moors and Jan D. Houwer. Automaticity: A theoretical and conceptual analysis.
Psychological Bulletin , 132(2):297––326, 2006.92. Theresa M. Desrochers, Christopher H. Chatham, and David Badre. The necessity of ros-trolateral prefrontal cortex for higher-level sequential behavior.
Neuron , 87(6):1357–1368,2015.93. David Badre, Bradley B. Doll, Nicole M. Long, and Michael J. Frank. Rostrolateral prefrontalcortex and individual differences in uncertainty-driven exploration.
Neuron , 73(3):595–607,2012.94. William R. Schafer. The worm connectome: Back to the future.
Trends in Neurosciences ,41(11):763–765, 2018.95. Katharina Eichler, Feng Li, Ashok Litwin-Kumar, Youngser Park, Ingrid Andrade, Casey M.Schneider-Mizell, Timo Saumweber, Annina Huser, Claire Eschbach, Bertram Gerber,Richard D. Fetter, James W. Truman, Carey E. Priebe, L. F. Abbott, Andreas S. Thum, MartaZlatic, and Albert Cardona. The complete connectome of a learning and memory centre inan insect brain.
Nature , 548(7666):175–182, 2017.96. Elizabeth M. C. Hillman, Venkatakaushik Voleti, Wenze Li, and Hang Yu. Light-sheet mi-croscopy in neuroscience.
Annual Review of Neuroscience , 42:295–313, 2019.97. Anna L. Eberle, Olaf Selchow, Marlene Thaler, Dirk Zeidler, and Robert Kirmse. Mission(im)possible – mapping the brain becomes a reality.
Microscopy , 64(1):45–55, 2015.98. Daniel R. Berger, H. S. Seung, and Jeff W. Lichtman. VAST (Volume Annotation and Seg-mentation Tool): Efficient manual and semi-automatic labeling of large 3D image stacks.
Frontiers in Neural Circuits , 12:88, 2018.99. Gaia Tavoni, Simona Cocco, and Remi Monasson. Neural assemblies revealed by inferredconnectivity-based models of prefrontal cortex recordings.
Journal of Computational Neu-roscience , 41(3):269–293, 2016.100. Jonathan Schiefer, Alexander Niederbühl, Volker Pernice, Carolin Lennartz, Jürgen Hennig,Pierre LeVan, and Stefan Rotter. From correlation to causation: Estimating effective con-nectivity from zero-lag covariances of brain signals.
PLoS Computational Biology , 14(3):e1006056, 2018.101. Demian Battaglia, Annette Witt, Fred Wolf, and Theo Geisel. Dynamic effective connectivityof inter-areal brain circuits.
PLoS Computational Biology , 8(3):e1002438, 2012.102. Christopher J. Honey, Olaf Sporns, Leila Cammoun, Xavier Gigandet, Jean-Philippe Thiran,Reto Meuli, and Patric Hagmann. Predicting human resting-state functional connectivityfrom structural connectivity.
Proceedings of the National Academy of Sciences , 106(6):2035–2040, 2009.103. Alan Leshner and Donal W. Pfaff. Quantification of behavior.
Proceedings of the NationalAcademy of Sciences , 108(Supplement 3):15537–15541, 2011.104. Marie E. Bellet, Joachim Bellet, Hendrikje Nienborg, Ziad M. Hafed, and Phillip Berens.Human-level saccade detection performance using deep neural networks.
Journal of Neu-rophysiology , 121(2):646–661, 2019.105. Alexander Belyi, Iva Bojic, Stanislav Sobolevsky, Izabela Sitko, Bartosz Hawelka, LadaRudikova, Alexander Kurbatski, and Carlo Ratti. Global multi-layer network of human mobil-ity.
International Journal of Geographical Information Science , 31(7):1381–1402, 2017.106. Robert X.D. Hawkins, Noah D. Goodman, and Robert L. Goldstone. The emergence ofsocial norms and conventions.
Trends in Cognitive Sciences , 23(2):158–169, 2019.107. Peter Wills and Francois G. Meyer. Metrics for graph comparison: A practitioner’s guide. arXiv , 1412:2447, 2019.108. Benoit Mandelbrot.
The Fractal Geometry of Nature . W. H. Freeman and Company, 1983.109. Philip M. Iannaccone and Mustafa Khokha.
Fractal Geometry in Biological Systems . CRCPress, 1996.110. Thomas G. Smith, G D Lange, and William B. Marks. Fractal methods and results in cellularmorphology - dimensions, lacunarity and multifractals.
Journal of Neuroscience Methods ,69(2):123–136, 1996.111. Jing Z. Liu, Lu D. Zhang, and Guang H. Yue. Fractal dimension in human cerebellum mea-sured by magnetic resonance imaging.
Biophysical Journal , 85(6):4041–4046, 2003.112. Chaoming Song, Shlomo Havlin, and Hernán A. Makse. Self-similarity of complex networks.
Nature , 433(7024):392–395, 2005.113. Jian Li, Qian Du, and Caixin Sun. An improved box-counting method for image fractaldimension estimation.
Pattern Recognition , 42(11):2460–2469, 2009.114. Lior Rokach and Oded Z. Maimon. Clustering methods. In L Rokach and O Maimon, editors,
Data mining and knowledge discovery handbook , pages 321–352. Springer, 2005.115. Ad Aertsen and Hubert Preissl. Dynamics of activity and connectivity in physiological neu-ronal networks. In H G Schuster, editor,
Nonlinear dynamics and neuronal networks , pages281–302. VGH, 1991. 116. Sara McLaughlin Mitchell, Samantha Lange, and Holly Brus. Gendered Citation Patterns inInternational Relations Journals1.
International Studies Perspectives , 14(4):485–492, 2013.ISSN 1528-3577. . URL https://doi.org/10.1111/insp.12026.117. Michelle L. Dion, Jane Lawrence Sumner, and Sara McLaughlin Mitchell. Gendered CitationPatterns across Political Science and Social Science Methodology Fields.
Political Analysis
Nature Astronomy
International Organization
Nature Neuroscience , 2020. .121. Dale Zhou, Eli J. Cornblath, Jennifer Stiso, Erin G. Teich, Jordan D. Dworkin, Ann S. Blevins,and Danielle S. Bassett. Gender diversity statement and code notebook v1.0, 2020.
Szymula et al. | PNAS | June 26, 2020

SUPPLEMENTARY MATERIAL
[Supplemental Figure 1 panels: (a) saccade information, saccade network adjacency matrix, and trial representative saccade pattern; (b-d) saccade information, trial representative saccade pattern, polygon representation (with centroid distance calculation D(i)), and saccade waveform representation.]
Supplemental Figure 1. Classification of Trial Representative Saccade Patterns. (a) Saccade information, in the form of the saccadic movements identified during a trial, is collectively represented as an adjacency matrix, which in turn encodes a directed and weighted network. A total of nine nodes exist: one for every green target on the task grid. Edge weights are calculated as the number of times that a saccade is made from one node to another. The network is converted into a trial representative saccade pattern by identifying the network cycle with the greatest sum of edge weights along its path. Each trial representative saccade pattern is treated as a 2-D polygon in the task grid space consisting of a set of (x,y) points. The saccade waveform is taken to be the vector of Euclidean distances between the polygon centroid and all of its points. A one-dimensional interpolation is performed to reduce each saccade waveform to 600 values. (b-d) Example step-by-step classifications of saccade patterns from three randomly generated lists of saccades. Yellow targets denote the starting point of the saccade information, while the red target marks the ending point. Since all trial representative saccade patterns are loops, the start and end targets are the same. The red star located on the polygon representations of the saccade patterns marks the centroid of the polygon.
[Supplemental Figure 2 panels: grid numbering diagram, example pattern, and example numerical representations such as (a) [1 3 6 9 5 8 7 1], (b) [1 2 3 6 9 5 8 7 4 1], and (c) [1 3 9 5 8 7 1], followed by the numerical representations of all 136 cluster patterns for Monkey G.]
Supplemental Figure 2. Representative Cluster Saccade Patterns for Monkey G. The 136 identified saccade pattern clusters exhibited by Monkey G are shown in their numerical representations. The diagram at the top demonstrates how a saccade pattern is converted into a numerical representation. Each target on the grid is labeled with a number 1-9. The numerical representation of a pattern is then the sequence of target indices listed in the order in which they would be visited when tracing out the pattern. Each cluster's numerical sequence identifies the saccade pattern that was most similar to all other patterns in its cluster. The five most prominent clusters are marked in red type.
[Supplemental Figure 3 panels: grid numbering diagram, example pattern, and example numerical representations such as (a) [1 3 6 9 5 8 7 1], (b) [1 2 3 6 9 5 8 7 4 1], and (c) [1 3 9 5 8 7 1], followed by the numerical representations of all 346 cluster patterns for Monkey Y.]
Supplemental Figure 3. Representative Cluster Saccade Patterns for Monkey Y. The 346 identified saccade pattern clusters exhibited by Monkey Y are shown in their numerical representations. The diagram at the top demonstrates how a saccade pattern is converted into a numerical representation. Each target on the grid is labeled with a number 1-9. The numerical representation of a pattern is then the sequence of target indices listed in the order in which they would be visited when tracing out the pattern. Each cluster's numerical sequence identifies the saccade pattern that was most similar to all other patterns in its cluster. The five most prominent clusters are marked in red type.
Supplemental Figure 4. Rotational Independence of the Similarity Factor. The similarity factor metric used to compare saccade patterns from trial to trial was designed to be independent of rotation. (a) Performing circular shifts on the saccade waveform is equivalent to rotating the saccade polygon. A circular shift is an operation in which a vector is rearranged such that the last element is moved to the first position and all other elements are shifted forward by one. By performing this operation l times, it is possible to shift the last l values to the front of the vector and move all other values forward by l positions. This relationship is depicted as follows: as the pattern rotates counterclockwise, the waveform shifts all elements forward. In addition, the mirror image of the original polygon can be represented by flipping the original saccade waveform left-to-right. The red dashed line serves as a visual aid. (b) Ten arbitrary saccade patterns and their calculated similarity matrix. The rotational independence of the measurement is evident: the value of similarity between pattern 8 and patterns 3 and 8 is equal to 1 (the highest value). Note that the value of similarity between patterns 6 and 3 (direct mirror images) is also equal to 1.
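The invariances described in this caption can be realized by scoring two waveforms over all circular shifts of one of them, and of its left-right mirror image, and keeping the best score. The sketch below uses Pearson correlation as the comparison score purely for illustration; the paper's exact similarity factor computation may differ.

```python
import numpy as np

def similarity_factor(w1, w2):
    """Rotation- and mirror-invariant similarity between two equal-length
    saccade waveforms: the maximum Pearson correlation over all circular
    shifts of w2 and of its left-right mirror image."""
    def best_corr(a, b):
        # Circular shifts of b correspond to rotations of the polygon
        return max(np.corrcoef(a, np.roll(b, k))[0, 1] for k in range(len(b)))
    # The left-right flip of the waveform corresponds to the mirror image
    return max(best_corr(w1, w2), best_corr(w1, w2[::-1]))
```

A rotated or mirrored copy of a pattern therefore scores 1, matching the behavior shown in panel (b).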
Supplemental Figure 5. Saccade Metric Dynamics. (a) Dynamics of the average similarity factor (ASF) across all sessions for Monkey G (left) and Monkey Y (right). Filled boundary areas represent +/- 1 standard deviation. (b) Dynamics of the average complexity factor (ACF) across all sessions for Monkey G (left) and Monkey Y (right). Filled boundary areas represent +/- 1 standard deviation. (c) Dynamics of the cluster label entropy (saccade cluster entropy, SCE) across all sessions for Monkey G (left) and Monkey Y (right).
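The cluster label entropy for a session can be computed as the Shannon entropy of the empirical distribution of cluster labels chosen within that session. A minimal sketch (the function name and the base-2 logarithm are our choices):

```python
import numpy as np

def cluster_label_entropy(labels):
    """Shannon entropy (bits) of the distribution of saccade-pattern
    cluster labels within a session; higher values indicate a less
    predictable (more entropic) selection of patterns."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A session in which one pattern dominates has entropy near 0, while a session sampling many clusters evenly has high entropy.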
Supplemental Figure 6. Control Energy Dynamics: Time Horizon Parameter Sweep. (a) Average control energy (ACE) dynamics across all sessions for Monkey G (left) and Monkey Y (right) using five different values of the time horizon parameter. Smaller time horizon values result in higher-magnitude energy values, and vice versa. The time horizon was set to a value of 1 for all analyses in the main text. (b) All three energy-behavior correlations were re-calculated using the control energy derived from a range of time horizons (0.5 to 1.5 in intervals of 0.1). The change in each energy-behavior correlation (average similarity factor, average complexity factor, and cluster label entropy) is shown as a function of the time horizon (Monkey G, left; Monkey Y, right).
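The dependence of control energy on the time horizon can be illustrated with the standard minimum-energy result from linear control theory. The sketch below assumes continuous-time linear dynamics with every channel directly controllable (B = I) and approximates the finite-horizon controllability Gramian numerically; the paper's exact formulation (e.g., its dynamics matrix and input set) may differ.

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, x0, xf, T=1.0, n_steps=200):
    """Minimum energy needed to steer x'(t) = A x(t) + u(t) (B = I)
    from state x0 to state xf over time horizon T, using the
    finite-horizon controllability Gramian
    W(T) = integral_0^T e^{At} e^{A't} dt (trapezoidal quadrature)."""
    dt = T / n_steps
    W = np.zeros_like(A, dtype=float)
    for k in range(n_steps + 1):
        Ek = expm(A * (k * dt))
        weight = 0.5 if k in (0, n_steps) else 1.0
        W += weight * (Ek @ Ek.T) * dt
    # Residual state the input must supply, then the quadratic energy cost
    v = xf - expm(A * T) @ x0
    return float(v @ np.linalg.solve(W, v))
```

Consistent with the parameter sweep in panel (a), shrinking T increases the energy required for the same state transition.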
Supplemental Figure 7. Virtual Region-Specific Lesion Analysis Guide. (a) Visual depiction of the lesion analysis workflow. The lesion knockout consists of performing a series of edge-knockouts where (i) all edges between two regions are set to zero in the effective connectivity matrix (inter-region), or (ii) all edges connecting one region to itself are set to zero (intra-region). (b) For each region-specific lesion, the effective connectivity matrix with region-specific edges knocked out is used to compute the average control energy and its correlation (C_L) to the saccade metrics. (c) Random lesions (1000x) were performed to ensure that the lesion-induced disruption of the observed correlations between average control energy and the behavioral metrics was specific to the lesion chosen, and not expected when lesioning the same number of randomly chosen edges. For each random lesion, the average control energy and its correlation (C_NL) to the saccade metrics was computed. (d) The first criterion to test whether the knocked-out edges were relevant to the observed energy-behavior correlations was that the region-specific lesion resulted in a lesion correlation value, C_L, that was not significantly different (p > 0.05) from that obtained using the original permutation null model. The p-value obtained from this significance test is referred to as p-value_general. (e) For the second criterion, a null lesion distribution was created with each value calculated as abs(C_O - C_NL), where C_O is the observed energy-behavior correlation without any lesions. The significance of the region-specific lesion disruption (abs(C_O - C_L)) of the observed correlations was determined using a one-tailed test on the null lesion distribution with α = 0.05. The p-value obtained from this significance test is referred to as p-value_lesion.

Supplemental Figure 8. Resilience of Energy-Behavior Correlations to Network Disruption. (a,b) Resilience of energy-behavior correlations as a function of the number of edges randomly lesioned for Monkey G (a) and Monkey Y (b). Every lesion was performed 100 times, and each plot value is therefore the average absolute correlation difference. The absolute correlation difference was calculated as the difference between the observed energy-behavior correlation without any lesioning and the correlation after random edges were lesioned from the effective connectivity matrix. Red dashed lines denote the minimum threshold required to significantly disrupt the energy-behavior correlation such that its p-value is greater than α = 0.05.
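The one-tailed test described for the lesion analysis amounts to asking how often a random lesion disrupts the observed energy-behavior correlation at least as much as the region-specific lesion does. A minimal sketch (the function name is ours):

```python
import numpy as np

def lesion_p_value(c_obs, c_lesion, c_null_lesions):
    """One-tailed p-value for whether a region-specific lesion disrupts
    the observed energy-behavior correlation more than random lesions do.

    c_obs: correlation with no lesion; c_lesion: correlation after the
    region-specific lesion; c_null_lesions: correlations after each of
    the random (e.g., 1000) same-size lesions.
    """
    disruption = abs(c_obs - c_lesion)
    null = np.abs(c_obs - np.asarray(c_null_lesions))
    # Fraction of random lesions that disrupt at least as much
    return float((null >= disruption).mean())
```

A small p-value indicates that the chosen edges matter specifically, beyond what an equally sized random lesion would produce.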