Modularity allows classification of human brain networks during music and speech perception
Melia E. Bonomo, Christof Karmonik, Anthony K. Brandt, J. Todd Frazier
Department of Physics and Astronomy, Rice University, Houston, TX 77005, USA
Center for Theoretical Biological Physics, Rice University, Houston, TX 77005, USA
Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX 77030, USA
MRI Core, Houston Methodist Research Institute, Houston, TX 77030, USA
Department of Radiology, Weill Cornell Medical College, New York, NY 10065, USA
Shepherd School of Music, Rice University, Houston, TX 77005, USA
∗ [email protected]
We investigate the use of modularity as a quantifier of whole-brain functional networks. Brain networks are constructed from functional magnetic resonance imaging while subjects listened to auditory pieces that varied in emotivity and cultural familiarity. Our analysis reveals high and low modularity groups based on the network configuration during a subject's favorite song, and this classification can predict network reconfiguration during the other auditory pieces. In particular, subjects in the low modularity group show significant brain network reconfiguration during both familiar and unfamiliar pieces. In contrast, the high modularity brain networks appear more robust and only exhibit significant changes during the unfamiliar music and speech. We also find differences in the stability of module composition for the two groups during each auditory piece. Our results suggest that the modularity of the whole-brain network plays a significant role in the way the network reconfigures under varying auditory processing demands, and it may therefore contribute to individual differences in neuroplasticity during therapeutic music engagement.
Keywords: functional connectivity | network | modularity | music perception | human brain

I. INTRODUCTION
Modular structure is pervasive in biology and plays an important role in optimizing the functional capabilities of different systems [1, 2]. Broadly speaking, modularity is the degree to which the components of a complex system can be divided into distinct units, called modules. The emergence of modules in biology appears to have resulted from selection for efficient structures during evolution in a dynamic environment [3]. Higher modularity is linked to optimized function and robustness to perturbation [3, 4]; however, lower modularity is more advantageous over longer timescales, as it does not constrain the system to a rigid configuration [5]. The concept of modularity has been valuable for studying biological structure and function at various scales, including metabolic circuits [6], antibody immune response to influenza [7], protein-protein interaction networks [8], ecological food webs [9], and human brain networks [10].

In brain networks, nodes are typically defined by brain regions, and edges can be based either on anatomical connections or on relationships between the functional activity of different regions. Functional activity is a time series signal that can be acquired through various neuroimaging modalities, such as functional magnetic resonance imaging (fMRI), while subjects are at rest or performing specific tasks. Based on the architecture of the resulting network, brain regions that are densely connected can be grouped into modules. Modularity quantifies the overall community structure [11]. The composition of modules has been used as a biomarker for illness [12], and among healthy subjects, individual differences in modularity correlate with individual differences in cognitive task performance [13–15].
Modularity has also been used to quantify changes in the brain during learning [16] and to investigate organization of the functional network under specific demands, such as during visual tasks [17]. The use of modularity to distinguish how individual subjects' brain networks reconfigure under varying task demands has not been widely explored. An important application is to non-pharmaceutical cognitive interventions, such as music therapy [18], that are meant to enhance traditional medical treatments for patients with neurological disease and trauma. Significant network reconfiguration, quantifiable by the change in modularity, while subjects participate in these therapy enrichments may have implications for encouraged neuroplasticity and cognitive recovery. The theoretical grounding of modularity in biology may also shed light on why certain patients are more receptive than others to music-based interventions [19] and why differences in the subtleties of the intervention, such as an auditory enrichment using music versus speech [20], significantly affect outcomes.

In this Rapid Communication, we investigate the modularity of whole-brain functional connectivity networks from fMRI data recorded while healthy subjects listened to auditory pieces that varied in emotivity and cultural familiarity. We also introduce a "super-module" analysis method to study the consistency of module composition across different auditory pieces. The degree of modular structure in these networks during a subject's self-selected song is shown to be predictive of how the network architecture changes during familiar versus unfamiliar pieces. Namely, by classifying subjects into high and low modularity groups, we find that the low modularity networks exhibit significant adaptations during both familiar and unfamiliar music and speech, whereas the high modularity networks only significantly adapt during the unfamiliar pieces.
We also find that coordinated activity among brain regions associated with self-referential thoughts is more consistent for subjects in the high modularity group than for those in the low modularity group, whereas the module of auditory processing brain regions is more stable for subjects in the low modularity group. These results demonstrate modularity as a viable quantifier of neural responses to music and speech. This work paves the way for understanding the diversity of responses that patients with neurological disease or trauma may have to auditory-based therapy enrichments.
II. METHODS

A. fMRI Auditory Task
During fMRI, six auditory pieces from a pilot study [21, 22] were played for subjects (Table I). We refer the reader to the Supplemental Material [23] for details about the cohort, fMRI acquisition, and fMRI pre-processing.

a. Self-Selected Song (Self). Participants each chose a song to which they felt a strong emotional attachment.

b. Invention No. 1 (Bach). This piano piece in C major composed by J. S. Bach is representative of classical music that is culturally familiar to the participants in this study. It comprises sufficient rhythmic and melodic variation to encourage engaged listening.

c. Jussuiraku (Gagaku). This instrumental gagaku piece from a Japanese opera in the oshiki-cho scale contains irregular rhythms, expressive noises, and deliberate detuning, and it is meant to contrast with the piece by Bach. Gagaku is classical Japanese court music that was culturally unfamiliar to the participants in this study.

d. Xhosa Speech (Xhosa). Xhosa is a tonal Bantu language spoken in South Africa that contains three types of percussive click sounds. The words and clicks are very distinct from sounds common to English and related languages, and this speech excerpt is therefore culturally unfamiliar to the participants.

e. Newscast Reading (Cronkite). This is a dry newscast presented by Walter Cronkite in 1973 about potential alien sightings. Cronkite delivers the report dispassionately using a standard broadcasting speech pattern.

f. "The Great Dictator" Speech (Chaplin). This is an emotionally charged speech delivered by actor Charlie Chaplin while impersonating a dictator in his political satire film. The excerpt is meant to contrast with Cronkite.
TABLE I. Number of subjects that listened to each auditory piece, overall and in the high and low modularity groups.

               Self  Bach  Gagaku  Xhosa  Cronkite  Chaplin
All             24    24     15      13      11       10
Low M_Self      15    15      7       7       5        3
High M_Self      9     9      8       6       6        7
B. Network construction
To construct functional activity networks, 84 Brodmann area (BA) brain regions are used as nodes, and the edges are determined by correlations in the activity between BAs during each auditory piece. The Pearson correlation coefficient is computed between the time series of each BA pair to generate a weighted connectivity matrix for each subject listening to each auditory piece. The functional connectivity matrix is binarized to a network density of 11.5%, where the 400 edges with the highest weights are projected to unity and all others are set to zero. This density ensures the network is fully connected yet sufficiently sparse to improve the signal-to-noise ratio [13, 15]. The resulting connectivity matrices are symmetric networks with unweighted, undirected edges.
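As a concrete illustration, the construction and binarization steps above can be sketched in NumPy (a schematic re-implementation under the stated conventions, not the authors' code; the array shapes and the 400-edge threshold follow the text):

```python
import numpy as np

def build_binary_network(time_series, n_edges=400):
    """Binarize a functional connectivity matrix to its strongest edges.

    time_series -- array of shape (n_regions, n_timepoints), one row per
    Brodmann area.  Returns a symmetric, unweighted 0/1 adjacency matrix
    keeping the n_edges largest pairwise Pearson correlations, which for
    84 regions gives the 11.5% network density used in the text.
    """
    n = time_series.shape[0]
    corr = np.corrcoef(time_series)        # weighted connectivity matrix
    iu = np.triu_indices(n, k=1)           # each region pair counted once
    top = np.argsort(corr[iu])[-n_edges:]  # indices of strongest pairs
    adj = np.zeros((n, n), dtype=int)
    adj[iu[0][top], iu[1][top]] = 1
    return adj | adj.T                     # undirected: mirror the edges
```

For 84 nodes there are 84·83/2 = 3486 possible edges, so keeping 400 indeed yields a density of about 11.5%.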
C. Modularity analysis
We use Newman's algorithm [11] as implemented in [15] to partition BAs into modules φ_k, such that the arrangement maximizes the modularity, defined as

M(\{\phi\}) = \frac{1}{2L} \sum_k \sum_{ij \in \phi_k} \left( A_{ij} - \frac{a_i a_j}{2L} \right),   (1)

where L is the number of edges, A_ij is the binarized connectivity matrix entry for BAs i and j, and a_i is the degree of BA i. The inner sum is evaluated for all ij node pairs in module φ_k. The algorithm evaluates the modularity of each distribution of nodes into modules {φ} against a null model, a_i a_j / 2L, such that the existence of each intramodule link is weighed against the probability that a link between nodes i and j would be expected in a random network with the same degree distribution. This is important because fluctuations in random networks have the potential to produce high modularity values [24].

To quantify the adaptability of the functional network under different auditory processing demands, we consider the modular architecture during Self as a subject-specific baseline. The amount the network architecture changes during other auditory pieces then reflects the extent to which listening to these other pieces perturbs the brain from its baseline processing configuration. The change in modularity for each auditory piece n is calculated as ∆M_n = M_n − M_Self, and the statistical significance of ∆M_n is determined by computing p-values from one-sample, two-tailed t-tests using the Statistics and Machine Learning Toolbox in MATLAB. For 17 of the 24 subjects, M_Self is either the highest or lowest modularity of the auditory pieces that each of those subjects listened to, motivating the use of Self as a baseline network for calculating ∆M_n. Furthermore, the substantial subject-to-subject variation warrants the use of ∆M_n rather than absolute M values to compare the cohort results for different auditory pieces. Namely, the average modularity over all pieces varies substantially from subject to subject, spanning a wide range of mean values (standard errors of 0.01–0.02) [23]. Sixteen of the 24 subjects have an average modularity that is significantly different at p < 0.05 from that of at least one other subject, and four subjects differ at p < 0.01 from at least one other subject, based on two-sample, two-tailed t-tests.

D. Module composition analysis
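The modularity score of Eq. (1), which is also used for the module composition analysis below, can be sketched as follows (a schematic scorer for a given partition; Newman's algorithm additionally searches over partitions to maximize this quantity, and the partition here is supplied by hand):

```python
import numpy as np

def modularity(adj, labels):
    """Score a partition with the modularity M of Eq. (1).

    adj    -- symmetric 0/1 adjacency matrix A_ij
    labels -- module assignment phi_k for each node
    """
    adj = np.asarray(adj, dtype=float)
    labels = np.asarray(labels)
    L = adj.sum() / 2.0                    # number of edges
    a = adj.sum(axis=0)                    # degrees a_i
    same = np.equal.outer(labels, labels)  # True where i, j share a module
    null = np.outer(a, a) / (2.0 * L)      # configuration-model expectation
    return float(((adj - null) * same).sum() / (2.0 * L))
```

Two disconnected triangles split into their natural modules score M = 0.5, the textbook value; ∆M_n is then modularity(A_n, phi_n) − modularity(A_Self, phi_Self) for each subject.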
To study which brain regions are commonly grouped together into modules, the functional connectivity matrices for all subjects are averaged together for each of the six different auditory pieces. We keep the top ≤ 400 edges that are statistically significant, as determined by a one-sample, two-tailed t-test for each edge. The average connectivity matrices are then binarized, and modularity is calculated with Eq. (1). This yields a set of modules {φ} for each of the six average networks.

Newman's algorithm arbitrarily assigns a label to each module that it finds, and this label is not consistent across the different networks, even if the module composition appears qualitatively comparable. We therefore introduce a method to quantitatively compare modules across different networks. First, we determine the similarities in BA membership by calculating the Jaccard index [25] between all pairs (i ≠ j) of modules φ_i and φ_j across all six networks,

J(\phi_i, \phi_j) = \frac{|N_{\phi_i} \cap N_{\phi_j}|}{|N_{\phi_i} \cup N_{\phi_j}|},   (2)

where N_{\phi_i} is the set of BA nodes in module φ_i. A similarity of J = 1 refers to two modules in different networks that have an identical node composition. J = 0 means the two modules are either in the same network, or they are in different networks and do not have any nodes in common. Second, a set of super-modules {Φ} is determined using Eq. (1). Here, the network nodes are the modules φ_i, and the edges between each ij node pair are the J(φ_i, φ_j) similarity coefficients. In other words, the combined φ_i modules across the six networks are grouped into super-modules Φ_k based on overlap in the modules' sets of BA nodes, N_{\phi_i}. The Φ_k groupings are then used to assign consistent labels to these modules, which are analogs across the networks of different auditory pieces. The φ_i modules assigned to super-module k in auditory piece n collectively become Φ_{kn}, and their N_{\phi_i} are amalgamated, such that N_{\Phi_{kn}} is the total set of BAs in super-module Φ_{kn}.
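Eq. (2) reduces to a set computation; a minimal sketch, with module membership represented as Python sets of BA labels (an illustrative encoding, not the authors' code):

```python
def jaccard(n_a, n_b):
    """Jaccard index of Eq. (2) between two modules' node sets."""
    n_a, n_b = set(n_a), set(n_b)
    if not (n_a | n_b):
        return 0.0  # guard for the empty-set edge case
    return len(n_a & n_b) / len(n_a | n_b)
```

Identical sets give J = 1 and disjoint sets give J = 0, matching the interpretation in the text.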
To quantify how stable the composition of each super-module Φ_k is across all auditory pieces, we calculate

P_{\Phi_k} = \frac{\sum_{n<m} J(\Phi_{kn}, \Phi_{km})}{S(S-1)/2},   (3)

where S = 6 is the number of auditory pieces, and J(Φ_{kn}, Φ_{km}) is the Jaccard index between auditory pieces n and m. P_{Φ_k} = 1 means that super-module Φ_k has an identical set of BAs in all auditory pieces, whereas P_{Φ_k} = 0 means that Φ_k is present in only one piece.

High and low modularity group networks are created by averaging the functional connectivity matrices for all applicable subjects in that group for each auditory piece. This results in 12 average networks, with super-modules Φ_k determined from the aggregate φ_i modules for all of these networks using the same method described above.
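The stability score is then an average of pairwise Jaccard indices over the S = 6 pieces; a sketch, with super-module membership per piece encoded as node sets and an empty set where the super-module is absent (an illustrative encoding, not the authors' code):

```python
from itertools import combinations

def stability(members_by_piece):
    """Stability P of Eq. (3) for one super-module.

    members_by_piece -- one set of BA nodes per auditory piece.
    Returns the Jaccard index averaged over all S(S-1)/2 pairs of pieces.
    """
    def jac(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    pairs = list(combinations(members_by_piece, 2))
    return sum(jac(a, b) for a, b in pairs) / len(pairs)
```

Identical membership in every piece yields P = 1, while presence in only a single piece yields P = 0, as described in the text.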
FIG. 1. Change in modularity across auditory pieces for (A) the cohort and (B) subjects divided into two groups. Error bars represent standard error. Single asterisks indicate p < 0.05, double asterisks indicate p < 0.01, and the dagger indicates a marginal significance of p = 0.…

III. RESULTS

A. Network Adaptability
The changes in modularity from Self to the other auditory pieces are calculated for each subject individually and then averaged over all subjects for each auditory piece. There is a statistically significant increase in modularity during Chaplin (Fig. 1A). This suggests that there is some universality to having higher modularity for speech comprehension. Indeed, a recent study that quantified functional network changes as the brain adapted to a speech listening task found that more successful listening was correlated with subjects having higher modularity during the task than during a resting state [26]. The increase in modularity that we observe also appears to be related to the emotive aspect of the Chaplin piece, since Cronkite did not elicit the same response.

As mentioned above, modularity is either at its highest or lowest during Self for most subjects. We were interested in seeing whether the null results for Cronkite, Bach, Gagaku, and Xhosa shown in Fig. 1A were due to the effects being cancelled out by these two different types of subjects. To explore this, subjects are divided into low and high modularity groups based on whether their modularity during their self-selected piece was lower or higher than the cohort average of M_Self = 0.43. Table I shows the number of subjects in each resulting group. The change in modularity is now averaged among subjects within each group (Fig. 1B). Subjects who have low modularity during Self adapt their network architecture during both familiar (Chaplin and Bach) and unfamiliar (Xhosa) pieces, whereas subjects who have high modularity during Self only significantly adapt during the unfamiliar pieces (Gagaku and Xhosa). These results are in line with numerical experiments demonstrating that high modularity networks are more robust to perturbation [4]. This has interesting implications for understanding why the effects of auditory-based therapeutic interventions often vary strongly across patients [19], warranting future research. Patients with lower modularity during a favorite song may be more receptive to any type of music or auditory enrichment, whereas patients with higher modularity may require unique and unfamiliar auditory stimuli to sufficiently perturb their brain networks and encourage neuroplasticity.
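The grouping and significance test described above can be sketched as follows (a schematic re-implementation; the paper used MATLAB's Statistics and Machine Learning Toolbox, and the threshold here is simply the cohort mean of M_Self):

```python
import numpy as np

def split_by_m_self(m_self):
    """Boolean masks for the low/high groups, split at the cohort mean."""
    m_self = np.asarray(m_self, dtype=float)
    low = m_self < m_self.mean()   # cohort average (0.43 in the paper)
    return low, ~low

def one_sample_t(delta_m):
    """t statistic for a one-sample test of mean(delta_m) against 0."""
    d = np.asarray(delta_m, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

The returned t statistic would then be converted to a two-tailed p-value against the t distribution with n − 1 degrees of freedom.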
B. Module Composition
We compare the module composition for the cohort and for the low and high modularity groups across all auditory pieces and examine the module memberships of the BAs associated with the following functions: auditory processing [27], visual and mental imagery processing [27, 28], sensorimotor function [29], emotion processing in the hippocampus [30] and temporal pole [31], and the default mode network (DMN) [32, 33] (Fig. 2). Due to high subject-to-subject variation in edges when averaging brain networks across subjects, part of the hippocampus was often not assigned to a module. This low consistency of the hippocampus module allegiance across subjects is in agreement with prior brain modularity work [34].

Our analysis method described in Sec. II D identifies three super-modules (Fig. 3): one contains the auditory processing BAs, one contains the visual processing BAs, and one contains BAs from the DMN. Individual BA module allegiance is listed in [23]. These super-modules are fairly stable across all auditory pieces (Table II). The division of brain regions into functionally significant modules is in line with previous work finding that the task-based modular organization of brain regions is consistent with the regions needed to complete the task [17].
FIG. 2. The locations of BAs for relevant functions and the orientations of brain networks presented in Figures 3 and 4. Sensorimotor: BA01, BA02, BA03, BA04. Visual: BA17, BA18, BA19. Auditory: BA22, BA41, BA42. Emotion: BA27, BA28, BA34, BA35, BA38. DMN: BA08, BA09, BA10, BA21, BA23, BA24, BA28, BA29, BA30, BA31, BA32, BA36, BA39, BA40.

TABLE II. Stability of super-modules, P_{Φ_k}, across each set of average networks.

Super-module   All Subjects   Low M_Self    High M_Self
Auditory       0.… ± 0.02     0.… ± 0.03    0.… ± 0.…
Visual         0.… ± 0.02     0.… ± 0.02    0.… ± 0.…
DMN            0.… ± 0.02     0.… ± 0.03    0.… ± 0.…
Sensorimotor   n/a            0.… ± 0.03    0.… ± 0.…
Emotion        n/a            0.… ± 0.08    0.… ± 0.…
FIG. 3. Functional connectivity matrices averaged over all subjects during each auditory piece, with brain networks oriented as in Fig. 2. The super-modules across all pieces are characterized by the auditory processing BAs (blue), visual processing BAs (red), and BAs in the DMN (purple). Intermodule edges and BAs not assigned to a module are colored black.

When dividing the subjects into low and high modularity groups, we identify two additional functionally significant super-modules (Fig. 4): one contains the BAs involved in sensorimotor function, and the other is characterized by the emotion processing BAs. This more precise breakdown reveals group-wise differences in the stability of super-modules (Table II). Namely, the DMN super-module was significantly more dynamic across the different auditory pieces for the low modularity group than for the high modularity group. The DMN characterizes a set of brain regions that are active during stimulus-independent thought and have been linked to autobiographical memory and prospection [32, 35]. The fact that this super-module is more intact for the high modularity group could point to differences in how much subjects in the two groups engage in mind-wandering during the varying auditory task demands. In addition, the auditory super-module was moderately more stable across the different pieces for the low modularity group. It is interesting that while this group's community structure is overall more dynamic regardless of the familiarity of the stimulus (Fig. 1B), on average there is this core auditory processing module. In the context of prior experiments and theory showing that lower modularity networks are better suited for performing complex tasks (i.e., those requiring multiple types of cognitive functions) whereas higher modularity networks are more beneficial for fast responses to straightforward tasks [13–15], our results may suggest that the low modularity group is optimizing both properties. That is, the low modularity group retains high fidelity of the auditory super-module for efficient processing of basic auditory features, while the overall network has high adaptability to process the additional cognitive components of the stimulus (e.g., familiarity, emotion, self-referential thoughts, memory).

IV. CONCLUSION
In summary, we investigated the dynamic, whole-brain networks of subjects listening to music and speech through the lens of modularity. While many task-based neuroimaging studies focus on interpreting brain activations in specific functional regions of interest (e.g., only those in the auditory cortex), whole-brain methods are poised to investigate how those activations fit into the larger context of the brain's comprehension of (auditory) information [36]. Furthermore, though a battery of graph theoretical measures is often used to quantify functional networks, modularity is a particularly elegant measure with a biophysical grounding for studying what drives a particular network reorganization [2]. We have shown that baseline modularity and the familiarity of the stimulus both played a role in (1) the extent to which the brain network was perturbed and (2) which groups of BAs across the whole brain exhibited coordinated activity during the auditory pieces. Even though we had a unitary, healthy population, our work highlighted the importance of considering results on a more individual level, as considering only the results for the cohort together averaged out the interesting group-wise differences. The trends seen for individuals with higher or lower modularity during their self-selected musical piece provide insight into the diversity of music and speech perception among people that might explain why the effect of a music intervention can vary strongly across individual patients. By demonstrating modularity as a quantifier of an individual's "fingerprint" [37] during general auditory processing and of the dynamic reorganization of the functional connectivity network during music and speech perception, this work may inform auditory-based interventions for patients with neurological disease and trauma.
ACKNOWLEDGMENTS
The authors thank M. W. Deem for helpful discussions about the theory of this paper. This work was supported by the Center for Theoretical Biological Physics at Rice University (National Science Foundation, PHY 1427654), the Ting Tsung and Wei Fong Chao Foundation, and the Houston Methodist Center for Performing Arts Medicine.

[1] L. H. Hartwell, J. J. Hopfield, S. Leibler, and A. W. Murray, Nature, C47 (1999).
[2] D. M. Lorenz, A. Jeng, and M. W. Deem, Physics of Life Reviews, 129 (2011).
[3] J. Sun and M. W. Deem, Physical Review Letters, 228107 (2007).
[4] E. A. Variano, J. H. McCoy, and H. Lipson, Physical Review Letters, 188701 (2004).
[5] J.-M. Park, L. R. Niestemski, and M. W. Deem, Physical Review E, 012714 (2015).
[6] E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási, Science, 1551 (2002).
[7] M. E. Bonomo, R. Y. Kim, and M. W. Deem, Vaccine, 3154 (2019).
[8] A. Mihalik and P. Csermely, PLoS Computational Biology, e1002187 (2011).
[9] A. E. Krause, K. A. Frank, D. M. Mason, R. E. Ulanowicz, and W. W. Taylor, Nature, 282 (2003).
[10] O. Sporns and R. F. Betzel, Annual Review of Psychology, 613 (2016).
[11] M. E. Newman, Proceedings of the National Academy of Sciences USA, 8577 (2006).
[12] M. Chavez, M. Valencia, V. Navarro, V. Latora, and J. Martinerie, Physical Review Letters, 118701 (2010).
[13] Q. Yue, R. C. Martin, S. Fischer-Baum, A. I. Ramos-Nuñez, F. Ye, and M. W. Deem, Journal of Cognitive Neuroscience, 1532 (2017).
[14] A. V. Lebedev, J. Nilsson, and M. Lövdén, Journal of Cognitive Neuroscience, 1033 (2018).
[15] M. Chen and M. W. Deem, Physical Biology, 016009 (2015).
[16] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha, J. M. Carlson, and S. T. Grafton, Proceedings of the National Academy of Sciences USA, 7641 (2011).
[17] Z. Zhuo, S.-M. Cai, Z.-Q. Fu, and J. Zhang, Physical Review E, 031923 (2011).
[18] H. L. Stuckey and J. Nobel, American Journal of Public Health, 254 (2010).
[19] T. Grimm and G. Kreutz, Brain Injury, 704 (2018).
[20] T. Särkämö, M. Tervaniemi, S. Laitinen, A. Forsblom, S. Soinila, M. Mikkonen, T. Autti, H. M. Silvennoinen, J. Erkkilä, M. Laine, et al., Brain, 866 (2008).
[21] C. Karmonik, A. Brandt, J. R. Anderson, F. Brooks, J. Lytle, E. Silverman, and J. T. Frazier, Brain Connectivity, 632 (2016).
[22] C. Karmonik, A. Brandt, S. Elias, J. Townsend, E. Silverman, Z. Shi, and J. T. Frazier, International Journal of Computer Assisted Radiology and Surgery, 703 (2020).
[23] See Supplemental Material at [URL will be inserted by publisher] for more details of methods and further relevant results.
[24] R. Guimera, M. Sales-Pardo, and L. A. N. Amaral, Physical Review E, 025101(R) (2004).
[25] R. Real, Miscellania Zoologica, 29 (1999).
[26] M. Alavash, S. Tune, and J. Obleser, Proceedings of the National Academy of Sciences USA, 660 (2019).
[27] K. Zilles and K. Amunts, in The Human Nervous System (Elsevier, Amsterdam, 2012), pp. 836–895.
[28] G. Ganis, W. L. Thompson, and S. M. Kosslyn, Cognitive Brain Research, 226 (2004).
[29] J. H. Kaas, in The Human Nervous System (Elsevier, Amsterdam, 2012), pp. 1059–1092.
[30] S. Geyer and R. Turner, Microstructural Parcellation of the Human Cerebral Cortex (Springer Science & Business Media, 2013).
[31] I. R. Olson, A. Plotzker, and Y. Ezzyat, Brain, 1718 (2007).
[32] M. E. Raichle, A. M. MacLeod, A. Z. Snyder, W. J. Powers, D. A. Gusnard, and G. L. Shulman, Proceedings of the National Academy of Sciences, 676 (2001).
[33] R. W. Thatcher, D. M. North, and C. J. Biver, Frontiers in Human Neuroscience, 529 (2014).
[34] R. W. Wilkins, D. A. Hodges, P. J. Laurienti, M. Steen, and J. H. Burdette, Scientific Reports, 6130 (2014).
[35] R. N. Spreng and C. L. Grady, Journal of Cognitive Neuroscience, 1112 (2010).
[36] L. de Wit, D. Alexander, V. Ekroll, and J. Wagemans, Psychonomic Bulletin & Review, 1415 (2016).
[37] E. S. Finn, X. Shen, D. Scheinost, M. D. Rosenberg, J. Huang, M. M. Chun, X. Papademetris, and R. T. Constable, Nature Neuroscience, 1664 (2015).
FIG. 4. Functional connectivity matrices averaged over all subjects in the (A) low and (B) high modularity groups during each auditory piece, with brain networks oriented as in Fig. 2. The super-modules across all auditory pieces are characterized by BAs implicated in auditory processing (blue), visual processing (red), the DMN (purple), sensorimotor function (yellow), and emotion processing (green). Intermodule edges and BAs not assigned to a module are colored black.

Supplemental Material

Modularity allows classification of human brain networks during music and speech perception
Melia E. Bonomo, Christof Karmonik, Anthony K. Brandt, J. Todd Frazier
I. Additional Methods
Participants.
The study protocol was approved by the Houston Methodist Hospital Institutional Review Board, and all participants gave informed consent. Twenty-five healthy volunteers between the ages of 18 and 82 were recruited from the Houston community to participate in this study. Data for the first 12 subjects were previously collected during a pilot study [1, 2]. Participants were not taking any chronic medication or psychoactive drugs. There was a heterogeneous distribution of gender, age, and extent of music education to avoid biasing toward any of these factors. Due to technical difficulty, data from one participant were excluded from the analysis.
MRI Acquisition.
Neuroimaging took place at the Houston Methodist Research Institute MRI core using a Philips Ingenia 3.0 T scanner. Anatomical scans were acquired with a turbo field echo pulse sequence at an 8.2 ms repetition time and 3.8 ms echo time (field of view of 24 x 24 x 16.5 cm, 1.0 mm isotropic resolution, axial orientation). Functional scans were acquired in T2*-weighted slices with an echo planar imaging pulse sequence at a 2400 ms repetition time and 35 ms echo time (field of view: 22 x 22 x 12 cm, resolution: 1.5 x 1.5 x 3.0 mm, axial orientation). The functional imaging was obtained while subjects listened to each auditory piece through headphones in the scanner bed. High frequencies were increased during playback of each audio track using the iTunes digital equalizer to account for attenuation of these tones in the air tubing used to connect to the headphones. The listening task followed a standard block design, in which there was silence for 10 brain volumes (24 s), followed by 12 blocks alternating 10-volume intervals of auditory stimulus and silence, for a total of 130 volumes (312 s) in each run (see Figure S1). The order of pieces played was Self, Bach, Gagaku, Xhosa, Cronkite, and Chaplin. The Self songs were downloaded from iTunes (Apple Inc.). The number of pieces that each subject listened to depended on how long they were comfortable staying in the scanner.
MRI Pre-Processing.
The MRI data underwent standard pre-processing in AFNI [3] for alignment of the anatomical and functional scans, motion correction, spatial smoothing, and bandpass filtering of the blood oxygen level-dependent (BOLD) signal to remove the constant offset and high frequencies. The AFNI software was also used to transform the data into Talairach space and reconstruct the whole-brain signal into 84 Brodmann areas (BAs), in which the time series were averaged over all voxels segmented into each BA. Previous work has shown consistency in modularity trends across different parcellation atlases [4]. The first 24 s of silence during each run were not included in the analysis.
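For illustration, bandpass filtering of a single BOLD time series can be sketched with an FFT mask (schematic only; the study used AFNI's pre-processing tools, and the 0.01–0.1 Hz passband here is a typical fMRI choice, not a value taken from the paper):

```python
import numpy as np

def bandpass(signal, tr=2.4, low=0.01, high=0.1):
    """Zero out FFT components of a time series outside [low, high] Hz.

    tr gives the sampling interval (the 2.4 s repetition time here);
    removing the 0 Hz bin also removes the constant offset, as in the
    pre-processing described above.  The cutoffs are illustrative.
    """
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(signal.size, d=tr)   # bin frequencies in Hz
    spec = np.fft.rfft(signal)
    spec[(freqs < low) | (freqs > high)] = 0.0   # hard frequency mask
    return np.fft.irfft(spec, signal.size)
```

A constant input maps to (numerically) zero output, since the DC bin falls below the low cutoff.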
[Figure S1 panels: (A) fMRI data collection: anatomical scan and functional scan collected while the subject listens to Self via a block design (24 s off, 288 s on/off run); (B) whole-brain parcellation: 84 BA time series; (C) functional network construction: 84x84 connectivity matrix; (D) functional network binarization (Subject 24, Self; network link weight threshold = 0.74); (E) modularity calculations (Subject 24: Self, M = 0.58, 4 modules; Bach, M = 0.56, ∆M = -0.02).]
FIG. S1. Protocol followed for processing MRI data. Analysis of Subject 12 listening to Bach is shown as an example. (A) The anatomical MRI scan is collected and shown here using the radiological convention, where −x is right, +x is left, −y is anterior, +y is posterior, −z is inferior, and +z is superior. The functional scan is collected during a 312 s run that follows a block design for each auditory piece. The anatomical and functional scans are aligned to obtain the BOLD signal from each 3 mm voxel over the whole brain. (B) The whole brain is parcellated into 84 BA regions, and the BOLD signal is averaged over all voxels within each region. The functional activity of each BA over time is shown here, where the green shaded bars indicate when the auditory stimulus was on. (C) A functional connectivity matrix of 84×84 BAs is generated by calculating pairwise correlations between all time series. The matrix axes are ordered BA01L, BA01R, BA02L, BA02R, etc. for left (L) and right (R) hemisphere BAs. The correlation between the signal in BA41L and BA22L, both involved in auditory processing, is highlighted as an example. A complete list of BAs used in this analysis is provided in Tables S1, S2, and S3. (D) The top 11.5% of edges of the functional connectivity matrix are set to 1 and all other edges are set to 0. In this example, keeping the top 11.5% of edges meant setting a correlation coefficient threshold of 0.74. The binarized matrix axes are ordered as in C. (E) Modularity is calculated using Newman's algorithm. The functional connectivity matrix entries are rearranged here to visualize the BA composition of each of four modules. To visualize the network, BA network node coordinates are extracted from AFNI, and edges are constructed from the binarized connectivity matrix. Intra-module connections and nodes are color-coded by module, and inter-module connections are black.

I. Modularity Results for Individual Subjects
FIG. S2. Differences in modularity among individual subjects. (A) Modularity for each of the 24 subjects as they listened to each auditory piece (Self, Chaplin, Cronkite, Bach, Gagaku, Xhosa). Subjects are ordered from low to high average modularity. Nine of the 24 subjects exhibit lower network modularity during all of the other auditory pieces than during Self, and eight exhibit higher modularity during all of the other auditory pieces. (B) Average modularity over all auditory pieces for each subject. Subjects are ordered as in A. Error bars are standard error. Daggers indicate p < .05 between that subject and at least one other subject; double asterisks indicate p < .01 between the specified subjects. Statistics are computed using two-sample, two-tailed t-tests.

II. Module Membership of Brodmann Areas
Tables S1, S2, and S3 show the super-module assignments for each BA during each auditory piece for the average networks created with all subjects, low modularity subjects, and high modularity subjects, respectively. BAs are ordered by their structure number. This method is generally able to place small modules that are highly isolated in one network into an appropriate super-module for cross-network analyses; however, NaN means that there were no links connected to that BA when the average network was created and/or a super-module could not be assigned. The BAs most often left unassigned due to high subject-to-subject variation in edges are the orbital frontal cortex (BA 11), the anterior cingulate (BA 33), and part of the hippocampus (BA 27).
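The appearance of NaN entries in an average network can be illustrated with a short sketch. This is an assumption-laden reconstruction — the exact group-averaging and super-module matching procedure is not specified here — with hypothetical names throughout:

```python
import numpy as np

def group_average_network(adjs, density=0.115):
    """Average per-subject binarized adjacency matrices (n_subjects x n x n),
    re-threshold at the same edge density, and flag regions left with no
    links -- these would be the BAs reported as NaN in Tables S1-S3."""
    mean_adj = np.mean(adjs, axis=0)          # fraction of subjects sharing each edge
    iu = np.triu_indices_from(mean_adj, k=1)
    n_keep = int(round(density * len(iu[0])))
    thresh = np.sort(mean_adj[iu])[-n_keep]
    avg = (mean_adj >= thresh).astype(int)
    np.fill_diagonal(avg, 0)
    isolated = np.flatnonzero(avg.sum(axis=0) == 0)  # no surviving links
    return avg, isolated
```

In this sketch a BA whose edges vary too much across subjects never crosses the group threshold, ends up with zero degree, and therefore cannot be placed in any module.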
TABLE S1: Modules for Average Networks of All Subjects
Brodmann Area Name Self Chaplin Gagaku Bach Cronkite Xhosa
BA01L  Primary somatosensory cortex  2 2 3 3 2 2
BA01R  Primary somatosensory cortex  2 2 3 3 2 2
BA02L  Secondary somatosensory cortex  2 2 3 2 2 2
BA02R  Secondary somatosensory cortex  2 2 3 3 2 2
BA03L  Tertiary somatosensory cortex  2 2 2 2 2 2
BA03R  Tertiary somatosensory cortex  2 2 3 3 2 2
BA04L  Primary motor cortex  2 2 2 2 2 2
BA04R  Primary motor cortex  2 2 3 3 2 2
BA05L  Superior parietal sulcus  2 2 2 2 2 2
BA05R  Superior parietal sulcus  2 2 2 2 2 2
BA06L  Supplementary motor area  2 3 3 3 3 2
BA06R  Supplementary motor area  1 3 3 3 3 2
BA07L  Superior parietal gyrus  2 2 2 2 2 2
BA07R  Superior parietal gyrus  2 2 2 2 2 2
BA08L  Pre-supplementary motor area  3 3 3 3 3 3
BA08R  Pre-supplementary motor area  3 3 3 3 3 3
BA09L  Dorsolateral prefrontal cortex  3 3 3 3 3 3
BA09R  Dorsolateral prefrontal cortex  3 3 3 3 3 3
BA10L  Fronto-parietal cortex  3 2 3 3 3 2
BA10R  Fronto-parietal cortex  3 3 3 3 3 2
BA11L  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA11R  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA13L  Insula  1 1 1 1 1 1
BA13R  Insula  1 1 1 1 1 1
BA17L  Primary visual cortex  2 2 2 2 2 2
BA17R  Primary visual cortex  2 2 2 2 2 2
BA18L  Secondary visual cortex  2 2 2 2 2 2
BA18R  Secondary visual cortex  2 2 2 2 2 2
BA19L  Cuneus  2 2 2 2 2 2
BA19R  Cuneus  2 2 2 2 2 2
BA20L  Inferior temporal gyrus  3 3 3 3 3 3
BA20R  Inferior temporal gyrus  3 3 3 3 3 3
BA21L  Medial temporal gyrus  3 3 3 3 3 3
BA21R  Medial temporal gyrus  3 1 3 3 3 1
BA22L  Superior temporal gyrus  1 1 1 1 1 1
BA22R  Superior temporal gyrus  1 1 1 1 1 1
BA23L  Posterior cingulate cortex 1  2 2 2 2 2 2
BA23R  Posterior cingulate cortex 1  2 2 2 2 2 2
BA24L  Dorsal anterior cingulate cortex  2 3 3 3 3 2
BA24R  Dorsal anterior cingulate cortex  2 3 3 3 3 3
BA25L  Subgenual anterior cingulate cortex  3 3 3 3 3 3
BA25R  Subgenual anterior cingulate cortex  3 3 3 3 3 3
BA27L  Parahippocampal gyrus 1  NaN 2 2 NaN 2 NaN
BA27R  Parahippocampal gyrus 1  NaN NaN NaN NaN NaN NaN
BA28L  Hippocampal area 1  3 3 3 3 3 3
BA28R  Hippocampal area 1  3 3 3 3 3 3
BA29L  Retrosplenial cortex 1  2 2 2 2 2 2
BA29R  Retrosplenial cortex 1  2 2 2 2 2 2
BA30L  Retrosplenial cortex 2  2 2 2 2 2 2
BA30R  Retrosplenial cortex 2  2 2 2 2 2 2
BA31L  Posterior cingulate cortex 2  2 2 2 2 2 2
BA31R  Posterior cingulate cortex 2  2 2 2 2 2 2
BA32L  Pregenual anterior cingulate cortex  3 3 3 3 3 3
BA32R  Pregenual anterior cingulate cortex  3 3 3 3 3 3
BA33L  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA33R  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA34L  Hippocampus  3 3 3 3 3 3
BA34R  Hippocampus  3 3 3 3 3 3
BA35L  Hippocampal area 2  3 3 3 3 3 3
BA35R  Hippocampal area 2  3 3 3 3 3 3
BA36L  Parahippocampal gyrus 2  3 3 3 3 3 3
BA36R  Parahippocampal gyrus 2  3 3 3 3 2 3
BA37L  Occipital-temporal cortex  2 2 2 2 2 2
BA37R  Occipital-temporal cortex  2 2 2 2 2 2
BA38L  Temporal pole  3 3 3 3 3 3
BA38R  Temporal pole  3 3 3 3 3 3
BA39L  Angular gyrus  2 2 2 2 2 2
BA39R  Angular gyrus  2 2 2 2 2 2
BA40L  Intra-parietal sulcus  2 2 3 2 2 2
BA40R  Intra-parietal sulcus  2 2 3 2 2 2
BA41L  Primary auditory cortex  1 1 1 1 1 1
BA41R  Primary auditory cortex  1 1 1 1 1 1
BA42L  Secondary auditory cortex  1 1 1 1 1 1
BA42R  Secondary auditory cortex  1 1 1 1 1 1
BA43L  Postcentral gyrus  1 1 1 1 1 1
BA43R  Postcentral gyrus  1 1 1 1 1 1
BA44L  Opercular part of inferior frontal gyrus  1 1 3 1 3 1
BA44R  Opercular part of inferior frontal gyrus  1 1 1 1 3 1
BA45L  Inferior frontal gyrus  3 1 3 3 3 1
BA45R  Inferior frontal gyrus  3 1 1 3 3 1
BA46L  Medial prefrontal cortex  3 3 3 3 3 3
BA46R  Medial prefrontal cortex  3 3 3 3 3 3
BA47L  Ventro-lateral prefrontal cortex  3 3 3 3 3 3
BA47R  Ventro-lateral prefrontal cortex  3 3 3 3 3 3

TABLE S2: Modules for Low Modularity Group
Brodmann Area Name Self Chaplin Gagaku Bach Cronkite Xhosa
BA01L  Primary somatosensory cortex  4 4 4 4 4 4
BA01R  Primary somatosensory cortex  4 4 4 4 4 4
BA02L  Secondary somatosensory cortex  4 4 4 4 4 4
BA02R  Secondary somatosensory cortex  4 4 4 4 4 4
BA03L  Tertiary somatosensory cortex  4 4 4 4 4 4
BA03R  Tertiary somatosensory cortex  4 4 4 4 4 4
BA04L  Primary motor cortex  4 4 2 4 2 4
BA04R  Primary motor cortex  4 4 4 4 4 4
BA05L  Superior parietal sulcus  2 4 2 2 2 4
BA05R  Superior parietal sulcus  4 4 4 4 4 4
BA06L  Supplementary motor area  4 4 4 4 3 4
BA06R  Supplementary motor area  4 4 4 4 3 4
BA07L  Superior parietal gyrus  2 2 2 2 2 2
BA07R  Superior parietal gyrus  2 2 2 2 2 2
BA08L  Pre-supplementary motor area  3 4 3 3 3 4
BA08R  Pre-supplementary motor area  3 4 3 3 3 4
BA09L  Dorsolateral prefrontal cortex  3 4 3 3 3 4
BA09R  Dorsolateral prefrontal cortex  3 4 3 3 3 4
BA10L  Fronto-parietal cortex  3 2 4 3 3 3
BA10R  Fronto-parietal cortex  3 2 4 3 3 3
BA11L  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA11R  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA13L  Insula  1 1 1 1 1 1
BA13R  Insula  1 1 1 1 1 1
BA17L  Primary visual cortex  2 2 2 2 2 2
BA17R  Primary visual cortex  2 2 2 2 2 2
BA18L  Secondary visual cortex  2 2 2 2 2 2
BA18R  Secondary visual cortex  2 2 2 2 2 2
BA19L  Cuneus  2 2 2 2 2 2
BA19R  Cuneus  2 2 2 2 2 2
BA20L  Inferior temporal gyrus  5 5 3 3 NaN 5
BA20R  Inferior temporal gyrus  5 4 3 3 NaN 2
BA21L  Medial temporal gyrus  1 5 3 3 NaN 5
BA21R  Medial temporal gyrus  1 1 3 3 NaN NaN
BA22L  Superior temporal gyrus  1 1 1 1 1 1
BA22R  Superior temporal gyrus  1 1 1 1 1 1
BA23L  Posterior cingulate cortex 1  2 2 2 2 2 2
BA23R  Posterior cingulate cortex 1  2 2 2 2 2 2
BA24L  Dorsal anterior cingulate cortex  4 3 3 3 3 3
BA24R  Dorsal anterior cingulate cortex  4 3 3 3 3 3
BA25L  Subgenual anterior cingulate cortex  5 5 5 3 NaN 5
BA25R  Subgenual anterior cingulate cortex  5 5 5 3 NaN 5
BA27L  Parahippocampal gyrus 1  2 2 NaN 2 NaN NaN
BA27R  Parahippocampal gyrus 1  NaN NaN NaN NaN NaN NaN
BA28L  Hippocampal area 1  5 5 5 3 NaN 5
BA28R  Hippocampal area 1  5 5 NaN 3 NaN 5
BA29L  Retrosplenial cortex 1  2 2 2 2 2 2
BA29R  Retrosplenial cortex 1  2 2 2 2 2 2
BA30L  Retrosplenial cortex 2  2 2 2 2 2 2
BA30R  Retrosplenial cortex 2  2 2 2 2 2 2
BA31L  Posterior cingulate cortex 2  2 2 2 2 2 2
BA31R  Posterior cingulate cortex 2  2 2 2 2 2 2
BA32L  Pregenual anterior cingulate cortex  4 3 3 3 3 3
BA32R  Pregenual anterior cingulate cortex  4 3 3 3 3 3
BA33L  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA33R  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA34L  Hippocampus  NaN 5 NaN 3 NaN NaN
BA34R  Hippocampus  5 5 5 3 NaN 5
BA35L  Hippocampal area 2  5 5 5 3 NaN 5
BA35R  Hippocampal area 2  5 4 3 3 NaN 5
BA36L  Parahippocampal gyrus 2  5 5 5 3 NaN 5
BA36R  Parahippocampal gyrus 2  5 4 3 2 NaN 2
BA37L  Occipital-temporal cortex  2 2 2 2 2 2
BA37R  Occipital-temporal cortex  2 4 2 2 2 2
BA38L  Temporal pole  5 NaN 5 3 5 5
BA38R  Temporal pole  5 5 5 3 5 5
BA39L  Angular gyrus  2 2 2 2 2 2
BA39R  Angular gyrus  2 2 2 2 2 2
BA40L  Intra-parietal sulcus  4 4 4 4 2 4
BA40R  Intra-parietal sulcus  4 4 4 4 4 4
BA41L  Primary auditory cortex  1 1 1 1 1 1
BA41R  Primary auditory cortex  1 1 1 1 1 1
BA42L  Secondary auditory cortex  1 1 1 1 1 1
BA42R  Secondary auditory cortex  1 1 1 1 1 1
BA43L  Postcentral gyrus  1 1 4 1 1 1
BA43R  Postcentral gyrus  1 1 4 1 1 1
BA44L  Opercular part of inferior frontal gyrus  1 1 1 3 1 1
BA44R  Opercular part of inferior frontal gyrus  1 1 1 3 NaN 1
BA45L  Inferior frontal gyrus  1 1 1 3 1 1
BA45R  Inferior frontal gyrus  1 1 1 3 NaN 1
BA46L  Medial prefrontal cortex  3 4 3 3 3 4
BA46R  Medial prefrontal cortex  1 4 3 3 3 NaN
BA47L  Ventro-lateral prefrontal cortex  4 5 5 3 5 5
BA47R  Ventro-lateral prefrontal cortex  4 5 5 3 5 5

TABLE S3: Modules for High Modularity Group
Brodmann Area Name Self Chaplin Gagaku Bach Cronkite Xhosa
BA01L  Primary somatosensory cortex  4 4 4 4 4 4
BA01R  Primary somatosensory cortex  4 4 4 4 4 4
BA02L  Secondary somatosensory cortex  2 4 4 4 4 4
BA02R  Secondary somatosensory cortex  4 4 4 4 4 4
BA03L  Tertiary somatosensory cortex  2 4 4 2 4 4
BA03R  Tertiary somatosensory cortex  4 4 4 4 4 4
BA04L  Primary motor cortex  2 4 4 4 4 4
BA04R  Primary motor cortex  4 4 4 4 4 4
BA05L  Superior parietal sulcus  2 4 4 2 2 4
BA05R  Superior parietal sulcus  2 4 4 2 2 4
BA06L  Supplementary motor area  3 4 4 4 3 4
BA06R  Supplementary motor area  3 4 4 4 3 3
BA07L  Superior parietal gyrus  2 2 2 2 2 2
BA07R  Superior parietal gyrus  2 2 2 2 2 2
BA08L  Pre-supplementary motor area  3 3 3 3 3 3
BA08R  Pre-supplementary motor area  3 3 5 3 3 3
BA09L  Dorsolateral prefrontal cortex  3 3 3 3 3 3
BA09R  Dorsolateral prefrontal cortex  3 3 3 3 3 3
BA10L  Fronto-parietal cortex  3 3 3 3 3 5
BA10R  Fronto-parietal cortex  3 3 3 3 3 5
BA11L  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA11R  Orbital frontal cortex  NaN NaN NaN NaN NaN NaN
BA13L  Insula  1 NaN 1 1 1 1
BA13R  Insula  1 NaN 1 1 1 1
BA17L  Primary visual cortex  2 2 2 2 2 2
BA17R  Primary visual cortex  2 2 2 2 2 2
BA18L  Secondary visual cortex  2 2 2 2 2 2
BA18R  Secondary visual cortex  2 2 2 2 2 2
BA19L  Cuneus  2 2 2 2 2 2
BA19R  Cuneus  2 2 2 2 2 2
BA20L  Inferior temporal gyrus  5 NaN 5 5 NaN 1
BA20R  Inferior temporal gyrus  5 NaN 5 5 NaN NaN
BA21L  Medial temporal gyrus  5 1 5 5 1 1
BA21R  Medial temporal gyrus  3 1 5 5 1 1
BA22L  Superior temporal gyrus  1 1 1 1 1 1
BA22R  Superior temporal gyrus  1 1 1 1 1 1
BA23L  Posterior cingulate cortex 1  2 2 2 2 2 2
BA23R  Posterior cingulate cortex 1  2 2 2 2 2 2
BA24L  Dorsal anterior cingulate cortex  3 3 3 4 3 3
BA24R  Dorsal anterior cingulate cortex  3 3 3 4 3 3
BA25L  Subgenual anterior cingulate cortex  5 NaN 5 5 NaN 5
BA25R  Subgenual anterior cingulate cortex  5 NaN 5 5 NaN 5
BA27L  Parahippocampal gyrus 1  NaN NaN NaN NaN NaN NaN
BA27R  Parahippocampal gyrus 1  NaN NaN NaN NaN NaN NaN
BA28L  Hippocampal area 1  5 5 5 5 3 5
BA28R  Hippocampal area 1  5 5 5 5 NaN NaN
BA29L  Retrosplenial cortex 1  2 2 2 2 2 2
BA29R  Retrosplenial cortex 1  2 2 2 2 2 2
BA30L  Retrosplenial cortex 2  2 2 2 2 2 2
BA30R  Retrosplenial cortex 2  2 2 2 2 2 2
BA31L  Posterior cingulate cortex 2  2 2 2 2 2 2
BA31R  Posterior cingulate cortex 2  2 2 2 2 2 2
BA32L  Pregenual anterior cingulate cortex  3 3 3 3 3 3
BA32R  Pregenual anterior cingulate cortex  3 3 3 3 3 3
BA33L  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA33R  Rostral anterior cingulate cortex  NaN NaN NaN NaN NaN NaN
BA34L  Hippocampus  5 5 NaN 5 NaN 5
BA34R  Hippocampus  5 5 5 5 NaN 5
BA35L  Hippocampal area 2  5 5 5 5 NaN 5
BA35R  Hippocampal area 2  5 NaN 5 5 NaN 5
BA36L  Parahippocampal gyrus 2  5 5 5 5 3 5
BA36R  Parahippocampal gyrus 2  5 5 5 5 2 5
BA37L  Occipital-temporal cortex  2 2 2 2 2 2
BA37R  Occipital-temporal cortex  2 2 2 2 2 2
BA38L  Temporal pole  5 5 5 5 1 5
BA38R  Temporal pole  5 5 5 5 3 5
BA39L  Angular gyrus  2 2 2 2 2 2
BA39R  Angular gyrus  2 2 2 2 2 2
BA40L  Intra-parietal sulcus  2 4 4 4 2 4
BA40R  Intra-parietal sulcus  2 4 4 4 2 4
BA41L  Primary auditory cortex  1 1 1 1 1 1
BA41R  Primary auditory cortex  1 1 1 1 1 1
BA42L  Secondary auditory cortex  1 1 1 1 1 1
BA42R  Secondary auditory cortex  1 1 1 1 1 1
BA43L  Postcentral gyrus  1 4 1 1 1 1
BA43R  Postcentral gyrus  1 1 1 1 1 1
BA44L  Opercular part of inferior frontal gyrus  1 NaN 1 3 1 NaN
BA44R  Opercular part of inferior frontal gyrus  1 NaN 1 3 1 NaN
BA45L  Inferior frontal gyrus  3 3 1 3 1 1
BA45R  Inferior frontal gyrus  3 NaN 1 3 NaN 1
BA46L  Medial prefrontal cortex  3 3 1 3 NaN 3
BA46R  Medial prefrontal cortex  3 3 NaN 3 NaN 3
BA47L  Ventro-lateral prefrontal cortex  3 5 5 3 3 5
BA47R  Ventro-lateral prefrontal cortex  3 5 5 3 3 5

[1] C. Karmonik, A. Brandt, J. R. Anderson, F. Brooks, J. Lytle, E. Silverman, and J. T. Frazier, Brain Connectivity, 632 (2016).
[2] C. Karmonik, A. Brandt, S. Elias, J. Townsend, E. Silverman, Z. Shi, and J. T. Frazier, International Journal of Computer Assisted Radiology and Surgery, 703 (2020).
[3] R. W. Cox, Neuroimage, 743 (2012).
[4] Q. Yue, R. C. Martin, S. Fischer-Baum, A. I. Ramos-Nuñez, F. Ye, and M. W. Deem, Journal of Cognitive Neuroscience, 1532 (2017).