Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ari E. Kahn is active.

Publications


Featured research published by Ari E. Kahn.


Nature Communications | 2015

Controllability of structural brain networks

Shi Gu; Fabio Pasqualetti; Matthew Cieslak; Qawi K. Telesford; Alfred B. Yu; Ari E. Kahn; John D. Medaglia; Jean M. Vettel; Michael B. Miller; Scott T. Grafton; Danielle S. Bassett

Cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behaviour. However, fundamental principles constraining these dynamic network processes have remained elusive. Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure. Our results suggest that densely connected areas, particularly in the default mode system, facilitate the movement of the brain to many easily reachable states. Weakly connected areas, particularly in cognitive control systems, facilitate the movement of the brain to difficult-to-reach states. Areas located on the boundary between network communities, particularly in attentional control systems, facilitate the integration or segregation of diverse cognitive systems. Our results suggest that structural network differences between cognitive circuits dictate their distinct roles in controlling trajectories of brain network function.
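
The framework above models the brain as a linear dynamical system, x(t+1) = A x(t) + B u(t), where A is the structural connectivity matrix and B selects the regions receiving control input. As a hedged sketch of how a node-level average controllability score can be computed (this is not the authors' code; the spectral normalization and single-node control sets are illustrative assumptions), the snippet below takes the trace of the infinite-horizon controllability Gramian for each region:

```python
# A minimal sketch (not the authors' pipeline) of node-level average
# controllability for x(t+1) = A x(t) + B u(t), with A a structural
# connectivity matrix. Normalizing by the largest singular value is an
# illustrative choice that keeps the system stable.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(adjacency: np.ndarray) -> np.ndarray:
    """Return one average-controllability value per node (Gramian trace)."""
    n = adjacency.shape[0]
    a = adjacency / (1 + np.linalg.svd(adjacency, compute_uv=False)[0])
    values = np.empty(n)
    for node in range(n):
        b = np.zeros((n, 1))
        b[node, 0] = 1.0  # inject control input at a single region
        # Solve W = A W A^T + B B^T for the controllability Gramian W.
        gramian = solve_discrete_lyapunov(a, b @ b.T)
        values[node] = np.trace(gramian)
    return values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.random((20, 20))
    structural = (w + w.T) / 2  # toy symmetric "connectome"
    np.fill_diagonal(structural, 0)
    print(average_controllability(structural).round(3))
```

Under this convention, regions with larger Gramian traces can move the system into many nearby states with little input energy, which is the sense in which densely connected areas are described above as facilitating movement to easily reachable states.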


Nature Communications | 2017

Developmental increases in white matter network controllability support a growing diversity of brain dynamics

Evelyn Tang; Chad Giusti; Graham L. Baum; Shi Gu; Eli Pollock; Ari E. Kahn; David R. Roalf; Tyler M. Moore; Kosha Ruparel; Ruben C. Gur; Raquel E. Gur; Theodore D. Satterthwaite; Danielle S. Bassett

As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. Here we use a network representation of diffusion imaging data from 882 youth ages 8–22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in children versus older youth. We demonstrate that our results cannot be explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture. Human brain development is characterized by an increased control of neural activity, but how this happens is not well understood. Here, the authors show that white matter connectivity in 882 youth, aged 8–22, becomes increasingly specialized locally and is optimized for network control.


Journal of Computational Neuroscience | 2018

Cliques and cavities in the human connectome

Ann E. Sizemore; Chad Giusti; Ari E. Kahn; Jean M. Vettel; Richard F. Betzel; Danielle S. Bassett

Encoding brain regions and their connections as a network of nodes and edges captures many of the possible paths along which information can be transmitted as humans process and perform complex behaviors. Because cognitive processes involve large, distributed networks of brain areas, principled examinations of multi-node routes within larger connection patterns can offer fundamental insights into the complexities of brain function. Here, we investigate both densely connected groups of nodes that could perform local computations as well as larger patterns of interactions that would allow for parallel processing. Finding such structures necessitates that we move from considering exclusively pairwise interactions to capturing higher order relations, concepts naturally expressed in the language of algebraic topology. These tools can be used to study mesoscale network structures that arise from the arrangement of densely connected substructures called cliques in otherwise sparsely connected brain networks. We detect cliques (all-to-all connected sets of brain regions) in the average structural connectomes of 8 healthy adults scanned in triplicate and discover the presence of more large cliques than expected in null networks constructed via wiring minimization, providing an architecture through which the brain network can perform rapid, local processing. We then locate topological cavities of different dimensions, around which information may flow in either diverging or converging patterns. These cavities exist consistently across subjects, differ from those observed in null model networks, and, importantly, link regions of early and late evolutionary origin in long loops, underscoring their unique role in controlling brain function. These results offer a first demonstration that techniques from algebraic topology offer a novel perspective on structural connectomics, highlighting loop-like paths as crucial features in the human brain’s structural architecture.
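
As an illustration of the clique-detection step (a hedged sketch rather than the paper's pipeline; the random connectome and the 80th-percentile edge threshold are arbitrary assumptions), the snippet below binarizes a weighted connectivity matrix and counts maximal all-to-all connected sets of regions with networkx:

```python
# Hedged illustration: count maximal cliques in a thresholded toy connectome.
from collections import Counter

import networkx as nx
import numpy as np

def clique_size_counts(weights: np.ndarray, percentile: float = 80.0) -> Counter:
    """Binarize a weighted connectome and count maximal cliques by size."""
    cutoff = np.percentile(weights[weights > 0], percentile)
    graph = nx.from_numpy_array((weights >= cutoff).astype(int))
    return Counter(len(clique) for clique in nx.find_cliques(graph))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = rng.random((30, 30))
    w = (w + w.T) / 2  # symmetric toy "connectome"
    np.fill_diagonal(w, 0)
    print(sorted(clique_size_counts(w).items()))
```

Comparing such counts against null networks, as the abstract describes, is what distinguishes genuinely over-represented cliques from those expected by chance.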


Nature Physics | 2017

Role of graph architecture in controlling dynamical networks with applications to neural systems

Jason Z. Kim; Jonathan M. Soffer; Ari E. Kahn; Jean M. Vettel; Fabio Pasqualetti; Danielle S. Bassett

Networked systems display complex patterns of interactions between components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding and harnessing the relationship between network topology and system behavior. Here, we use linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from Drosophila, mouse, and human brains. We use these principles to suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs while remaining robust to perturbations, and to perform clinically accessible targeted manipulation of the brain's control performance by removing single edges in the network. Generally, our results ground the expectation of a control system's behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.
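
A minimal sketch of the control-energy quantity involved (under standard linear-control assumptions, not the paper's closed-form expressions; the toy network, horizon, and driver set are assumptions): for x(t+1) = A x(t) + B u(t), the minimum energy needed to reach a target state x_f from the origin in T steps is x_f^T W_T^+ x_f, where W_T is the finite-horizon controllability Gramian built from the driver-node matrix B.

```python
# Hedged sketch: minimum energy to reach a target state from the origin in
# T steps for x(t+1) = A x(t) + B u(t), via the finite-horizon Gramian.
import numpy as np

def finite_horizon_gramian(a: np.ndarray, b: np.ndarray, horizon: int) -> np.ndarray:
    """W_T = sum_{t=0}^{T-1} A^t B B^T (A^T)^t."""
    n = a.shape[0]
    gramian = np.zeros((n, n))
    a_power = np.eye(n)
    for _ in range(horizon):
        gramian += a_power @ b @ b.T @ a_power.T
        a_power = a_power @ a
    return gramian

def minimum_control_energy(a, b, target, horizon=10):
    """Energy x_f^T W_T^+ x_f of the cheapest input sequence reaching `target`."""
    return float(target @ np.linalg.pinv(finite_horizon_gramian(a, b, horizon)) @ target)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    adj = rng.random((10, 10))
    adj = (adj + adj.T) / 2
    np.fill_diagonal(adj, 0)
    a = adj / (1 + np.linalg.svd(adj, compute_uv=False)[0])  # stabilized dynamics
    drivers = np.zeros((10, 2))
    drivers[0, 0] = drivers[3, 1] = 1.0  # two hypothetical driver nodes
    print(minimum_control_energy(a, drivers, target=np.ones(10)))
```

The dependence of W_T on the edges linking driver to non-driver nodes is the relationship that the paper's closed-form results make explicit.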


Cerebral Cortex | 2017

Structural Pathways Supporting Swift Acquisition of New Visuomotor Skills

Ari E. Kahn; Marcelo G. Mattar; Jean M. Vettel; Nicholas F. Wymbs; Scott T. Grafton; Danielle S. Bassett

Human skill learning requires fine‐scale coordination of distributed networks of brain regions linked by white matter tracts to allow for effective information transmission. Yet how individual differences in these anatomical pathways may impact individual differences in learning remains far from understood. Here, we test the hypothesis that individual differences in structural organization of networks supporting task performance predict individual differences in the rate at which humans learn a visuomotor skill. Over the course of 6 weeks, 20 healthy adult subjects practiced a discrete sequence production task, learning a sequence of finger movements based on discrete visual cues. We collected structural imaging data, and using deterministic tractography generated structural networks for each participant to identify streamlines connecting cortical and subcortical brain regions. We observed that increased white matter connectivity linking early visual regions was associated with a faster learning rate. Moreover, the strength of multiedge paths between motor and visual modules was also correlated with learning rate, supporting the potential role of extended sets of polysynaptic connections in successful skill acquisition. Our results demonstrate that estimates of anatomical connectivity from white matter microstructure can be used to predict future individual differences in the capacity to learn a new motor‐visual skill, and that these predictions are supported both by direct connectivity in visual cortex and indirect connectivity between visual cortex and motor cortex.
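
The "multiedge path" idea can be illustrated with a short sketch (toy data only, not the study's tractography pipeline; the module assignments, walk length, and placeholder learning rates are assumptions): the total weight of length-2 walks between a visual and a motor module can be read off powers of the weighted adjacency matrix and then correlated with a behavioural learning-rate measure across subjects.

```python
# Hedged illustration: strength of length-2 paths between two modules,
# correlated with a placeholder learning-rate measure across toy subjects.
import numpy as np
from scipy.stats import pearsonr

def module_path_strength(weights: np.ndarray, module_a, module_b, length: int = 2) -> float:
    """Total weight of walks of the given length from module_a to module_b."""
    walk_weights = np.linalg.matrix_power(weights, length)
    return float(walk_weights[np.ix_(module_a, module_b)].sum())

def toy_connectome(rng: np.random.Generator, n: int = 10) -> np.ndarray:
    w = rng.random((n, n))
    w = (w + w.T) / 2
    np.fill_diagonal(w, 0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    visual, motor = [0, 1, 2], [7, 8, 9]  # hypothetical module assignments
    subjects = [toy_connectome(rng) for _ in range(20)]
    strengths = [module_path_strength(w, visual, motor) for w in subjects]
    learning_rates = rng.random(20)  # placeholder behavioural measure
    r, p = pearsonr(strengths, learning_rates)
    print(f"r = {r:.2f}, p = {p:.3f}")
```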


Scientific Reports | 2017

Process reveals structure: How a network is traversed mediates expectations about its architecture

Elisabeth A. Karuza; Ari E. Kahn; Sharon L. Thompson-Schill; Danielle S. Bassett

Network science has emerged as a powerful tool through which we can study the higher-order architectural properties of the world around us. How human learners exploit this information remains an essential question. Here, we focus on the temporal constraints that govern such a process. Participants viewed a continuous sequence of images generated by three distinct walks on a modular network. Walks varied along two critical dimensions: their predictability and the density with which they sampled from communities of images. Learners exposed to walks that richly sampled from each community exhibited a sharp increase in processing time upon entry into a new community. This effect was eliminated in a highly regular walk that sampled exhaustively from images in short, successive cycles (i.e., that increasingly minimized uncertainty about the nature of upcoming stimuli). These results demonstrate that temporal organization plays an essential role in learners’ sensitivity to the network architecture underlying sensory input.
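
A hedged sketch of the kind of stimulus stream involved (the community sizes, edge probabilities, and walk length are illustrative assumptions, not the study's stimuli): generate a random walk on a toy modular graph and flag the steps at which the walk crosses from one community of images into another, which is where the processing-time increase described above was observed.

```python
# Hedged sketch: random walk on a toy modular graph, flagging community crossings.
import networkx as nx
import numpy as np

def modular_walk(n_communities=3, size=5, steps=100, seed=0):
    rng = np.random.default_rng(seed)
    probs = [[0.9 if i == j else 0.1 for j in range(n_communities)]
             for i in range(n_communities)]
    graph = nx.stochastic_block_model([size] * n_communities, probs, seed=seed)
    blocks = graph.graph["partition"]  # list of node sets, one per community
    community = {node: idx for idx, block in enumerate(blocks) for node in block}
    walk = [int(rng.integers(graph.number_of_nodes()))]
    for _ in range(steps - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:  # guard against a rare isolated node
            neighbors = list(graph.nodes)
        walk.append(int(rng.choice(neighbors)))
    crossings = [t for t in range(1, len(walk))
                 if community[walk[t]] != community[walk[t - 1]]]
    return walk, crossings

if __name__ == "__main__":
    walk, crossings = modular_walk()
    print(f"{len(crossings)} community transitions in a {len(walk)}-step walk")
```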


Nature Human Behaviour | 2018

Network constraints on learnability of probabilistic motor sequences

Ari E. Kahn; Elisabeth A. Karuza; Jean M. Vettel; Danielle S. Bassett

Human learners are adept at grasping the complex relationships underlying incoming sequential input. In the present work, we formalize complex relationships as graph structures derived from temporal associations in motor sequences. Next, we explore the extent to which learners are sensitive to key variations in the topological properties inherent to those graph structures. Participants performed a probabilistic motor sequence task in which the order of button presses was determined by the traversal of graphs with modular, lattice-like or random organization. Graph nodes each represented a unique button press, and edges represented a transition between button presses. The results indicate that learning, indexed here by participants’ response times, was strongly mediated by the graph’s mesoscale organization, with modular graphs being associated with shorter response times than random and lattice graphs. Moreover, variations in a node’s number of connections (degree) and a node’s role in mediating long-distance communication (betweenness centrality) impacted graph learning, even after accounting for the level of practice on that node. These results demonstrate that the graph architecture underlying temporal sequences of stimuli fundamentally constrains learning, and moreover that tools from network science provide a valuable framework for assessing how learners encode complex, temporally structured information. Kahn et al. show that learners capitalize on higher-order topological properties when they learn a probabilistic motor sequence based on a network traversal.
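
The three traversal conditions can be mimicked with standard graph generators (an illustrative sketch only: the 15-node, roughly degree-4 parameters and the key labels are assumptions echoing the description above, not the study's exact stimuli). Each node maps to a distinct button press, and a walk over the chosen graph yields a probabilistic motor sequence:

```python
# Hedged sketch: modular, lattice-like, and random graphs of matched size,
# each traversed to produce a sequence of "button press" labels.
import networkx as nx
import numpy as np

def make_graph(kind: str, seed: int = 0) -> nx.Graph:
    if kind == "modular":
        return nx.planted_partition_graph(3, 5, p_in=1.0, p_out=0.05, seed=seed)
    if kind == "lattice":
        return nx.watts_strogatz_graph(15, 4, p=0.0, seed=seed)  # ring lattice
    if kind == "random":
        return nx.random_regular_graph(4, 15, seed=seed)
    raise ValueError(f"unknown graph kind: {kind}")

def button_sequence(kind: str, steps: int = 30, seed: int = 1) -> list:
    rng = np.random.default_rng(seed)
    graph = make_graph(kind, seed)
    node = int(rng.integers(graph.number_of_nodes()))
    presses = []
    for _ in range(steps):
        node = int(rng.choice(list(graph.neighbors(node))))
        presses.append(f"key{node}")  # each node corresponds to a unique response
    return presses

if __name__ == "__main__":
    for kind in ("modular", "lattice", "random"):
        print(kind, button_sequence(kind)[:10])
```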


bioRxiv | 2018

Predictive control of electrophysiological network architecture using direct, single-node neurostimulation in humans

Ankit N. Khambhati; Ari E. Kahn; Julia Costantini; Youssef Ezzyat; Ethan A. Solomon; Robert E. Gross; Barbara C. Jobst; Sameer A. Sheth; Kareem A. Zaghloul; Gregory A. Worrell; Sarah Seger; Bradley Lega; Shennan Weiss; Michael R. Sperling; Richard Gorniak; Sandhitsu R. Das; Joel Stein; Daniel S. Rizzuto; Michael J. Kahana; Timothy H. Lucas; Kathryn A. Davis; Joseph I. Tracy; Danielle S. Bassett

Chronically implantable neurostimulation devices are becoming a clinically viable option for treating patients with neurological disease and psychiatric disorders. Neurostimulation offers the ability to probe and manipulate distributed networks of interacting brain areas in dysfunctional circuits. Here, we use tools from network control theory to examine the dynamic reconfiguration of functionally interacting neuronal ensembles during targeted neurostimulation of cortical and subcortical brain structures. By integrating multi-modal intracranial recordings and diffusion tensor imaging from patients with drug-resistant epilepsy, we test hypothesized structural and functional rules that predict altered patterns of synchronized local field potentials. We demonstrate the ability to predictably reconfigure functional interactions depending on stimulation strength and location. Stimulation of areas with structurally weak connections largely modulates the functional hubness of downstream areas and concurrently propels the brain towards more difficult-to-reach dynamical states. By using focal perturbations to bridge large-scale structure, function, and markers of behavior, our findings suggest that stimulation may be tuned to influence different scales of network interactions driving cognition.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2018

Individual Differences in Learning Social and Non-Social Network Structures

Steven Tompson; Ari E. Kahn; Emily B. Falk; Jean M. Vettel; Danielle S. Bassett

How do people acquire knowledge about which individuals belong to different cliques or communities? And to what extent does this learning process differ from the process of learning higher-order information about complex associations between nonsocial bits of information? Here, the authors use a paradigm in which the order of stimulus presentation forms temporal associations between the stimuli, collectively constituting a complex network. They examined individual differences in the ability to learn community structure of networks composed of social versus nonsocial stimuli. Although participants were able to learn community structure of both social and nonsocial networks, their performance in social network learning was uncorrelated with their performance in nonsocial network learning. In addition, social traits, including social orientation and perspective-taking, uniquely predicted the learning of social community structure but not the learning of nonsocial community structure. Taken together, the results suggest that the process of learning higher-order community structure in social networks is partially distinct from the process of learning higher-order community structure in nonsocial networks. The study design provides a promising approach to identify neurophysiological drivers of social network versus nonsocial network learning, extending knowledge about the impact of individual differences on these learning processes.


bioRxiv | 2018

White Matter Network Architecture Guides Direct Electrical Stimulation Through Optimal State Transitions

Jennifer Stiso; Ankit N. Khambhati; Tommaso Menara; Ari E. Kahn; Joel Stein; Sandhitsu R. Das; Richard Gorniak; Joseph I. Tracy; Brian Litt; Kathryn A. Davis; Fabio Pasqualetti; Timothy H. Lucas; Danielle S. Bassett

Electrical brain stimulation is currently being investigated as a potential therapy for neurological disease. However, opportunities to optimize and personalize such therapies are challenged by the fact that the beneficial impact (and potential side effects) of focal stimulation on both neighboring and distant regions is not well understood. Here, we use network control theory to build a formal model of brain network function that makes explicit predictions about how stimulation spreads through the brain’s white matter network and influences large-scale dynamics. We test these predictions using combined electrocorticography (ECoG) and diffusion weighted imaging (DWI) data from patients with medically refractory epilepsy undergoing evaluation for resective surgery, and who volunteered to participate in an extensive stimulation regimen. We posit a specific model-based manner in which white matter tracts constrain stimulation, defining its capacity to drive the brain to new states, including states associated with successful memory encoding. In a first validation of our model, we find that the true pattern of white matter tracts can be used to more accurately predict the state transitions induced by direct electrical stimulation than the artificial patterns of a topological or spatial network null model. We then use a targeted optimal control framework to solve for the optimal energy required to drive the brain to a given state. We show that, intuitively, our model predicts larger energy requirements when starting from states that are farther away from a target memory state. We then suggest testable hypotheses about which structural properties will lead to efficient stimulation for improving memory based on energy requirements. We show that the strength and homogeneity of edges between controlled and uncontrolled nodes, as well as the persistent modal controllability of the stimulated region, predict energy requirements. Our work demonstrates that individual white matter architecture plays a vital role in guiding the dynamics of direct electrical stimulation, more generally offering empirical support for the utility of network control theoretic models of brain response to stimulation.
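
The energy argument above can be made concrete with the same linear model used in the earlier control-theory sketches (a hedged illustration, not the paper's targeted optimal control formulation; the toy network, horizon, and stimulated nodes are assumptions): the minimum energy to move from an initial state x0 to a target xf in T steps is the Gramian-weighted norm of the part of xf that free evolution from x0 does not already reach, so start states farther from the target generally demand more input energy.

```python
# Hedged sketch: minimum transition energy from x0 to xf for x(t+1)=Ax(t)+Bu(t).
import numpy as np

def transition_energy(a: np.ndarray, b: np.ndarray, x0: np.ndarray,
                      xf: np.ndarray, horizon: int = 8) -> float:
    n = a.shape[0]
    gramian = np.zeros((n, n))
    a_power = np.eye(n)
    for _ in range(horizon):
        gramian += a_power @ b @ b.T @ a_power.T
        a_power = a_power @ a  # after the loop: a_power == A^horizon
    residual = xf - a_power @ x0  # part of xf not reached by free evolution
    return float(residual @ np.linalg.pinv(gramian) @ residual)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    adj = rng.random((12, 12))
    adj = (adj + adj.T) / 2
    np.fill_diagonal(adj, 0)
    a = adj / (1 + np.linalg.svd(adj, compute_uv=False)[0])  # stabilized dynamics
    b = np.eye(12)[:, :3]  # three hypothetical stimulated "electrode" nodes
    xf = np.ones(12)
    for scale in (0.9, 0.5, 0.0):  # start states progressively farther from xf
        print(scale, round(transition_energy(a, b, scale * xf, xf), 3))
```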

Collaboration


Dive into Ari E. Kahn's collaborations.

Top Co-Authors

Chad Giusti, University of Pennsylvania
David R. Roalf, University of Pennsylvania
Graham L. Baum, University of Pennsylvania
Kosha Ruparel, University of Pennsylvania
Raquel E. Gur, University of Pennsylvania
Ruben C. Gur, University of Pennsylvania
Shi Gu, University of Pennsylvania
Tyler M. Moore, University of Pennsylvania