Publication


Featured research published by Chad Giusti.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Clique topology reveals intrinsic geometric structure in neural correlations

Chad Giusti; Eva Pastalkova; Carina Curto; Vladimir Itskov

Significance: Detecting structure in neural activity is critical for understanding the function of neural circuits. The coding properties of neurons are typically investigated by correlating their responses to external stimuli. It is not clear, however, if the structure of neural activity can be inferred intrinsically, without a priori knowledge of the relevant stimuli. We introduce a novel method, called clique topology, that detects intrinsic structure in neural activity that is invariant under nonlinear monotone transformations. Using pairwise correlations of neurons in the hippocampus, we demonstrate that our method is capable of detecting geometric structure from neural activity alone, without appealing to external stimuli or receptive fields.

Detecting meaningful structure in neural activity and connectivity data is challenging in the presence of hidden nonlinearities, where traditional eigenvalue-based methods may be misleading. We introduce a novel approach to matrix analysis, called clique topology, that extracts features of the data invariant under nonlinear monotone transformations. These features can be used to detect both random and geometric structure, and depend only on the relative ordering of matrix entries. We then analyzed the activity of pyramidal neurons in rat hippocampus, recorded while the animal was exploring a 2D environment, and confirmed that our method is able to detect geometric organization using only the intrinsic pattern of neural correlations. Remarkably, we found similar results during nonspatial behaviors such as wheel running and rapid eye movement (REM) sleep. This suggests that the geometric structure of correlations is shaped by the underlying hippocampal circuits and is not merely a consequence of position coding. We propose that clique topology is a powerful new tool for matrix analysis in biological settings, where the relationship of observed quantities to more meaningful variables is often nonlinear and unknown.
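The core idea is that only the relative ordering of correlation-matrix entries is used: edges are added in order of decreasing correlation, cliques are filled in, and the resulting persistence intervals distinguish geometric from random organization. Below is a minimal sketch of that order-complex construction, assuming the gudhi library and synthetic data; it is illustrative, not the authors' exact pipeline.

```python
# Sketch: clique topology of a symmetric "correlation" matrix.
# Only the relative ordering of entries is used, so each off-diagonal
# entry is replaced by its rank and a clique (flag) filtration is built.
# Assumes the gudhi package; synthetic data stands in for real recordings.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
n = 30
pts = rng.random((n, 2))                                  # hidden 2D structure
corr = np.exp(-np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))

iu = np.triu_indices(n, k=1)
order = np.argsort(-corr[iu])                             # strongest pairs first
ranks = np.empty_like(order)
ranks[order] = np.arange(order.size)

st = gudhi.SimplexTree()
for v in range(n):
    st.insert([v], filtration=0.0)
for (i, j), r in zip(zip(*iu), ranks):
    st.insert([int(i), int(j)], filtration=float(r + 1))  # edge enters at its rank
st.expansion(3)                                           # fill cliques up to 3-simplices
st.persistence()
for dim in (1, 2):
    print(f"H{dim} intervals:", st.persistence_intervals_in_dimension(dim)[:5])
```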


Journal of Computational Neuroscience | 2016

Two's company, three (or more) is a simplex

Chad Giusti; Robert Ghrist; Danielle S. Bassett

The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad – two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.
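As an illustration of the difference between a graph and a simplicial complex, the following sketch (assuming the gudhi library, with made-up co-activity patterns) records each observed set of co-active units as a simplex, so relations among three or more units are kept rather than reduced to pairwise edges.

```python
# Sketch: encoding co-activity patterns as a simplicial complex rather than
# a graph. Each observed set of simultaneously active units becomes a
# simplex, so higher-order (non-pairwise) relations are retained.
# Assumes the gudhi package; the activity patterns are made up.
import gudhi

patterns = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4, 5}, {0, 5}]

st = gudhi.SimplexTree()
for p in patterns:
    st.insert(sorted(p))          # inserting a simplex also adds all of its faces

print("number of simplices:", st.num_simplices())
print("dimension:", st.dimension())
# A graph would keep only the pairwise edges (e.g. {2,3}, {3,4}, ...) and could
# not distinguish the 3-simplex {2,3,4,5} from the six edges among its vertices.
```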


Nature Communications | 2017

Developmental increases in white matter network controllability support a growing diversity of brain dynamics

Evelyn Tang; Chad Giusti; Graham L. Baum; Shi Gu; Eli Pollock; Ari E. Kahn; David R. Roalf; Tyler M. Moore; Kosha Ruparel; Ruben C. Gur; Raquel E. Gur; Theodore D. Satterthwaite; Danielle S. Bassett

As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. Here we use a network representation of diffusion imaging data from 882 youth ages 8–22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in children versus older youth. We demonstrate that our results cannot be explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture.

Human brain development is characterized by increased control of neural activity, but how this happens is not well understood. Here, the authors show that white matter connectivity in 882 youth, aged 8–22, becomes increasingly specialized locally and optimized for network control.
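Network controllability here refers to a linear dynamical model x(t+1) = A x(t) + B u(t) on the structural connectome. The sketch below computes a finite-horizon version of one common summary, average controllability (the trace of the controllability Gramian when a single node is the input site); it uses numpy with a random symmetric matrix in place of real diffusion-imaging data and is not the authors' exact pipeline.

```python
# Sketch: average controllability of each node in a linear network model
# x(t+1) = A x(t) + B u(t), a common summary in network-control studies.
# A random symmetric matrix stands in for a structural connectivity matrix;
# this is illustrative, not the authors' exact pipeline.
import numpy as np

rng = np.random.default_rng(1)
W = rng.random((20, 20))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
A = W / (1.0 + np.max(np.abs(np.linalg.eigvals(W))))      # scale so spectral radius < 1

def average_controllability(A, node, horizon=100):
    """Trace of the finite-horizon controllability Gramian with input at `node`."""
    n = A.shape[0]
    b = np.zeros((n, 1))
    b[node] = 1.0
    gram = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        v = Ak @ b
        gram += v @ v.T
        Ak = Ak @ A
    return float(np.trace(gram))

ac = np.array([average_controllability(A, i) for i in range(A.shape[0])])
print("node with highest average controllability:", int(np.argmax(ac)))
```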


PLOS ONE | 2016

Choosing Wavelet Methods, Filters, and Lengths for Functional Brain Network Construction

Zitong Zhang; Qawi K. Telesford; Chad Giusti; Kelvin O. Lim; Danielle S. Bassett

Wavelet methods are widely used to decompose fMRI, EEG, or MEG signals into time series representing neurophysiological activity in fixed frequency bands. Using these time series, one can estimate frequency-band-specific functional connectivity between sensors or regions of interest, and thereby construct functional brain networks that can be examined from a graph-theoretic perspective. Despite their common use, however, practical guidelines for the choice of wavelet method, filter, and length have remained largely undelineated. Here, we explicitly explore the effects of wavelet method (MODWT vs. DWT), wavelet filter (Daubechies Extremal Phase, Daubechies Least Asymmetric, and Coiflet families), and wavelet length (2 to 24), each an essential parameter in wavelet-based methods, on the estimated values of graph metrics and on their sensitivity to alterations in psychiatric disease. We observe that the MODWT method produces less variable estimates than the DWT method. We also observe that the length of the wavelet filter chosen has a greater impact on the estimated values of graph metrics than the type of wavelet chosen. Furthermore, wavelet length impacts the sensitivity of the method to detect differences between health and disease and tunes classification accuracy. Collectively, our results suggest that the choice of wavelet method and length significantly alters the reliability and sensitivity of these methods in estimating values of metrics drawn from graph theory. They furthermore demonstrate the importance of reporting the choices utilized in neuroimaging studies and support the utility of exploring wavelet parameters to maximize classification accuracy in the development of biomarkers of psychiatric disease and neurological disorders.
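A rough illustration of the workflow, using PyWavelets: the stationary wavelet transform (pywt.swt) serves here as a stand-in for MODWT (the two are closely related but not identical), pywt.wavedec gives the decimated DWT, and band-specific connectivity is estimated by correlating coefficients at a fixed level. The filter, level, and synthetic signals are examples only.

```python
# Sketch: band-specific functional connectivity from wavelet coefficients.
# PyWavelets' stationary wavelet transform (pywt.swt) is used as a stand-in
# for MODWT (closely related but not identical), and pywt.wavedec gives the
# decimated DWT. Filter, level, and signals are examples only.
import numpy as np
import pywt

rng = np.random.default_rng(2)
n_regions, n_time = 5, 512
ts = rng.standard_normal((n_regions, n_time))

def level2_detail(x, wavelet="db4", undecimated=True):
    """Detail coefficients at level 2 (one fixed frequency band)."""
    if undecimated:
        coeffs = pywt.swt(x, wavelet, level=2)   # [(cA2, cD2), (cA1, cD1)]
        return coeffs[0][1]
    coeffs = pywt.wavedec(x, wavelet, level=2)   # [cA2, cD2, cD1]
    return coeffs[1]

band = np.array([level2_detail(ts[i]) for i in range(n_regions)])
fc = np.corrcoef(band)                            # frequency-band connectivity matrix
print(np.round(fc, 2))
```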


Journal of Complex Networks | 2016

Classification of weighted networks through mesoscale homological features

Ann E. Sizemore; Chad Giusti; Danielle S. Bassett

As complex networks find applications in a growing range of disciplines, the diversity of naturally occurring and model networks being studied is exploding. The adoption of a well-developed collection of network taxonomies is a natural method for both organizing this data and understanding deeper relationships between networks. Most existing metrics for network structure rely on classical graph-theoretic measures, extracting characteristics primarily related to individual vertices or paths between them, and thus classify networks from the perspective of local features. Here, we describe an alternative approach to studying structure in networks that relies on an algebraic-topological metric called persistent homology, which studies intrinsically mesoscale structures called cycles, constructed from cliques in the network. We present a classification of 14 commonly studied weighted network models into four groups or classes, and discuss the structural themes arising in each class. Finally, we compute the persistent homology of two real-world networks and one network constructed by a common dynamical systems model, and we compare the results with the three classes to obtain a better understanding of those networks.
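For a concrete sense of the pipeline, the sketch below converts edge weights into dissimilarities and computes persistent homology of the resulting weighted clique (flag) filtration with ripser.py; summaries of the diagrams can then serve as classification features. The library choice, the 1 minus normalized-weight transform, and the random matrix are assumptions rather than the paper's exact setup.

```python
# Sketch: mesoscale homological features of a weighted network. Edge weights
# are converted to dissimilarities and passed to ripser.py as a "distance"
# matrix, so the Vietoris-Rips filtration plays the role of a weighted clique
# filtration. Library, transform, and data are illustrative assumptions.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(3)
n = 40
Wts = rng.random((n, n))
Wts = (Wts + Wts.T) / 2.0
np.fill_diagonal(Wts, 0.0)

diss = 1.0 - Wts / Wts.max()          # strong edges enter the filtration first
np.fill_diagonal(diss, 0.0)

dgms = ripser(diss, distance_matrix=True, maxdim=2)["dgms"]
for dim, dgm in enumerate(dgms):
    finite = dgm[np.isfinite(dgm[:, 1])]
    total = float(np.sum(finite[:, 1] - finite[:, 0]))
    print(f"H{dim}: {len(dgm)} bars, total persistence {total:.2f}")
```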


Journal of Computational Neuroscience | 2018

Cliques and cavities in the human connectome

Ann E. Sizemore; Chad Giusti; Ari E. Kahn; Jean M. Vettel; Richard F. Betzel; Danielle S. Bassett

Encoding brain regions and their connections as a network of nodes and edges captures many of the possible paths along which information can be transmitted as humans process and perform complex behaviors. Because cognitive processes involve large, distributed networks of brain areas, principled examinations of multi-node routes within larger connection patterns can offer fundamental insights into the complexities of brain function. Here, we investigate both densely connected groups of nodes that could perform local computations and larger patterns of interactions that would allow for parallel processing. Finding such structures necessitates that we move from considering exclusively pairwise interactions to capturing higher-order relations, concepts naturally expressed in the language of algebraic topology. These tools can be used to study mesoscale network structures that arise from the arrangement of densely connected substructures called cliques in otherwise sparsely connected brain networks. We detect cliques (all-to-all connected sets of brain regions) in the average structural connectomes of 8 healthy adults scanned in triplicate and discover the presence of more large cliques than expected in null networks constructed via wiring minimization, providing an architecture through which the brain network can perform rapid, local processing. We then locate topological cavities of different dimensions, around which information may flow in either diverging or converging patterns. These cavities exist consistently across subjects, differ from those observed in null model networks, and, importantly, link regions of early and late evolutionary origin in long loops, underscoring their unique role in controlling brain function. These results offer a first demonstration that techniques from algebraic topology provide a novel perspective on structural connectomics, highlighting loop-like paths as crucial features in the human brain’s structural architecture.
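Clique enumeration itself only needs standard graph tools; the sketch below counts maximal cliques in a binarized network with networkx (an Erdos-Renyi graph standing in for a real connectome), while locating cavities would additionally require persistent homology of the clique complex, as in the sketches above.

```python
# Sketch: counting maximal cliques in a binarized connectome-like graph.
# networkx is assumed, and an Erdos-Renyi graph stands in for real data;
# locating cavities would additionally require persistent homology of the
# clique complex.
import networkx as nx
from collections import Counter

G = nx.erdos_renyi_graph(n=80, p=0.15, seed=4)
clique_sizes = Counter(len(c) for c in nx.find_cliques(G))   # maximal cliques only
for size in sorted(clique_sizes):
    print(f"{clique_sizes[size]} maximal cliques of {size} nodes")
```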


Neural Computation | 2014

A no-go theorem for one-layer feedforward networks

Chad Giusti; Vladimir Itskov

It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form , where is a polyhedron.
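A toy illustration of the coarsening described above: a combinatorial code is a set of activity patterns, and keeping only the maximal patterns discards the finer structure that the no-go result concerns. The code below is illustrative Python, not an example from the paper.

```python
# Toy example (not from the paper): a combinatorial code as a set of activity
# patterns, and its coarsening to maximal patterns. The result says the
# maximal patterns alone can always be realized by a one-layer feedforward
# network, so recurrence cannot be inferred from them.
code = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}),
        frozenset({2, 3}), frozenset({1, 2, 3}), frozenset({3, 4})}

maximal = {c for c in code if not any(c < d for d in code)}
print("maximal patterns:", sorted(sorted(c) for c in maximal))
```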


Journal of Topology | 2012

The mod-2 cohomology rings of symmetric groups

Chad Giusti; Paolo Salvatore; Dev Sinha

On the cohomology of BS•, the second product · is the cup product, which is zero for classes supported on disjoint components. The first product ⊙ is the relatively new transfer product first studied by Strickland and Turner [21] (see Definition 3.1). It is akin to the “induction product” in the representation theory of symmetric groups, which dates back to Young and has been in standard use [9, 22]. The coproduct ∆ on cohomology is dual to the standard Pontrjagin product on the homology of BS•. This Hopf ring structure was used by Strickland in [20] to calculate the Morava E-theory of symmetric groups. Though Hopf rings were introduced by Milgram to study the homology of the sphere spectrum [11] and thus of symmetric groups [4], the Hopf ring structure we study does not fit into the standard framework; in particular, it exists in cohomology rather than homology. See [21] for a lucid, complete discussion of the relationships between all of these structures. As in calculations such as that of Ravenel and Wilson [17], we find this Hopf ring presentation to be quite efficient, given by a simple list of generators and relations.


Physical Review E | 2016

Topological and geometric measurements of force-chain structure

Chad Giusti; Lia Papadopoulos; Eli T. Owens; Karen E. Daniels; Danielle S. Bassett

Developing quantitative methods for characterizing structural properties of force chains in densely packed granular media is an important step toward understanding or predicting large-scale physical properties of a packing. A promising framework in which to develop such methods is network science, which can be used to translate particle locations and force contacts into a graph in which particles are represented by nodes and forces between particles are represented by weighted edges. Recent work applying network-based community-detection techniques to extract force chains opens the door to developing statistics of force-chain structure, with the goal of identifying geometric and topological differences across packings, and providing a foundation on which to build predictions of bulk material properties from mesoscale network features. Here we discuss a trio of related but fundamentally distinct measurements of the mesoscale structure of force chains in two-dimensional (2D) packings, including a statistic derived using tools from algebraic topology, which together provide a tool set for the analysis of force chain architecture. We demonstrate the utility of this tool set by detecting variations in force-chain architecture with pressure. Collectively, these techniques can be generalized to 3D packings, and to the assessment of continuous deformations of packings under stress or strain.
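As a rough illustration of the network-based part of this program, the sketch below builds a contact network (a random geometric graph with random contact forces standing in for measured data) and extracts candidate force chains via modularity-based community detection with networkx; this is a generic community-detection pass, not the specific extraction method used in the paper.

```python
# Sketch: extracting candidate force chains from a contact network via
# modularity-based community detection. networkx is assumed; the random
# geometric graph with random "forces" stands in for measured particle
# contacts, and this is a generic community-detection pass rather than the
# specific method used in the paper.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(5)
G = nx.random_geometric_graph(n=60, radius=0.2, seed=5)   # particles and contacts
for u, v in G.edges:
    G[u][v]["force"] = random.random()                     # contact force as edge weight

communities = greedy_modularity_communities(G, weight="force")
chains = [c for c in communities if len(c) >= 4]           # keep the larger groups
print(f"{len(chains)} candidate force chains with sizes {[len(c) for c in chains]}")
```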


arXiv: Algebraic Topology | 2012

Fox-Neuwirth cell structures and the cohomology of symmetric groups

Chad Giusti; Dev Sinha

We use the Fox-Neuwirth cell structure for one-point compactifications of configuration spaces as the starting point for understanding our recent calculation of the mod-two cohomology of symmetric groups. We then use that calculation to give short proofs of classical results on this cohomology due to Nakaoka and to Madsen.

Collaboration


Dive into Chad Giusti's collaborations.

Top Co-Authors

Evelyn Tang (University of Pennsylvania)
Ari E. Kahn (University of Pennsylvania)
Vladimir Itskov (University of Nebraska–Lincoln)
Ann E. Sizemore (University of Pennsylvania)
David R. Roalf (University of Pennsylvania)
Graham L. Baum (University of Pennsylvania)
Kosha Ruparel (University of Pennsylvania)
Raquel E. Gur (University of Pennsylvania)
Ruben C. Gur (University of Pennsylvania)