Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where John M. Beggs is active.

Publication


Featured research published by John M. Beggs.


The Journal of Neuroscience | 2003

Neuronal Avalanches in Neocortical Circuits

John M. Beggs; Dietmar Plenz

Networks of living neurons exhibit diverse patterns of activity, including oscillations, synchrony, and waves. Recent work in physics has shown yet another mode of activity in systems composed of many nonlinear units interacting locally. For example, avalanches, earthquakes, and forest fires all propagate in systems organized into a critical state in which event sizes show no characteristic scale and are described by power laws. We hypothesized that a similar mode of activity with complex emergent properties could exist in networks of cortical neurons. We investigated this issue in mature organotypic cultures and acute slices of rat cortex by recording spontaneous local field potentials continuously using a 60 channel multielectrode array. Here, we show that propagation of spontaneous activity in cortical networks is described by equations that govern avalanches. As predicted by theory for a critical branching process, the propagation obeys a power law with an exponent of -3/2 for event sizes, with a branching parameter close to the critical value of 1. Simulations show that a branching parameter at this value optimizes information transmission in feedforward networks, while preventing runaway network excitation. Our findings suggest that “neuronal avalanches” may be a generic property of cortical networks, and represent a mode of activity that differs profoundly from oscillatory, synchronized, or wave-like network states. In the critical state, the network may satisfy the competing demands of information transmission and network stability.
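
For readers who want to see where the -3/2 exponent comes from, below is a minimal sketch of a critical branching process in plain NumPy. The sample counts, size cutoff, and binning are illustrative choices, not values from the paper. Each active unit spawns a Poisson-distributed number of descendants with mean equal to the branching parameter, and at the critical value of 1 the resulting avalanche sizes should roughly follow P(s) ∝ s^(-3/2).

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma=1.0, max_size=10_000):
    """Size of one avalanche in a branching process with branching parameter sigma."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        # each currently active unit activates a Poisson(sigma) number of descendants
        active = rng.poisson(sigma, size=active).sum()
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])

# bin on a log scale and compare with the critical-branching prediction P(s) ~ s^(-3/2)
bins = np.logspace(0, 3, 15)
density, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
for s, p in zip(centers, density):
    if p > 0:
        print(f"s ~ {s:7.1f}   P(s) ~ {p:.2e}   s^1.5 * P(s) ~ {s**1.5 * p:.3f}")
```

If the process is critical, the compensated value s^1.5 * P(s) should stay roughly constant over the scaling range, up to finite-size cutoff effects.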


The Journal of Neuroscience | 2004

Neuronal Avalanches Are Diverse and Precise Activity Patterns That Are Stable for Many Hours in Cortical Slice Cultures

John M. Beggs; Dietmar Plenz

A major goal of neuroscience is to elucidate mechanisms of cortical information processing and storage. Previous work from our laboratory (Beggs and Plenz, 2003) revealed that propagation of local field potentials (LFPs) in cortical circuits could be described by the same equations that govern avalanches. Whereas modeling studies suggested that these “neuronal avalanches” were optimal for information transmission, it was not clear what role they could play in information storage. Work from numerous other laboratories has shown that cortical structures can generate reproducible spatiotemporal patterns of activity that could be used as a substrate for memory. Here, we show that although neuronal avalanches lasted only a few milliseconds, their spatiotemporal patterns were also stable and significantly repeatable even many hours later. To investigate these issues, we cultured coronal slices of rat cortex for 4 weeks on 60-channel microelectrode arrays and recorded spontaneous extracellular LFPs continuously for 10 hr. Using correlation-based clustering and a global contrast function, we found that each cortical culture spontaneously produced 4736 ± 2769 (mean ± SD) neuronal avalanches per hour that clustered into 30 ± 14 statistically significant families of spatiotemporal patterns. In 10 hr of recording, over 98% of the mutual information shared by these avalanche patterns was retained. Additionally, jittering analysis revealed that the correlations between avalanches were temporally precise to within ±4 msec. The long-term stability, diversity, and temporal precision of these avalanches indicate that they fulfill many of the requirements expected of a substrate for memory and suggest that they play a central role in both information transmission and storage within cortical networks.
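
The clustering step can be illustrated with a small correlation-based sketch on hypothetical toy data (noisy repeats of a few template patterns). The similarity measure, cut height, and significance testing used in the paper are more involved than this; the sketch only shows the general idea of grouping flattened spatiotemporal patterns by correlation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# toy avalanches: noisy repeats of a few template spatiotemporal patterns,
# each flattened from a (channels x time bins) binary matrix
n_templates, repeats, n_channels, n_bins = 5, 40, 60, 5
templates = rng.random((n_templates, n_channels * n_bins)) < 0.1
avalanches = np.vstack([
    np.logical_xor(t, rng.random((repeats, t.size)) < 0.02)  # flip ~2% of entries
    for t in templates
]).astype(float)

# pairwise correlation between flattened patterns, turned into a distance
corr = np.corrcoef(avalanches)
dist = np.clip(1.0 - corr, 0.0, None)
np.fill_diagonal(dist, 0.0)

# average-linkage hierarchical clustering; the cut height 0.5 is an arbitrary choice
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=0.5, criterion="distance")
print("recovered families:", len(np.unique(labels)))   # expect about n_templates
```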


The Journal of Neuroscience | 2008

A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro.

Aonan Tang; David Jackson; Jon Hobbs; Wei Chen; Jodi L. Smith; Hema Patel; Anita Prieto; Dumitru Petrusca; Matthew I. Grivich; Alexander Sher; Pawel Hottowy; W. Dabrowski; Alan Litke; John M. Beggs

Multineuron firing patterns are often observed, yet are predicted to be rare by models that assume independent firing. To explain these correlated network states, two groups recently applied a second-order maximum entropy model that used only observed firing rates and pairwise interactions as parameters (Schneidman et al., 2006; Shlens et al., 2006). Interestingly, with these minimal assumptions they predicted 90–99% of network correlations. If generally applicable, this approach could vastly simplify analyses of complex networks. However, this initial work was done largely on retinal tissue, and its applicability to cortical circuits is mostly unknown. This work also did not address the temporal evolution of correlated states. To investigate these issues, we applied the model to multielectrode data containing spontaneous spikes or local field potentials from cortical slices and cultures. The model worked slightly less well in cortex than in retina, accounting for 88 ± 7% (mean ± SD) of network correlations. In addition, in 8 of 13 preparations, the observed sequences of correlated states were significantly longer than predicted by concatenating states from the model. This suggested that temporal dependencies are a common feature of cortical network activity, and should be considered in future models. We found a significant relationship between strong pairwise temporal correlations and observed sequence length, suggesting that pairwise temporal correlations may allow the model to be extended into the temporal domain. We conclude that although a second-order maximum entropy model successfully predicts correlated states in cortical networks, it should be extended to account for temporal correlations observed between states.
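
A second-order maximum entropy model assigns P(x) ∝ exp(Σ_i h_i x_i + Σ_{i<j} J_ij x_i x_j) to binary activity patterns, with h and J chosen so the model reproduces the observed firing rates and pairwise co-activations. The sketch below fits such a model by brute-force enumeration for a small toy population; all data and learning parameters are made up, and real fits to dozens of neurons require more careful estimation than this.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# toy binarized activity: samples x neurons (stand-in for binned spikes or LFP peaks)
n_neurons, n_samples = 8, 5000
data = (rng.random((n_samples, n_neurons)) < 0.2).astype(float)

# empirical moments the model must reproduce
mean_emp = data.mean(axis=0)
corr_emp = data.T @ data / n_samples

states = np.array(list(product([0.0, 1.0], repeat=n_neurons)))  # all 2^8 patterns

def model_moments(h, J):
    """First and second moments of P(x) proportional to exp(h.x + x.J.x)."""
    energy = states @ h + np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# gradient ascent on the log-likelihood: match means with h, pairwise moments with J
h = np.zeros(n_neurons)
J = np.zeros((n_neurons, n_neurons))
for _ in range(2000):
    mean_m, corr_m = model_moments(h, J)
    h += 0.1 * (mean_emp - mean_m)
    J += 0.1 * np.triu(corr_emp - corr_m, k=1)

mean_m, corr_m = model_moments(h, J)
print("max residual in means:   ", np.abs(mean_emp - mean_m).max())
print("max residual in pairwise:", np.abs(np.triu(corr_emp - corr_m, k=1)).max())
```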


Philosophical Transactions of the Royal Society A | 2008

The criticality hypothesis: how local cortical networks might optimize information processing.

John M. Beggs

Early theoretical and simulation work independently undertaken by Packard, Langton and Kauffman suggested that adaptability and computational power would be optimized in systems at the ‘edge of chaos’, at a critical point in a phase transition between total randomness and boring order. This provocative hypothesis has received much attention, but biological experiments supporting it have been relatively few. Here, we review recent experiments on networks of cortical neurons, showing that they appear to be operating near the critical point. Simulation studies capture the main features of these data and suggest that criticality may allow cortical networks to optimize information processing. These simulations lead to predictions that could be tested in the near future, possibly providing further experimental evidence for the criticality hypothesis.


Frontiers in Physiology | 2012

Being Critical of Criticality in the Brain

John M. Beggs; Nicholas Timme

Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we will explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counter points are presented in dialog form.


PLOS ONE | 2011

Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model.

Shinya Ito; Michael E. Hansen; Randy Heiland; Andrew Lumsdaine; Alan Litke; John M. Beggs

Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
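
The following is a minimal, self-contained illustration of transfer entropy evaluated at several delays for binary spike trains. It is not the software package released with the paper, and the toy trains, delays, and firing probabilities are assumptions chosen only for demonstration.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """TE in bits from binary spike train x to y at a given delay, with 1-bin y history."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    start, T = max(delay, 1), len(y)
    yt = y[start:]                        # y at time t
    ym = y[start - 1:T - 1]               # y one bin earlier
    xd = x[start - delay:T - delay]       # x delayed by `delay` bins
    n = len(yt)
    c3 = Counter(zip(yt, ym, xd))
    c_yt_ym = Counter(zip(yt, ym))
    c_ym_xd = Counter(zip(ym, xd))
    c_ym = Counter(ym)
    te = 0.0
    for (a, b, c), k in c3.items():
        # p(yt|ym,xd) / p(yt|ym) written in terms of counts
        te += (k / n) * np.log2(k * c_ym[b] / (c_yt_ym[(a, b)] * c_ym_xd[(b, c)]))
    return te

# scan over delays, in the spirit of the multi-delay extension described above
rng = np.random.default_rng(3)
x = (rng.random(50_000) < 0.05).astype(int)
y = np.roll(x, 3) | (rng.random(50_000) < 0.01)   # y echoes x three bins later
print([round(transfer_entropy(x, y, d), 4) for d in range(1, 6)])
```

With this toy coupling, TE should peak at a delay of 3 bins and stay near zero elsewhere, which is the kind of delay dependence the single-delay approach would miss.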


Journal of Computational Neuroscience | 2014

Synergy, redundancy, and multivariate information measures: an experimentalist's perspective

Nicholas Timme; Wesley T. Alford; Benjamin Flecker; John M. Beggs

Information theory has long been used to quantify interactions between two variables. With the rise of complex systems research, multivariate information measures have been increasingly used to investigate interactions between groups of three or more variables, often with an emphasis on so-called synergistic and redundant interactions. While bivariate information measures are commonly agreed upon, the multivariate information measures in use today have been developed by many different groups, and differ in subtle, yet significant ways. Here, we will review these multivariate information measures with special emphasis paid to their relationship to synergy and redundancy, as well as examine the differences between these measures by applying them to several simple model systems. In addition to these systems, we will illustrate the usefulness of the information measures by analyzing neural spiking data from a dissociated culture through early stages of its development. Our aim is that this work will aid other researchers as they seek the best multivariate information measure for their specific research goals and system. Finally, we have made software available online which allows the user to calculate all of the information measures discussed within this paper.
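
A standard toy example of a purely synergistic interaction is XOR: neither input alone carries any information about the output, but the two inputs together determine it completely. The short sketch below makes the point numerically; it is illustrative only, since the paper compares several distinct multivariate measures rather than just these mutual information terms.

```python
import numpy as np
from itertools import product

def mutual_information(p_xy):
    """Mutual information in bits from a joint probability table p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

# XOR: output S = X1 xor X2, with uniform independent inputs; table over (X1, X2, S)
p = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

# each input alone tells you nothing about S ...
print("I(X1;S)    =", mutual_information(p.sum(axis=1)))   # ~ 0 bits
print("I(X2;S)    =", mutual_information(p.sum(axis=0)))   # ~ 0 bits
# ... but together they determine it completely: a purely synergistic interaction
print("I(X1,X2;S) =", mutual_information(p.reshape(4, 2))) # = 1 bit
```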


Cerebral Cortex | 2015

Functional Clusters, Hubs, and Communities in the Cortical Microconnectome

Masanori Shimono; John M. Beggs

Although relationships between networks of different scales have been observed in macroscopic brain studies, relationships between structures of different scales in networks of neurons are unknown. To address this, we recorded from up to 500 neurons simultaneously from slice cultures of rodent somatosensory cortex. We then measured directed effective networks with transfer entropy, previously validated in simulated cortical networks. These effective networks enabled us to evaluate distinctive nonrandom structures of connectivity at 2 different scales. We have 4 main findings. First, at the scale of 3–6 neurons (clusters), we found that high numbers of connections occurred significantly more often than expected by chance. Second, the distribution of the number of connections per neuron (degree distribution) had a long tail, indicating that the network contained distinctively high-degree neurons, or hubs. Third, at the scale of tens to hundreds of neurons, we typically found 2–3 significantly large communities. Finally, we demonstrated that communities were relatively more robust than clusters against shuffling of connections. We conclude the microconnectome of the cortex has specific organization at different scales, as revealed by differences in robustness. We suggest that this information will help us to understand how the microconnectome is robust against damage.
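
The graph-level analyses can be sketched with standard network tools. The snippet below builds a random weighted directed graph as a stand-in for a transfer-entropy-derived effective network (the real recorded networks, and the paper's analysis of 3-6 neuron clusters, are not reproduced here), then inspects the degree distribution and community structure on the undirected projection.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(4)

# stand-in for a TE-derived effective network: a sparse random weighted directed graph
n = 200
W = (rng.random((n, n)) < 0.03) * rng.random((n, n))
np.fill_diagonal(W, 0.0)
G = nx.from_numpy_array(W, create_using=nx.DiGraph)

# degree distribution: a heavy tail would indicate putative hub neurons
degrees = np.array([d for _, d in G.degree()])
print("mean degree:", degrees.mean(), "  max degree:", degrees.max())

# community structure, computed on the undirected weighted projection
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
print("communities found:", len(communities),
      "  sizes of the largest three:", sorted(map(len, communities), reverse=True)[:3])
```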


The Journal of Neuroscience | 2016

Rich-Club Organization in Effective Connectivity among Cortical Neurons

Sunny Nigam; Masanori Shimono; Shinya Ito; Fang-Chin Yeh; Nicholas Timme; Maxym Myroshnychenko; Christopher C. Lapish; Zachary Tosi; Pawel Hottowy; Wesley C. Smith; Sotiris C. Masmanidis; Alan M. Litke; Olaf Sporns; John M. Beggs

The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory.

SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.
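
The flavor of the "small, information-rich subset" result can be sketched with a made-up lognormal weight matrix standing in for pairwise transfer entropies. The numbers it prints are not the paper's, and the paper's rich-club statistic additionally compares connection strength among high-strength neurons against degree-preserving null models, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)

# stand-in for a pairwise TE matrix: sparse directed graph with lognormal weights,
# echoing the roughly lognormal transfer strengths reported above
n = 500
mask = rng.random((n, n)) < 0.02
W = np.where(mask, rng.lognormal(mean=0.0, sigma=1.5, size=(n, n)), 0.0)
np.fill_diagonal(W, 0.0)

# rank neurons by total routed information (incoming + outgoing strength)
strength = W.sum(axis=0) + W.sum(axis=1)
top = np.argsort(strength)[::-1][: n // 5]          # top 20% by strength

# weight on edges that touch at least one top-20% neuron (inclusion-exclusion)
touching = W[top, :].sum() + W[:, top].sum() - W[np.ix_(top, top)].sum()
print(f"weight routed through top 20% of neurons: {touching / W.sum():.2f}")

# a crude rich-club flavor: connection density among the top neurons vs. overall
density_top = (W[np.ix_(top, top)] > 0).mean()
density_all = (W > 0).mean()
print(f"density among top neurons: {density_top:.3f}   vs overall: {density_all:.3f}")
```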


Neurocomputing | 2006

Neuronal avalanches and criticality: A dynamical model for homeostasis

David A. Hsu; John M. Beggs

The dynamics of microelectrode local field potentials from cortical slice cultures shows critical behavior. A desirable feature of criticality is that information transmission is optimal in this state. We explore a biologically plausible neural net model that can dynamically converge on criticality and that can return to criticality if perturbed away from it. Our model assumes the presence of a preferred target firing rate, with dynamical adjustments of internodal connection strengths to approach this firing rate. We suggest that mechanisms for maintaining firing rate homeostasis may also maintain a neural system at criticality.
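
Below is a minimal sketch of the firing-rate-homeostasis idea. It is not the model specified in the paper; it is only an illustration in which each neuron scales its incoming connection strengths up or down depending on whether its running firing-rate estimate sits below or above a made-up target, after which the mean branching parameter drifts toward a value near 1.

```python
import numpy as np

rng = np.random.default_rng(6)

# toy homeostatic network: W[i, j] is the probability that a spike in i drives j
n, target_rate, eta = 100, 0.05, 0.01
W = rng.random((n, n)) * 0.02
np.fill_diagonal(W, 0.0)
rate = np.zeros(n)
state = (rng.random(n) < 0.05).astype(float)

for step in range(20_000):
    drive = W.T @ state                   # input probability from active neighbors
    noise = 0.001                         # small spontaneous firing probability
    state = (rng.random(n) < np.clip(drive + noise, 0, 1)).astype(float)
    rate = 0.999 * rate + 0.001 * state   # running estimate of each firing rate
    # homeostasis: scale each neuron's incoming weights toward the target rate
    W *= 1.0 + eta * (target_rate - rate)[None, :]
    W = np.clip(W, 0.0, 1.0)

branching = W.sum(axis=1).mean()          # mean expected descendants per spike
print(f"mean firing rate: {rate.mean():.3f}   mean branching parameter: {branching:.2f}")
```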

Collaboration


Dive into John M. Beggs's collaborations.

Top Co-Authors

Alan Litke
University of California

Masanori Shimono
Indiana University Bloomington

Nicholas Timme
Indiana University Bloomington

Pawel Hottowy
AGH University of Science and Technology

Dietmar Plenz
National Institutes of Health

Olaf Sporns
Indiana University Bloomington

Sunny Nigam
Indiana University Bloomington