Publication


Featured research published by Michael Buice.


PLOS Computational Biology | 2013

The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics

Ramakrishnan Iyer; Vilas Menon; Michael Buice; Christof Koch; Stefan Mihalas

How different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even if synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations.
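The claim that sparse-but-strong weight distributions produce higher responses for small inputs can be illustrated with a toy threshold model. This is a hypothetical numpy sketch, not the paper's master-equation method; the weights, threshold, and input statistics are invented for illustration only.

```python
import numpy as np
from itertools import combinations

# Toy illustration: a threshold unit receives input through 10 synapses
# whose weights follow either a dense-weak or a sparse-strong
# distribution. Both distributions have the same total weight (1.0).
threshold = 1.0
dense_weak = np.full(10, 0.1)                 # every synapse weak
sparse_strong = np.array([1.0] + [0.0] * 9)   # one strong synapse

def response_prob(weights, k):
    """Probability that k randomly chosen active synapses drive the
    summed input over threshold (exhaustive over all k-subsets)."""
    subsets = list(combinations(range(len(weights)), k))
    hits = sum(weights[list(s)].sum() >= threshold for s in subsets)
    return hits / len(subsets)

# For a small input (one active synapse), only the sparse-strong
# distribution can fire at all, despite the equal total weight.
print(response_prob(dense_weak, 1))      # 0.0
print(response_prob(sparse_strong, 1))   # 0.1
```

With larger inputs (all ten synapses active) the two distributions respond identically, so the advantage of sparse strong synapses is specifically at the small-input end of the operating range.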


Proceedings of the National Academy of Sciences of the United States of America | 2016

Inferring cortical function in the mouse visual system through large-scale systems neuroscience

Michael Hawrylycz; Costas A. Anastassiou; Anton Arkhipov; Jim Berg; Michael Buice; Nicholas Cain; Nathan W. Gouwens; Sergey L. Gratiy; Ramakrishnan Iyer; Jung Hoon Lee; Stefan Mihalas; Catalin Mitelut; Shawn Olsen; R. Clay Reid; Corinne Teeter; Saskia de Vries; Jack Waters; Hongkui Zeng; Christof Koch; MindScope

The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.


Cell | 2015

A Biological Imitation Game.

Christof Koch; Michael Buice

The digital reconstruction of a slice of rat somatosensory cortex from the Blue Brain Project provides the most complete simulation of a piece of excitable brain matter to date. To place these efforts in context and highlight their strengths and limitations, we introduce a Biological Imitation Game, based on Alan Turing's Imitation Game, that operationalizes the difference between real and simulated brains.


bioRxiv | 2018

A large-scale, standardized physiological survey reveals higher order coding throughout the mouse visual cortex

Saskia de Vries; Jerome Lecoq; Michael Buice; Peter Groblewski; Gabriel Koch Ocker; Michael Oliver; David Feng; Nicholas Cain; Peter Ledochowitsch; Daniel Millman; Kate Roll; Marina Garrett; Tom Keenan; Leonard Kuan; Stefan Mihalas; Shawn Olsen; Carol L. Thompson; Wayne Wakeman; Jack Waters; Derric Williams; Chris Barber; Nathan Berbesque; Brandon Blanchard; Nicholas Bowles; Shiella Caldejon; Linzy Casal; Andrew Cho; Sissy Cross; Chinh Dang; Tim Dolbeare

To understand how the brain processes sensory information to guide behavior, we must know how stimulus representations are transformed throughout the visual cortex. Here we report an open, large-scale physiological survey of neural activity in the awake mouse visual cortex: the Allen Brain Observatory Visual Coding dataset. This publicly available dataset includes cortical activity from nearly 60,000 neurons collected from 6 visual areas, 4 layers, and 12 transgenic mouse lines from 221 adult mice, in response to a systematic set of visual stimuli. Using this dataset, we reveal functional differences across these dimensions and show that visual cortical responses are sparse but correlated. Surprisingly, responses to different stimuli are largely independent, e.g. whether a neuron responds to natural scenes provides no information about whether it responds to natural movies or to gratings. We show that these phenomena cannot be explained by standard local filter-based models, but are consistent with multi-layer hierarchical computation, as found in deeper layers of standard convolutional neural networks.


eLife | 2018

Mouse color and wavelength-specific luminance contrast sensitivity are non-uniform across visual space

Daniel J. Denman; Jennifer Luviano; Douglas R. Ollerenshaw; Sissy Cross; Derric Williams; Michael Buice; Shawn Olsen; R. Clay Reid

Mammalian visual behaviors, as well as responses in the neural systems underlying these behaviors, are driven by luminance and color contrast. With constantly improving tools for measuring activity in cell-type-specific populations in the mouse during visual behavior, it is important to define the extent of luminance and color information that is behaviorally accessible to the mouse. A non-uniform distribution of cone opsins in the mouse retina potentially complicates both luminance and color sensitivity; opposing gradients of short (UV-shifted) and middle (blue/green) cone opsins suggest that color discrimination and wavelength-specific luminance contrast sensitivity may differ with retinotopic location. Here we ask how well mice can discriminate color and wavelength-specific luminance changes across visuotopic space. We found that mice were able to discriminate color and were able to do so more broadly across visuotopic space than expected from the cone-opsin distribution. We also found wavelength-band-specific differences in luminance sensitivity.


bioRxiv | 2018

Visual physiology of the Layer 4 cortical circuit in silico

Anton Arkhipov; Nathan W. Gouwens; Yazan N. Billeh; Sergey L. Gratiy; Ramakrishnan Iyer; Ziqiang Wei; Zihao Xu; Jim Berg; Michael Buice; Nicholas Cain; Nuno Maçarico da Costa; Saskia de Vries; Daniel J. Denman; Severine Durand; David Feng; Tim Jarsky; Jerome Lecoq; Brian R. Lee; Lu Li; Stefan Mihalas; Gabriel Koch Ocker; Shawn Olsen; R. Clay Reid; Gilberto Soler-Llavina; Staci A. Sorensen; Quanxin Wang; Jack Waters; Massimo Scanziani; Christof Koch

Despite advances in experimental techniques and accumulation of large datasets concerning the composition and properties of the cortex, quantitative modeling of cortical circuits under in-vivo-like conditions remains challenging. Here we report and publicly release a biophysically detailed circuit model of layer 4 in the mouse primary visual cortex, receiving thalamo-cortical visual inputs. The 45,000-neuron model was subjected to a battery of visual stimuli, and results were compared to published work and new in vivo experiments. Simulations reproduced a variety of observations, including effects of optogenetic perturbations. Critical to the agreement between responses in silico and in vivo were the rules of functional synaptic connectivity between neurons. Interestingly, after extreme simplification the model still performed satisfactorily on many measurements, although quantitative agreement with experiments suffered. These results emphasize the importance of functional rules of cortical wiring and enable a next generation of data-driven models of in vivo neural activity and computations.

Author summary: How can we capture the incredible complexity of brain circuits in quantitative models, and what can such models teach us about mechanisms underlying brain activity? To answer these questions, we set out to build extensive, bio-realistic models of brain circuitry employing systematic datasets on brain structure and function. Here we report the first modeling results of this project, focusing on layer 4 of the primary visual cortex (V1) of the mouse. Our simulations reproduced a variety of experimental observations across a large battery of visual stimuli. The results elucidated circuit mechanisms determining patterns of neuronal activity in layer 4 – in particular, the roles of feedforward thalamic inputs and specific patterns of intracortical connectivity in producing tuning of neuronal responses to the orientation of motion. Simplification of neuronal models led to specific deficiencies in reproducing experimental data, giving insights into how biological details contribute to various aspects of brain activity. To enable future development of more sophisticated models, we make the software code, the model, and simulation results publicly available.


bioRxiv | 2018

Dimensionality in recurrent spiking networks: global trends in activity and local origins in connectivity

Stefano Recanatesi; Gabriel Koch Ocker; Michael Buice; Eric Shea-Brown

The dimensionality of a network’s collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.

Author summary: New recording technologies are producing an amazing explosion of data on neural activity. These data reveal the simultaneous activity of hundreds or even thousands of neurons. In principle, the activity of these neurons could explore a vast space of possible patterns. This is what is meant by high-dimensional activity: the number of degrees of freedom (or “modes”) of multineuron activity is large, perhaps as large as the number of neurons themselves. In practice, estimates of dimensionality differ strongly from case to case, and do so in interesting ways across experiments, species, and brain areas. The outcome is important for much more than just accurately describing neural activity: findings of low dimension have been proposed to allow data compression, denoising, and easily readable neural codes, while findings of high dimension have been proposed as signatures of powerful and general computations. So what is it about a neural circuit that leads to one case or the other? Here, we derive a set of principles that inform how the connectivity of a spiking neural network determines the dimensionality of the activity that it produces. These show that, in some cases, highly localized features of connectivity have strong control over a network’s global dimensionality—an interesting finding in the context of, e.g., learning rules that occur locally. We also show how dimension can be much different than first meets the eye with typical “pairwise” measurements, and how stimuli and intrinsic connectivity interact in shaping the overall dimension of a network’s response.
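A standard way to quantify the dimensionality of population activity in this literature is the participation ratio of the covariance spectrum. The sketch below is illustrative and not necessarily the exact estimator used in the paper.

```python
import numpy as np

# Participation ratio of a covariance matrix's eigenvalue spectrum:
#     dim(C) = (sum_i lambda_i)^2 / (sum_i lambda_i^2)
# It equals N when variance is spread evenly over N modes, and 1 when
# a single mode carries all the variance.
def participation_ratio(cov):
    lam = np.linalg.eigvalsh(cov)      # eigenvalues of symmetric cov
    return lam.sum() ** 2 / (lam ** 2).sum()

# Isotropic activity across 4 modes -> dimensionality 4.
print(participation_ratio(np.eye(4)))                   # 4.0
# All variance concentrated in one mode -> dimensionality 1.
print(participation_ratio(np.diag([4.0, 0, 0, 0])))     # 1.0
```

This makes concrete the point in the summary that dimensionality summarizes how many modes the network's activity independently explores, beyond what pairwise correlations alone reveal.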


bioRxiv | 2017

Effective synaptic interactions in subsampled nonlinear networks with strong coupling

Braden A. W. Brinkman; Fred Rieke; Eric Shea-Brown; Michael Buice

A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/√N, where N is the network size—precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions.
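In a linearized rate network, summing over all directed paths through hidden units becomes a geometric series, giving effective interactions J_eff = J_RR + J_RH (I − J_HH)⁻¹ J_HR for recorded (R) and hidden (H) blocks. The sketch below is a hypothetical linear-network illustration of that path-sum structure, with invented weights; the paper's full result is nonlinear and frequency-dependent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Partition an invented weight matrix into recorded (R) and hidden (H)
# blocks. Small weights keep the spectral radius of J_HH below 1 so
# the path series converges.
n_rec, n_hid, scale = 3, 5, 0.1
J_RR = scale * rng.standard_normal((n_rec, n_rec))
J_RH = scale * rng.standard_normal((n_rec, n_hid))
J_HR = scale * rng.standard_normal((n_hid, n_rec))
J_HH = scale * rng.standard_normal((n_hid, n_hid))

# Effective interactions among recorded units: the true weights plus the
# sum over all directed paths through hidden units,
#   J_eff = J_RR + J_RH (I + J_HH + J_HH^2 + ...) J_HR
#         = J_RR + J_RH (I - J_HH)^{-1} J_HR
J_eff = J_RR + J_RH @ np.linalg.inv(np.eye(n_hid) - J_HH) @ J_HR

# Check against the explicit path expansion, truncated at long paths.
series = np.eye(n_hid)
term = np.eye(n_hid)
for _ in range(60):
    term = term @ J_HH
    series += term
J_eff_series = J_RR + J_RH @ series @ J_HR
print(np.allclose(J_eff, J_eff_series))  # True
```

The closed form and the truncated path sum agree, which is the sense in which each correction term corresponds to a directed path of a given length through the unobserved population.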


BMC Neuroscience | 2013

The influence of network structure on neuronal dynamics.

Patrick Campbell; Duane Q. Nykamp; Michael Buice

Understanding the influence of network structure on neural dynamics is a fundamental step toward deciphering brain function, yet presents many challenges. We show how networks may be described in terms of the occurrences of certain patterns of edges, and how the frequency of these motifs (see Figure 1) impacts global dynamics. Through analysis and simulation of neuronal networks, we have found that two-edge directed paths (two-chains) have the most dramatic effect on dynamics. Our analytic results are based on equations for mean population activity and correlations that we derive using path integrals and moment hierarchy expansions. These equations reveal the primary ways in which the network motifs influence dynamics. For example, the equations indicate that the propensity of a network to globally synchronize increases with the prevalence of two-chains, and we verify this result with network simulations. Finally, we present ongoing work investigating when these second-order equations break down, and how they might be corrected by higher-order approximations to the network structure, such as the prevalence of three-edge chains beyond that predicted by the two-chains.

Figure 1: The four second-order edge motifs of reciprocal, convergent, divergent, and causal connections. The frequency of these motifs determines the second-order statistics, or correlations, among the network connections.
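The four second-order motif counts described above can be read directly off a binary adjacency matrix. The sketch below is a hypothetical illustration of that counting (not the paper's path-integral machinery), using the convention A[i, j] = 1 for an edge from neuron j to neuron i.

```python
import numpy as np
from math import comb

def second_order_motifs(A):
    """Count the four second-order edge motifs in a binary directed
    graph with A[i, j] = 1 meaning an edge from j to i."""
    A = np.asarray(A)
    A2 = A @ A                        # A2[i, j] = # of paths j -> k -> i
    indeg = A.sum(axis=1)             # edges converging onto each node
    outdeg = A.sum(axis=0)            # edges diverging from each node
    return {
        "reciprocal": int(np.trace(A2)) // 2,               # i <-> j pairs
        "convergent": sum(comb(int(d), 2) for d in indeg),  # shared target
        "divergent": sum(comb(int(d), 2) for d in outdeg),  # shared source
        "chain": int(A2.sum() - np.trace(A2)),              # two-chains
    }

# Edges: 0 -> 1, 1 -> 2, 0 -> 2, 2 -> 0
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [1, 1, 0]])
print(second_order_motifs(A))
# {'reciprocal': 1, 'convergent': 1, 'divergent': 1, 'chain': 3}
```

The two-chain count (off-diagonal entries of A²) is exactly the quantity the abstract identifies as having the strongest effect on the network's propensity to synchronize.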

Collaboration

Top co-authors of Michael Buice, all at the Allen Institute for Brain Science:

Christof Koch
Shawn Olsen
Stefan Mihalas
Gabriel Koch Ocker
Jack Waters
Nicholas Cain
R. Clay Reid
Ramakrishnan Iyer
Saskia de Vries