Sacha J. van Albada
Allen Institute for Brain Science
Publications
Featured research published by Sacha J. van Albada.
Frontiers in Computational Neuroscience | 2013
Cliff C. Kerr; Sacha J. van Albada; Samuel A. Neymotin; George L. Chadderdon; P. A. Robinson; William W. Lytton
The basal ganglia play a crucial role in the execution of movements, as demonstrated by the severe motor deficits that accompany Parkinson's disease (PD). Since motor commands originate in the cortex, an important question is how the basal ganglia influence cortical information flow, and how this influence becomes pathological in PD. To explore this, we developed a composite neuronal network/neural field model. The network model consisted of 4950 spiking neurons, divided into 15 excitatory and inhibitory cell populations in the thalamus and cortex. The field model consisted of the cortex, thalamus, striatum, subthalamic nucleus, and globus pallidus. Both models have been separately validated in previous work. Three field models were used: one with basal ganglia parameters based on data from healthy individuals, one based on data from individuals with PD, and one purely thalamocortical model. Spikes generated by these field models were then used to drive the network model. Compared to the network driven by the healthy model, the PD-driven network had lower firing rates, a shift in spectral power toward lower frequencies, and higher probability of bursting; each of these findings is consistent with empirical data on PD. In the healthy model, we found strong Granger causality between cortical layers in the beta and low gamma frequency bands, but this causality was largely absent in the PD model. In particular, the reduction in Granger causality from the main “input” layer of the cortex (layer 4) to the main “output” layer (layer 5) was pronounced. This may account for symptoms of PD that seem to reflect deficits in information flow, such as bradykinesia. In general, these results demonstrate that the brain's large-scale oscillatory environment, represented here by the field model, strongly influences the information processing that occurs within its subnetworks. Hence, it may be preferable to drive spiking network models with physiologically realistic inputs rather than pure white noise.
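For orientation, the kind of analysis described above can be sketched with standard Python tooling. The snippet below is not the authors' pipeline; the layer-4 and layer-5 signals are placeholders, and it only shows how spectral power and a time-domain Granger-causality test could be computed from binned population rates.

```python
# Illustrative sketch (not the paper's code): spectral power and a time-domain
# Granger-causality test between two hypothetical population rate signals,
# e.g. layer-4 and layer-5 firing rates binned at 1 ms.
import numpy as np
from scipy.signal import welch
from statsmodels.tsa.stattools import grangercausalitytests

fs = 1000.0                                   # sampling rate of the binned rates (Hz)
rng = np.random.default_rng(0)
l4 = rng.normal(size=10_000)                  # placeholder for layer-4 activity
l5 = 0.5 * np.roll(l4, 5) + rng.normal(size=10_000)   # delayed, noisy copy

# Power spectrum: a shift of mass toward lower frequencies would show up here.
f, p_l5 = welch(l5, fs=fs, nperseg=1024)

# Granger test: columns are (target, source), so this asks whether L4 -> L5.
results = grangercausalitytests(np.column_stack([l5, l4]), maxlag=10)
```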
2013 IEEE Symposium on Biological Data Visualization (BioVis) | 2013
Christian Nowke; Maximilian Schmidt; Sacha J. van Albada; Jochen Martin Eppler; Rembrandt Bakker; Markus Diesmann; Bernd Hentschel; Torsten W. Kuhlen
The aim of computational neuroscience is to gain insight into the dynamics and functionality of the nervous system by means of modeling and simulation. Current research leverages the power of High Performance Computing facilities to enable multi-scale simulations capturing both low-level neural activity and large-scale interactions between brain regions. In this paper, we describe an interactive analysis tool that enables neuroscientists to explore data from such simulations. One of the driving challenges behind this work is the integration of macroscopic data at the level of brain regions with microscopic simulation results, such as the activity of individual neurons. While researchers validate their findings mainly by visualizing these data in a non-interactive fashion, state-of-the-art visualizations, tailored to the scientific question yet sufficiently general to accommodate different types of models, enable such analyses to be performed more efficiently. This work describes several visualization designs, conceived in close collaboration with domain experts, for the analysis of network models. We primarily focus on the exploration of neural activity data, inspecting connectivity of brain regions and populations, and visualizing activity flux across regions. We demonstrate the effectiveness of our approach in a case study conducted with domain experts.
Journal of Integrative Neuroscience | 2007
Sacha J. van Albada; Christopher J. Rennie; P. A. Robinson
Variable contributions of state and trait to the electroencephalographic (EEG) signal affect the stability over time of EEG measures, quite apart from other experimental uncertainties. The extent of intraindividual and interindividual variability is an important factor in determining the statistical, and hence possibly clinical significance of observed differences in the EEG. This study investigates the changes in classical quantitative EEG (qEEG) measures, as well as of parameters obtained by fitting frequency spectra to an existing continuum model of brain electrical activity. These parameters may have extra variability due to model selection and fitting. Besides estimating the levels of intraindividual and interindividual variability, we determined approximate time scales for change in qEEG measures and model parameters. This provides an estimate of the recording length needed to capture a given percentage of the total intraindividual variability. Also, if more precise time scales can be obtained in future, these may aid the characterization of physiological processes underlying various EEG measures. Heterogeneity of the subject group was constrained by testing only healthy males in a narrow age range (mean = 22.3 years, sd = 2.7). Eyes-closed EEGs of 32 subjects were recorded at weekly intervals over an approximately six-week period, of which 13 subjects were followed for a year. QEEG measures, computed from Cz spectra, were powers in five frequency bands, alpha peak frequency, and spectral entropy. Of these, theta, alpha, and beta band powers were most reproducible. Of the nine model parameters obtained by fitting model predictions to experiment, the most reproducible ones quantified the total power and the time delay between cortex and thalamus. About 95% of the maximum change in spectral parameters was reached within minutes of recording time, implying that repeat recordings are not necessary to capture the bulk of the variability in EEG spectra.
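As a rough illustration of the classical qEEG measures mentioned above (band powers, alpha peak frequency, spectral entropy), the following sketch computes them for a single channel from a Welch spectrum; the band edges, window length, and sampling rate are assumptions, not values taken from the study.

```python
# Illustrative sketch (not the study's code): classical qEEG measures from a
# single-channel (e.g. Cz) recording.
import numpy as np
from scipy.signal import welch

def qeeg_measures(eeg, fs=256.0):
    """Band powers, alpha peak frequency, and spectral entropy for one channel."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))        # 4-s windows (assumed)
    df = f[1] - f[0]
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}           # assumed band edges
    power = {name: pxx[(f >= lo) & (f < hi)].sum() * df
             for name, (lo, hi) in bands.items()}
    alpha = (f >= 8) & (f < 13)
    alpha_peak = f[alpha][np.argmax(pxx[alpha])]
    p = pxx / pxx.sum()                                     # spectrum as a distribution
    spectral_entropy = -np.sum(p * np.log2(p + 1e-24))
    return power, alpha_peak, spectral_entropy
```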
PLOS Computational Biology | 2015
Sacha J. van Albada; Moritz Helias; Markus Diesmann
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
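The flavor of the scaling involved can be conveyed with a short sketch. This is a hedged illustration of a variance-preserving weight scaling with a compensating DC drive, not a restatement of the paper's full derivation, and the parameter names are illustrative.

```python
# Sketch of a downscaling rule of the kind analyzed in the paper: reducing
# in-degrees K -> kappa*K while preserving the input variance suggests
# J -> J / sqrt(kappa), with an extra DC drive compensating the mean input.
import numpy as np

def downscale(J, K, nu, tau_m, kappa):
    """J: synaptic weight, K: in-degree, nu: presynaptic rate (1/s),
    tau_m: membrane time constant (s), kappa: scaling factor in (0, 1]."""
    K_new = kappa * K
    J_new = J / np.sqrt(kappa)                 # keeps K * J**2 (input variance)
    # Mean input ~ K*J*nu*tau_m drops by a factor sqrt(kappa); compensate with DC.
    dc_compensation = (1.0 - np.sqrt(kappa)) * K * J * nu * tau_m
    return K_new, J_new, dc_compensation
```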
Cerebral Cortex | 2016
Espen Hagen; David Dahmen; Maria L. Stavrinou; Henrik Lindén; Tom Tetzlaff; Sacha J. van Albada; Sonja Grün; Markus Diesmann; Gaute T. Einevoll
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
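The following is a deliberately crude caricature of the hybrid idea, not the hybridLFPy API: it assumes layer-resolved LFP kernels have already been computed from multicompartment simulations and simply convolves them with population spike rates, whereas the full scheme replays the individual spikes of the point-neuron network as synaptic input onto multicompartment neuron populations.

```python
# Conceptual sketch only (not hybridLFPy): sum per-population LFP contributions
# obtained by convolving population spike rates with assumed precomputed kernels.
import numpy as np

def lfp_from_rates(rates, kernels):
    """rates: dict pop -> spike rate, shape (n_timesteps,);
    kernels: dict pop -> kernel, shape (n_channels, n_lags),
    assumed to come from multicompartment simulations."""
    n_channels = next(iter(kernels.values())).shape[0]
    n_t = next(iter(rates.values())).shape[0]
    lfp = np.zeros((n_channels, n_t))
    for pop, rate in rates.items():
        for ch in range(n_channels):
            lfp[ch] += np.convolve(rate, kernels[pop][ch], mode="same")
    return lfp
```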
Network: Computation In Neural Systems | 2012
Sharon M. Crook; James A. Bednar; Sandra D Berger; Robert C. Cannon; Andrew P. Davison; Mikael Djurfeldt; Jochen Martin Eppler; Birgit Kriener; Steve B. Furber; Bruce P. Graham; Hans E. Plesser; Lars Schwabe; Leslie S. Smith; Volker Steuber; Sacha J. van Albada
As computational neuroscience matures, many simulation environments are available that are useful for neuronal network modeling. However, methods for successfully documenting models for publication and for exchanging models and model components among these projects are still under development. Here we briefly review existing software and applications for network model creation, documentation and exchange. Then we discuss a few of the larger issues facing the field of computational neuroscience regarding network modeling and suggest solutions to some of these problems, concentrating in particular on standardized network model terminology, notation, and descriptions and explicit documentation of model scaling. We hope this will enable and encourage computational neuroscientists to share their models more systematically in the future.
Clinical Neurophysiology | 2010
Cliff C. Kerr; Sacha J. van Albada; Christopher J. Rennie; P. A. Robinson
OBJECTIVE: This study examines developmental and aging trends in auditory evoked potentials (AEPs) by applying two analysis methods to a large database of healthy subjects. METHODS: AEPs and reaction times were recorded from 1498 healthy subjects aged 6-86 years using an auditory oddball paradigm. AEPs were analyzed using a recently published deconvolution method and conventional component scoring. Age trends in the resultant data were determined using smooth median-based fits. RESULTS: Component latencies generally decreased during development and increased during aging. Deconvolution showed the emergence of a new feature during development, corresponding to improved differentiation between standard and target tones. The latency of this feature provides similar information to the target component latencies, while its amplitude provides a marker of cognitive development. CONCLUSIONS: Age trends in component scores can be related to physiological changes in the brain. However, component scores show a high degree of redundancy, which limits their information content, and are often invalid when applied to young children. Deconvolution provides additional information on development not available through other methods. SIGNIFICANCE: This is the largest study of AEP age trends to date. It provides comprehensive statistics on conventional component scores and shows that deconvolution is a simple and informative alternative.
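As a small illustration of the smooth median-based fits mentioned above, the sketch below computes a running median of a component score over an age grid; the window width and grid step are arbitrary choices, not the study's.

```python
# Illustrative sketch (not the study's code): running-median fit of a component
# score (e.g. latency) as a function of age.
import numpy as np

def running_median_fit(age, score, window=10.0, step=1.0):
    """Median of `score` within +/- window/2 years of each grid age."""
    grid = np.arange(age.min(), age.max() + step, step)
    fit = []
    for a in grid:
        sel = score[np.abs(age - a) <= window / 2]
        fit.append(np.median(sel) if sel.size else np.nan)
    return grid, np.array(fit)
```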
PLOS Computational Biology | 2017
Jannis Schuecker; Maximilian Schmidt; Sacha J. van Albada; Markus Diesmann; Moritz Helias
The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
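To convey what a mean-field reduction of this kind involves, the sketch below finds a population-rate fixed point and checks its linear stability via the Jacobian built from the effective connectivity. The sigmoidal gain function is a placeholder; the paper works with the transfer function of leaky integrate-and-fire neurons.

```python
# Illustrative sketch (not the authors' implementation): stationary rates as a
# fixed point nu = phi(W @ nu + mu_ext), plus a linear stability check.
import numpy as np

def gain(x):
    """Placeholder gain function; the paper uses the LIF transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def fixed_point(W, mu_ext, n_iter=1000, dt=0.1):
    nu = np.zeros(W.shape[0])
    for _ in range(n_iter):                     # simple fixed-point iteration
        nu += dt * (-nu + gain(W @ nu + mu_ext))
    return nu

def is_linearly_stable(W, nu, mu_ext, eps=1e-6):
    x = W @ nu + mu_ext
    slope = (gain(x + eps) - gain(x - eps)) / (2 * eps)   # numerical derivative
    jac = slope[:, None] * W - np.eye(W.shape[0])         # Jacobian of the rate dynamics
    return np.all(np.linalg.eigvals(jac).real < 0)
```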
bioRxiv | 2018
Padraig Gleeson; Matteo Cantarelli; Boris Marin; Adrian Quintana; Matt Earnshaw; Eugenio Piasini; Justas Birgiolas; Robert C. Cannon; N. Alex Cayco-Gajic; Sharon M. Crook; Andrew P. Davison; Salvador Dura-Bernal; Andras Ecker; Michael L. Hines; Giovanni Idili; Stephen D. Larson; William W. Lytton; Amit Majumdar; Robert A. McDougal; Subhashini Sivagnanam; Sergio Solinas; Rokas Stanislovas; Sacha J. van Albada; Werner Van Geit; R. Angus Silver
Computational models are powerful tools for investigating brain function in health and disease. However, biologically detailed neuronal and circuit models are complex and implemented in a range of specialized languages, making them inaccessible and opaque to many neuroscientists. This has limited critical evaluation of models by the scientific community and impeded their refinement and widespread adoption. To address this, we have combined advances in standardizing models, open source software development and web technologies to develop Open Source Brain, a platform for visualizing, simulating, disseminating and collaboratively developing standardized models of neurons and circuits from a range of brain regions. Model structure and parameters can be visualized and their dynamical properties explored through browser-controlled simulations, without writing code. Open Source Brain makes neural models transparent and accessible and facilitates testing, critical evaluation and refinement, thereby helping to improve the accuracy and reproducibility of models, and their dissemination to the wider community.
Brain Structure & Function | 2017
Maximilian Schmidt; Rembrandt Bakker; Claus C. Hilgetag; Markus Diesmann; Sacha J. van Albada
Cortical network structure has been extensively characterized at the level of local circuits and in terms of long-range connectivity, but seldom in a manner that integrates both of these scales. Furthermore, while the connectivity of cortex is known to be related to its architecture, this knowledge has not been used to derive a comprehensive cortical connectivity map. In this study, we integrate data on cortical architecture and axonal tracing data into a consistent multi-scale framework of the structure of one hemisphere of macaque vision-related cortex. The connectivity model predicts the connection probability between any two neurons based on their types and locations within areas and layers. Our analysis reveals regularities of cortical structure. We confirm that cortical thickness decays with cell density. A gradual reduction in neuron density together with the relative constancy of the volume density of synapses across cortical areas yields denser connectivity in visual areas more remote from sensory inputs and of lower structural differentiation. Further, we find a systematic relation between laminar patterns on source and target sides of cortical projections, extending previous findings from combined anterograde and retrograde tracing experiments. Going beyond the classical schemes, we statistically assign synapses to target neurons based on anatomical reconstructions, which suggests that layer 4 neurons receive substantial feedback input. Our derived connectivity exhibits a community structure that corresponds more closely with known functional groupings than previous connectivity maps and identifies layer-specific directional differences in cortico-cortical pathways. The resulting network can form the basis for studies relating structure to neural dynamics in mammalian cortex at multiple scales.
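One ingredient of such a connectivity model can be illustrated with the conversion, assumed here rather than quoted from the paper, from synapse counts to pairwise connection probabilities under random placement of synapses between two populations.

```python
# Illustrative sketch: probability that a given source-target pair is connected
# when N_syn synapses are placed at random between N_pre and N_post neurons.
import numpy as np

def connection_probability(n_syn, n_pre, n_post):
    """P(at least one synapse) under independent random placement."""
    return 1.0 - (1.0 - 1.0 / (n_pre * n_post)) ** n_syn

# Example: 1e6 synapses between two populations of 10,000 neurons each.
p = connection_probability(1e6, 10_000, 10_000)   # ~0.00995
```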