Jugoslava Acimovic
Tampere University of Technology
Publications
Featured research published by Jugoslava Acimovic.
EURASIP Journal on Bioinformatics and Systems Biology | 2011
Jugoslava Acimovic; Tuomo Mäki-Marttunen; Riikka Havela; Heidi Teppola; Marja-Leena Linne
We simulate the growth of neuronal networks using two recently published tools, NETMORPH and CX3D. The goals of the work are (1) to examine and compare the simulation tools, (2) to construct a model of growth of neocortical cultures, and (3) to characterize the changes in network connectivity during growth using standard graph-theoretic methods. Parameters for the neocortical culture are chosen after consulting both the experimental and the computational work presented in the literature. The first three weeks in culture are known to be a period in which extensive dendritic and axonal arbors develop and synaptic connections are established between the neurons. We simulate the growth of networks from day 1 to day 21 and show that, with properly selected parameters, the simulators can reproduce the experimentally obtained connectivity. The selected graph-theoretic methods capture the structural changes during growth.
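A minimal sketch of this kind of graph-theoretic characterization, assuming Python with numpy and networkx (the paper's own tooling is not specified here); the adjacency matrices are synthetic stand-ins for connectivity exported from NETMORPH or CX3D at different days in vitro.

```python
# Sketch only: quantify connectivity changes during simulated growth with
# standard graph measures. The "growth" here is a hypothetical proxy in which
# connection probability increases with days in vitro; real input would be
# adjacency matrices exported from NETMORPH or CX3D.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_neurons = 100

def synthetic_connectivity(day, n=n_neurons):
    p = min(0.002 * day, 0.05)                      # illustrative growth curve
    A = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(A, 0)                          # no self-connections
    return A

for day in (1, 7, 14, 21):
    A = synthetic_connectivity(day)
    G = nx.from_numpy_array(A, create_using=nx.DiGraph)
    density = nx.density(G)
    clustering = nx.average_clustering(G.to_undirected())
    print(f"day {day:2d}: density = {density:.4f}, clustering = {clustering:.4f}")
```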
PLOS ONE | 2013
Tuomo Mäki-Marttunen; Jugoslava Acimovic; Keijo Ruohonen; Marja-Leena Linne
The question of how the structure of a neuronal network affects its functionality has gained a lot of attention in neuroscience. However, the vast majority of studies on structure-dynamics relationships consider only a few types of network structures and assess limited numbers of structural measures. In this in silico study, we employ a wide diversity of network topologies and search among many possibilities for the aspects of structure that have the greatest effect on network excitability. The network activity is simulated using two point-neuron models, where the neurons are activated by noisy fluctuation of the membrane potential and their connections are described by chemical synapse models, and statistics on the number and quality of the emergent network bursts are collected for each network type. We apply a prediction framework to the obtained data in order to identify the most relevant aspects of network structure. In this framework, predictors that use different sets of graph-theoretic measures are trained to estimate activity properties, such as burst count or burst length, of the networks, and the performances of these predictors are compared with each other. We show that the best prediction of activity properties for networks with a sharp in-degree distribution is obtained when the prediction is based on the clustering coefficient. By contrast, for networks with a broad in-degree distribution, the maximum eigenvalue of the connectivity graph gives the most accurate prediction. The results shown for small networks hold, with few exceptions, when different neuron models, different choices of neuron population, and different average degrees are applied, and we confirm our conclusions using larger networks as well. Our findings reveal the relevance of different aspects of network structure from the viewpoint of network excitability, and our integrative method could serve as a general framework for structure-dynamics studies in biosciences.
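As an illustration of the prediction framework, here is a minimal sketch, assuming Python with numpy and networkx: two candidate structural predictors, the average clustering coefficient and the leading eigenvalue of the connectivity matrix, are computed for an ensemble of random directed networks and fed into an ordinary-least-squares predictor. The target values are placeholders standing in for activity statistics (e.g., burst counts) that the study obtains from simulation.

```python
# Sketch only: structural features -> linear prediction of an activity statistic.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def random_network(n, p):
    A = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def structural_features(A):
    G = nx.from_numpy_array(A, create_using=nx.DiGraph)
    clustering = nx.average_clustering(G.to_undirected())
    leading_eig = float(np.max(np.real(np.linalg.eigvals(A))))  # Perron eigenvalue of a non-negative matrix
    return clustering, leading_eig

networks = [random_network(50, p) for p in np.linspace(0.02, 0.2, 20)]
X = np.array([structural_features(A) for A in networks])
y = rng.random(len(networks))                  # placeholder targets, e.g. burst counts from simulation

X_aug = np.column_stack([X, np.ones(len(X))])  # add a bias column
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
print("fitted weights (clustering, leading eigenvalue, bias):", w)
```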
Frontiers in Computational Neuroscience | 2011
Tuomo Mäki-Marttunen; Jugoslava Acimovic; Matti Nykter; Juha Kesseli; Keijo Ruohonen; Olli Yli-Harja; Marja-Leena Linne
Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information-theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and a similar analysis is performed on them. To describe the dynamics, network spike trains are simulated using the different network structures and their bursting behaviors are analyzed. For the simulation of network activity, the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of a network's structure is greatest in the most locally connected networks, smallest in random networks, and intermediate in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
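The normalized compression distance used above has a simple closed form, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length of a string. A minimal sketch, assuming zlib as the compressor and a byte encoding of binarized spike trains (neither choice is taken from the paper):

```python
# Sketch only: normalized compression distance between two byte strings.
import zlib

def c(data: bytes) -> int:
    """Compressed length of a byte string (zlib used as a stand-in compressor)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Example: two binarized spike trains encoded as bytes.
train_a = bytes([1, 0, 1, 1, 0] * 200)
train_b = bytes([0, 0, 1, 0, 1] * 200)
print("NCD(a, b) =", round(ncd(train_a, train_b), 3))
print("NCD(a, a) =", round(ncd(train_a, train_a), 3))   # close to 0 for identical inputs
```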
Frontiers in Neuroanatomy | 2015
Jugoslava Acimovic; Tuomo Mäki-Marttunen; Marja-Leena Linne
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites and, from this description, derived the expected number of synapses, the node degree, and the effective radius, i.e., the maximal distance at which two neurons are expected to form at least one synapse. We related these quantities to network connectivity described using standard measures from graph theory, such as motif counts, the clustering coefficient, the minimal path length, and the small-world coefficient. These measures are used in neuroscience to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius; if the effective radius is small compared to the overall neuron size, the obtained networks resemble uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius; if this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure also makes it easier to interpret the role of each modeled parameter. The model is flexible, and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that may affect the network's capacity to exhibit different dynamical regimes.
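To make the role of the effective radius concrete, here is a minimal sketch under strong simplifying assumptions (uniform 2D neuron placement, a hard connection cutoff at the effective radius R, Python with numpy and networkx; none of this is the paper's actual model): for a planar density rho, the expected node degree is roughly rho * pi * R^2, and the resulting graph can be characterized with the same measures listed above.

```python
# Sketch only: effective-radius connectivity and standard graph measures.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n, side, R = 200, 1.0, 0.1          # neuron count, culture side length, effective radius (illustrative)

pos = rng.random((n, 2)) * side
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
A = (dist < R).astype(int)
np.fill_diagonal(A, 0)

G = nx.from_numpy_array(A)
giant = G.subgraph(max(nx.connected_components(G), key=len))
expected_degree = n / side**2 * np.pi * R**2
print("mean degree:", A.sum() / n, "(rough theory:", round(expected_degree, 1), ")")
print("clustering coefficient:", round(nx.average_clustering(G), 3))
print("mean shortest path (largest component):", round(nx.average_shortest_path_length(giant), 3))
```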
Frontiers in Neuroinformatics | 2018
Tiina Manninen; Jugoslava Acimovic; Riikka Havela; Heidi Teppola; Marja-Leena Linne
The ability to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, thousands of models are available. However, it is rarely possible to reimplement a model based on the information in the original publication alone, let alone rerun it, because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a varied selection of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including models for intracellular signal transduction of neurons and glial cells, as well as single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another, to assess whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tools used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, and enabling the progressive building of advanced computational models and tools, can be achieved only by adopting publishing standards that emphasize replicability and reproducibility of research results.
BMC Neuroscience | 2011
Jugoslava Acimovic; Tuomo Mäki-Marttunen; Marja-Leena Linne
Networks of neurons possess a distinct structural organization that constrains the generated activity patterns and, consequently, the functions of the system. The emergence of the network structure can be understood by studying the rules that govern the growth of neurons and their self-organization into neuronal circuits. We analyze these rules using a computational model of growth developed for dissociated neocortical cultures. Compared to growth in vivo, the cultures represent simplified two-dimensional systems that still possess the intrinsic properties of single neurons, although they lack the natural extracellular environment present in vivo. This setup makes it possible to address in depth selected mechanisms that affect neuronal growth. The collected structural data (through staining and microscopy) and electrophysiological data (using microelectrode arrays) facilitate validation of computational models. Neuronal growth in dissociated cultures has
BMC Neuroscience | 2009
Jugoslava Acimovic; Heidi Teppola; Jyrki Selinummi; Marja-Leena Linne
Neurons cultured in vitro provide a particularly promising experimental system for the analysis of properties, such as information coding, transmission, and learning, that are conventionally associated with biological neural networks. In these systems, isolated cells are placed on top of a recording plate (microelectrode array, MEA), where they spontaneously develop a random connectivity structure. Typical cultures consist of several thousand neurons, and the connectivity density varies from very low at the beginning of an experimental trial to high in mature cultures. In the absence of external stimuli, a culture exhibits a typical pattern of spontaneous activity, alternating intervals of slow spiking and bursting with transition intervals of increasing activity. Spontaneous activity recorded in cultures of rat cortical cells is described in [1,2], and an explanation of the phenomena is proposed in [3]. The behavior in the presence of external stimuli is also reported in the literature; for example, the adaptation exhibited in the presence of frequent and rare stimuli is assessed experimentally and through a computational model in [4]. The present work is related to the previously reported study [3], in which an image-processing algorithm is used to detect structural parameters of cell cultures. A typical result from this study is illustrated in Figure 1: the original image of cultured cells on top of the recording plate is shown in panel A, one of its segments in B, and the result of the applied algorithm in C; the blue pattern in panel C corresponds to cells. This approach enables automated estimation of parameters such as the number of cells or the average density of connections between the cells. Here, we propose a computational model based on the study in [3]. The neural network model is composed of leaky integrate-and-fire neurons connected in a recurrent network, as shown in panel D. The network is fed with quantitative information about the structure of the cell cultures. Such a model, although approximate, captures well the essential properties of the topologies observed in cultures. The presented model is used to reproduce and analyze network behavior observed in the absence of external stimuli. The structural parameters are estimated in different phases of development to closely relate them to the observed behavior. The relation between network topology and behavior is systematically examined throughout this study.
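A minimal sketch, in Python with numpy, of a recurrent leaky integrate-and-fire network whose connection probability is treated as an input parameter, in the spirit of feeding image-derived structural estimates into the model; all parameter values, the noise model, and the instantaneous synapses are illustrative assumptions, not the model used in the study.

```python
# Sketch only: recurrent LIF network driven by membrane-potential noise.
import numpy as np

rng = np.random.default_rng(3)
n, p_conn = 100, 0.1                  # neurons and connection probability (structural input)
dt, t_max = 0.1, 200.0                # time step and duration (ms)
tau_m, v_rest, v_thresh, v_reset = 20.0, -70.0, -50.0, -65.0   # membrane parameters (ms, mV)
w_syn, noise_amp = 1.5, 4.0           # synaptic jump (mV) and noise amplitude (illustrative)

W = w_syn * (rng.random((n, n)) < p_conn)   # W[i, j]: weight from neuron j onto neuron i
np.fill_diagonal(W, 0.0)

v = np.full(n, v_rest)
spike_times, spike_ids = [], []
for step in range(int(t_max / dt)):
    v += dt * (-(v - v_rest) / tau_m) + noise_amp * np.sqrt(dt) * rng.standard_normal(n)
    fired = v >= v_thresh
    if fired.any():
        v[fired] = v_reset
        v += W[:, fired].sum(axis=1)          # instantaneous kicks to postsynaptic targets
        spike_times.extend([step * dt] * int(fired.sum()))
        spike_ids.extend(np.flatnonzero(fired))
print(f"{len(spike_times)} spikes from {n} neurons in {t_max} ms")
```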
BMC Neuroscience | 2015
Jugoslava Acimovic; Tuomo Mäki-Marttunen; Marja-Leena Linne
BMC Neuroscience | 2013
Tuomo Mäki-Marttunen; Jugoslava Acimovic; Keijo Ruohonen; Marja-Leena Linne
Archive | 2011
Jugoslava Acimovic; Tuomo Mäki-Marttunen; Riikka Havela; Heidi Teppola; Marja-Leena Linne