Serafim Rodrigues
Plymouth University
Publications
Featured research published by Serafim Rodrigues.
Philosophical Transactions of the Royal Society A | 2009
Frank Marten; Serafim Rodrigues; Oscar Benjamin; Mark P. Richardson; John R. Terry
In this paper, we introduce a modification of a mean-field model used to describe the brain's electrical activity as recorded via electroencephalography (EEG). The focus of the present study is to understand the mechanisms giving rise to the dynamics observed during absence epilepsy, one of the classical generalized syndromes. A systematic study of the data from a number of different subjects with absence epilepsy demonstrates a wide variety of dynamical phenomena in the recorded EEG. In addition to the classical spike and wave activity, there may be polyspike and wave, wave–spike or even no discernible spike–wave onset during seizure events. The model we introduce is able to capture all of these different phenomena and we describe the bifurcations giving rise to these different types of seizure activity. We argue that such a model may provide a useful clinical tool for classifying different subclasses of absence epilepsy.
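The abstract does not reproduce the model equations. As a purely illustrative orientation, the sketch below integrates a generic excitatory-inhibitory mean-field pair (Wilson-Cowan type) with SciPy; all function names and parameter values are assumptions for demonstration, not the mean-field EEG model analysed in the paper, and these particular values are not claimed to sit in an oscillatory regime.

```python
# Minimal sketch (not the paper's mean-field EEG model): a generic
# excitatory-inhibitory neural-mass pair integrated with SciPy.
# All parameter values are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def sigmoid(v, gain=4.0, thresh=1.0):
    """Sigmoidal firing-rate function used by most mean-field EEG models."""
    return 1.0 / (1.0 + np.exp(-gain * (v - thresh)))

def wilson_cowan(t, state, w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0,
                 tau_e=10.0, tau_i=20.0, p=1.5):
    e, i = state
    de = (-e + sigmoid(w_ee * e - w_ei * i + p)) / tau_e
    di = (-i + sigmoid(w_ie * e - w_ii * i)) / tau_i
    return [de, di]

sol = solve_ivp(wilson_cowan, (0.0, 500.0), [0.1, 0.1], max_step=0.5)
# By convention the excitatory variable e(t) plays the role of the "EEG" trace here.
print(sol.y[0, -5:])
```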
Journal of Computational Neuroscience | 2009
Serafim Rodrigues; David A W Barton; Robert Szalai; Oscar Benjamin; Mark P. Richardson; John R. Terry
In this paper we present a detailed theoretical analysis of the onset of spike-wave activity in a model of human electroencephalogram (EEG) activity, relating this to clinical recordings from patients with absence seizures. We present a complete explanation of the transition from inter-ictal activity to spike and wave using a combination of bifurcation theory, numerical continuation and techniques for detecting the occurrence of inflection points in systems of delay differential equations (DDEs). We demonstrate that the initial transition to oscillatory behaviour occurs as a result of a Hopf bifurcation, whereas the addition of spikes arises as a result of an inflection point of the vector field. Strikingly, these findings are consistent with EEG data recorded from patients with absence seizures, and we present a discussion of the clinical significance of these results, suggesting potential new techniques for detection and anticipation of seizures.
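As a hedged illustration of two ingredients mentioned above (delay differential equations and inflection-point detection), the toy sketch below integrates a scalar DDE with delayed negative feedback and flags inflection points of the solution. The equation and all parameters are assumptions chosen for demonstration; they are not the EEG model studied in the paper.

```python
# Illustrative sketch only: integrate a simple scalar delay differential equation
# with a fixed-step Euler scheme and flag inflection points of the solution as
# sign changes of its second difference.
import numpy as np

dt, tau, t_end = 0.01, 2.0, 100.0
delay_steps = int(tau / dt)
n_steps = int(t_end / dt)

x = np.empty(n_steps + delay_steps)
x[:delay_steps] = 0.5          # constant history on [-tau, 0]

def f(x_now, x_delayed):
    # Delayed negative feedback with a sigmoidal nonlinearity (assumed form).
    return -x_now - 2.0 * np.tanh(x_delayed)

for k in range(delay_steps, n_steps + delay_steps - 1):
    x[k + 1] = x[k] + dt * f(x[k], x[k - delay_steps])

second_diff = np.diff(x, n=2)
inflections = np.where(np.diff(np.sign(second_diff)) != 0)[0]
print(f"{len(inflections)} inflection points detected along the trajectory")
```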
Journal of Mathematical Biology | 2013
Mathieu Desroches; Maciej Krupa; Serafim Rodrigues
A technique is presented, based on the differential geometry of planar curves, to evaluate the excitability threshold of neuronal models. The aim is to determine regions of the phase plane where solutions to the model equations have zero local curvature, thereby defining a zero-curvature (inflection) set that discerns between sub-threshold and spiking electrical activity. This transition can arise through a Hopf bifurcation, via the so-called canard explosion that happens within an exponentially small parameter variation, and this is typical for a large class of planar neuronal models (FitzHugh–Nagumo, reduced Hodgkin–Huxley), namely, type II neurons (resonators). This transition can also correspond to the crossing of the stable manifold of a saddle equilibrium, in the case of type I neurons (integrators). We compute inflection sets and study how well they approximate the excitability threshold of these neuron models, both in the canard and in the non-canard regime, using tools from invariant manifold theory and singularity theory. With the latter, we investigate the topological changes that inflection sets undergo upon parameter variation. Finally, we show that the concept of inflection set gives a good approximation of the threshold in both the so-called resonator and integrator neuronal cases.
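The zero-curvature idea can be sketched concretely for the FitzHugh–Nagumo model mentioned above: trajectories of a planar field (f, g) have zero curvature where f(g_x f + g_y g) - g(f_x f + f_y g) vanishes. The script below contours this set; parameter values and plotting choices are illustrative assumptions, not the paper's computations.

```python
# Sketch of the zero-curvature (inflection) set for the FitzHugh-Nagumo model:
# find points of the phase plane where the curvature of trajectories vanishes.
import numpy as np
import matplotlib.pyplot as plt

a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # illustrative parameter values

v, w = np.meshgrid(np.linspace(-2.5, 2.5, 400), np.linspace(-1.0, 2.0, 400))

# Vector field (f, g) and its partial derivatives.
f = v - v**3 / 3.0 - w + I
g = eps * (v + a - b * w)
f_v, f_w = 1.0 - v**2, -1.0
g_v, g_w = eps, -eps * b

# Curvature of trajectories is proportional to f*(g_v*f + g_w*g) - g*(f_v*f + f_w*g);
# its zero level set is the inflection set separating sub-threshold from spiking orbits.
curvature = f * (g_v * f + g_w * g) - g * (f_v * f + f_w * g)

plt.contour(v, w, curvature, levels=[0.0], colors="k")
plt.xlabel("v (voltage)"); plt.ylabel("w (recovery)")
plt.savefig("fhn_inflection_set.png")
```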
Proceedings of the National Academy of Sciences of the United States of America | 2013
Jesús M. Cortés; Mathieu Desroches; Serafim Rodrigues; Romain Veltz; Miguel A. Muñoz; Terrence J. Sejnowski
Significance: Short-term synaptic plasticity contributes to the balance and regulation of brain networks from milliseconds to several minutes. In this paper we report the existence of a route to chaos in the Tsodyks and Markram model of short-term synaptic plasticity. The chaotic region corresponds to what in mathematics is called Shilnikov chaos, an unstable manifold that strongly modifies the shape of trajectories and induces highly irregular transient dynamics, even in the absence of noise. The interplay between the Shilnikov chaos and stochastic effects may give rise to some of the complex dynamics observed in neural systems such as transitions between up and down states.

Short-term synaptic plasticity strongly affects the neural dynamics of cortical networks. The Tsodyks and Markram (TM) model for short-term synaptic plasticity accurately accounts for a wide range of physiological responses at different types of cortical synapses. Here, we report a route to chaotic behavior via a Shilnikov homoclinic bifurcation that dynamically organizes some of the responses in the TM model. In particular, the presence of such a homoclinic bifurcation strongly affects the shape of the trajectories in the phase space and induces highly irregular transient dynamics; indeed, in the vicinity of the Shilnikov homoclinic bifurcation, the number of population spikes and their precise timing are unpredictable and highly sensitive to the initial conditions. Such an irregular deterministic dynamics has its counterpart in stochastic/network versions of the TM model: The existence of the Shilnikov homoclinic bifurcation generates complex and irregular spiking patterns and—acting as a sort of springboard—facilitates transitions between the down-state and unstable periodic orbits. The interplay between the (deterministic) homoclinic bifurcation and stochastic effects may give rise to some of the complex dynamics observed in neural systems.
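For orientation, the sketch below integrates a Tsodyks–Markram-type mean-field rate model with depression (x) and facilitation (u) variables, in the standard form used in this line of work. The parameter values are assumptions chosen only for demonstration and are not claimed to reproduce the Shilnikov regime reported in the paper.

```python
# Minimal sketch of a Tsodyks-Markram-type mean-field rate model with short-term
# depression (x) and facilitation (u). Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

tau, tau_d, tau_f = 0.013, 0.15, 1.5     # membrane, depression, facilitation (s)
J, U, alpha, I0 = 3.0, 0.3, 1.5, -1.5    # coupling, baseline release, gain, drive

def tm_mean_field(t, state):
    E, x, u = state
    dE = (-E + alpha * np.log1p(np.exp((J * u * x * E + I0) / alpha))) / tau
    dx = (1.0 - x) / tau_d - u * x * E
    du = (U - u) / tau_f + U * (1.0 - u) * E
    return [dE, dx, du]

sol = solve_ivp(tm_mean_field, (0.0, 10.0), [0.1, 0.9, U], max_step=1e-3)
print("final (E, x, u):", sol.y[:, -1])
```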
Biological Cybernetics | 2010
Serafim Rodrigues; Anton V. Chizhov; Frank Marten; John R. Terry
We present two alternative mappings between macroscopic neuronal models and a reduction of a conductance-based model. These provide possible explanations of the relationship between parameters of these two different approaches to modelling neuronal activity. Obtaining a physical interpretation of neural-mass models is of fundamental importance as they could provide direct and accessible tools for use in diagnosing neurological conditions. Detailed consideration of the assumptions required for the validity of each mapping elucidates strengths and weaknesses of each macroscopic model and suggests improvements for future development.
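One concrete ingredient any such mapping has to address is the synaptic response kernel of neural-mass models. The sketch below is a hedged illustration rather than the paper's mapping: it shows the standard alpha-shaped kernel and the equivalent second-order ODE, whose amplitude H and time constant tau are exactly the macroscopic parameters one would want to express in conductance-based terms. The numerical values are assumptions.

```python
# Hedged illustration (not the paper's mapping): the alpha-shaped synaptic kernel
# h(t) = H*(t/tau)*exp(-t/tau) used by many neural-mass models, written both as an
# explicit impulse response and as the equivalent second-order ODE.
import numpy as np

H, tau, dt, T = 3.25, 0.01, 1e-4, 0.1       # assumed PSP amplitude (mV) and time constant (s)
t = np.arange(0.0, T, dt)
h = H * (t / tau) * np.exp(-t / tau)         # impulse response of the synaptic block

# Same response from Euler-type integration of v'' = -(2/tau)*v' - v/tau^2,
# with the impulse at t = 0 setting the initial slope v'(0) = H/tau.
v, vdot = 0.0, H / tau
v_ode = np.empty_like(t)
for k in range(len(t)):
    v_ode[k] = v
    vddot = -(2.0 / tau) * vdot - v / tau**2
    vdot += dt * vddot
    v += dt * vdot

print("max |kernel - ODE| (discretization error only):", np.max(np.abs(h - v_ode)))
```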
Biological Cybernetics | 2010
Serafim Rodrigues; David A W Barton; Frank Marten; Moses Kibuuka; Gonzalo Alarcon; Mark P. Richardson; John R. Terry
In this article, we present a method for tracking changes in curvature of limit cycle solutions that arise due to inflection points. In keeping with previous literature, we term these changes false bifurcations, as they appear to be bifurcations when considering a Poincaré section that is tangent to the solution, but in actual fact the deformation of the solution occurs smoothly as a parameter is varied. These types of solutions arise commonly in electroencephalogram models of absence seizures and correspond to the formation of spikes in these models. Tracking these transitions in parameter space allows regions to be defined corresponding to different types of spike and wave dynamics, which may be of use in clinical neuroscience as a means to classify different subtypes of the more general syndrome.
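A toy numerical illustration of the false-bifurcation idea, using an assumed waveform rather than the EEG model: as a parameter grows, additional inflection points appear smoothly on a periodic signal, mimicking spikes forming on a wave without any genuine bifurcation of the underlying limit cycle.

```python
# Toy illustration of a "false bifurcation": extra inflection points appear smoothly
# on a periodic waveform as a parameter is varied. The waveform is an assumption.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

def inflection_count(a):
    """Number of inflection points per period of x(t) = sin(t) + a*sin(3t)."""
    x = np.sin(t) + a * np.sin(3.0 * t)
    sign_flips = np.diff(np.sign(np.diff(x, n=2))) != 0
    return int(np.count_nonzero(sign_flips))

for a in (0.0, 0.05, 0.1, 0.2, 0.4):
    print(f"a = {a:.2f}: {inflection_count(a)} inflection points per period")
```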
PLOS ONE | 2014
Ildefonso M. De la Fuente; Jesús M. Cortés; Edelmira Valero; Mathieu Desroches; Serafim Rodrigues; Iker Malaina; Luis Martínez
Biochemical energy is the fundamental element that maintains both the adequate turnover of the biomolecular structures and the functional metabolic viability of unicellular organisms. The levels of ATP, ADP and AMP roughly reflect the energetic status of the cell, and a precise ratio relating them was proposed by Atkinson as the adenylate energy charge (AEC). Under growth-phase conditions, cells maintain the AEC within narrow physiological values, despite extremely large fluctuations in the adenine nucleotide concentrations. Intensive experimental studies have shown that these AEC values are preserved in a wide variety of organisms, both eukaryotes and prokaryotes. Here, to understand some of the functional elements involved in the cellular energy status, we present a computational model comprising some key essential parts of the adenylate energy system. Specifically, we have considered (I) the main synthesis process of ATP from ADP, (II) the main catalyzed phosphotransfer reaction for interconversion of ATP, ADP and AMP, (III) the enzymatic hydrolysis of ATP yielding ADP, and (IV) the enzymatic hydrolysis of ATP providing AMP. This leads to a dynamic metabolic model (in the form of a delay differential system) in which the enzymatic rate equations and all the physiological kinetic parameters have been explicitly considered and experimentally tested in vitro. Our central hypothesis is that cells are characterized by changing energy dynamics (homeorhesis). The results show that the AEC presents stable transitions between steady states and periodic oscillations and, in agreement with experimental data, these oscillations remain within the narrow AEC window. Furthermore, the model shows sustained oscillations in the Gibbs free energy and in the total nucleotide pool. The present study provides a step forward towards the understanding of the fundamental principles and quantitative laws governing the adenylate energy system, which is a fundamental element for unveiling the dynamics of cellular life.
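Atkinson's adenylate energy charge itself is a simple ratio, AEC = ([ATP] + ½[ADP]) / ([ATP] + [ADP] + [AMP]). The snippet below computes it for a few made-up nucleotide pools, purely to fix the definition used throughout the paper.

```python
# The adenylate energy charge (AEC) as defined by Atkinson, computed for a few
# illustrative nucleotide pools (concentrations are made-up example values, in mM).
def adenylate_energy_charge(atp, adp, amp):
    """AEC = (ATP + 0.5*ADP) / (ATP + ADP + AMP), ranging from 0 to 1."""
    return (atp + 0.5 * adp) / (atp + adp + amp)

for atp, adp, amp in [(3.0, 1.0, 0.2), (2.0, 1.5, 0.8), (1.0, 1.0, 1.5)]:
    print(f"ATP={atp}, ADP={adp}, AMP={amp} -> AEC = "
          f"{adenylate_energy_charge(atp, adp, amp):.2f}")
```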
Frontiers in Computational Neuroscience | 2013
Peter beim Graben; Serafim Rodrigues
We present a biophysical approach for the coupling of neural network activity, as resulting from proper dipole currents of cortical pyramidal neurons, to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential (DFP) that contributes to the local field potential (LFP) of a neural population. This work is consistent with the widespread dipole assumption motivated by the “open-field” configuration of the DFP around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire (LIF) models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. (2008), and conclude that our biophysically motivated approach yields a substantial improvement.
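To make the comparison concrete without reproducing the derivation, the toy sketch below builds a current-based LFP-like proxy from filtered excitatory and inhibitory inputs, in the spirit of the ad hoc proxies the paper compares against. It is explicitly not the dendritic-dipole observation model derived from the three-compartment neuron, and all rates and time constants are assumptions.

```python
# Toy sketch only: an LFP-like observable formed from excitatory and inhibitory
# synaptic current traces (here, exponentially filtered Poisson inputs).
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 1.0                          # 0.1 ms resolution, 1 s of activity
n = int(T / dt)
rate_e, rate_i = 2000.0, 1500.0            # presynaptic population rates (Hz), assumed
tau_e, tau_i = 2e-3, 6e-3                  # synaptic time constants (s), assumed

def filtered_poisson(rate, tau):
    """Exponentially filtered Poisson spike count standing in for a synaptic current."""
    spikes = rng.poisson(rate * dt, size=n).astype(float)
    current = np.empty(n)
    c = 0.0
    for k in range(n):
        c += -dt * c / tau + spikes[k]
        current[k] = c
    return current

i_exc = filtered_poisson(rate_e, tau_e)
i_inh = filtered_poisson(rate_i, tau_i)
lfp_proxy = np.abs(i_exc) + np.abs(i_inh)  # one common current-based LFP proxy
print("proxy mean/std:", lfp_proxy.mean(), lfp_proxy.std())
```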
SIAM Review | 2016
Mathieu Desroches; Antoni Guillamon; Enrique Ponce; Rafael Prohens; Serafim Rodrigues; Antonio E. Teruel
Canard-induced phenomena have been extensively studied in the last three decades, from both the mathematical and the application viewpoints. Canards in slow-fast systems with (at least) two slow variables, especially near folded-node singularities, give an essential generating mechanism for mixed-mode oscillations (MMOs) in the framework of smooth multiple timescale systems. There is a wealth of literature on such slow-fast dynamical systems and many models displaying canard-induced MMOs, particularly in neuroscience. In parallel, since the late 1990s several papers have shown that the canard phenomenon can be faithfully reproduced with piecewise-linear (PWL) systems in two dimensions, although very few results are available in the three-dimensional case. The present paper aims to bridge this gap by analyzing canonical PWL systems that display folded singularities, primary and secondary canards, with a similar control of the maximal winding number as in the smooth case. We also show that the singular phase portraits are compatible in both frameworks. Finally, we show using an example how to construct a (linear) global return and obtain robust PWL MMOs.
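As a hedged two-dimensional illustration of the PWL set-up (the paper's actual results concern three-dimensional systems with folded singularities and MMOs), the sketch below replaces the cubic fast nullcline of a van der Pol-type slow-fast system with three linear pieces and integrates the resulting dynamics. All values are assumptions for demonstration.

```python
# Minimal 2D sketch: a piecewise-linear (PWL) caricature of a van der Pol-type
# slow-fast system, with the cubic fast nullcline replaced by three linear pieces.
import numpy as np
from scipy.integrate import solve_ivp

eps, a = 0.05, 0.0   # timescale separation and slow-nullcline position (illustrative)

def f_pwl(x):
    """Three-piece linear caricature of the cubic fast nullcline y = f(x)."""
    if x <= -1.0:
        return x + 2.0
    if x >= 1.0:
        return x - 2.0
    return -x           # repelling middle branch

def pwl_slow_fast(t, state):
    x, y = state
    return [(y - f_pwl(x)) / eps, a - x]   # fast x, slow y

sol = solve_ivp(pwl_slow_fast, (0.0, 40.0), [0.5, 0.0], max_step=1e-3)
print("x range over the relaxation cycle:", sol.y[0].min(), sol.y[0].max())
```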
Neural Networks | 2017
Giovanni Sirio Carmantini; Peter beim Graben; Mathieu Desroches; Serafim Rodrigues
Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, which we would like to refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vector space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.
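The Gödelization step can be illustrated with a short toy script: a dotted symbol sequence over a finite alphabet is mapped to a point in the unit square, and the shift on sequences becomes a simple map on that point. The alphabet and sequences below are assumptions chosen only for demonstration, not the encodings used in the paper.

```python
# Sketch of the Goedelization idea behind nonlinear dynamical automata: a dotted
# symbol sequence over a finite alphabet is encoded as a point in the unit square
# (left half -> x, right half -> y via b-adic expansions), and the left shift
# becomes a simple arithmetic map on that point.
alphabet = {"a": 0, "b": 1, "c": 2}
b = len(alphabet)

def godelize(left, right):
    """Encode (left tape read towards the dot, right tape) as a point in [0, 1)^2."""
    x = sum(alphabet[s] * b ** -(k + 1) for k, s in enumerate(reversed(left)))
    y = sum(alphabet[s] * b ** -(k + 1) for k, s in enumerate(right))
    return x, y

def shift(left, right):
    """Move the dot one symbol to the right: ...l . s r... -> ...l s . r..."""
    return left + right[:1], right[1:]

left, right = list("ab"), list("cab")
for _ in range(3):
    print(godelize(left, right))
    left, right = shift(left, right)
```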