Richard R. Carrillo
University of Granada
Publications
Featured research published by Richard R. Carrillo.
Neural Computation | 2006
Eduardo Ros; Richard R. Carrillo; Eva M. Ortigosa; Boris Barbour; Rodrigo Agís
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
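The core idea can be illustrated with a minimal sketch (not the actual ED-LUT implementation): the free evolution of each neuron between events is read from a precomputed look-up table indexed by elapsed time, and events are dispatched from a priority queue. The single-queue structure, the single-exponential decay table and all parameter values below are illustrative assumptions.

```python
import heapq
import numpy as np

# Illustrative assumption: a leaky integrate-and-fire cell whose free decay
# between events is tabulated once, so per-event updates reduce to look-ups.
TAU_M, DT_TABLE, T_MAX = 20.0, 0.1, 200.0              # ms
DECAY_TABLE = np.exp(-np.arange(0.0, T_MAX, DT_TABLE) / TAU_M)

def decayed(v, elapsed_ms):
    """Membrane state after 'elapsed_ms' of silence, read from the table."""
    idx = min(int(elapsed_ms / DT_TABLE), len(DECAY_TABLE) - 1)
    return v * DECAY_TABLE[idx]

class Neuron:
    def __init__(self):
        self.v, self.last_update = 0.0, 0.0

def simulate(external_spikes, connections, threshold=1.0, delay_ms=1.0, weight=0.3):
    """external_spikes: list of (time_ms, neuron_id); connections: id -> list of targets."""
    neurons, fired = {}, []
    queue = list(external_spikes)          # single global queue (the paper's
    heapq.heapify(queue)                   # two-stage queue is not reproduced here)
    while queue:
        t, nid = heapq.heappop(queue)      # always process the earliest event
        cell = neurons.setdefault(nid, Neuron())
        cell.v = decayed(cell.v, t - cell.last_update) + weight
        cell.last_update = t
        if cell.v >= threshold:            # spike: reset and propagate with delay
            cell.v = 0.0
            fired.append((t, nid))
            for target in connections.get(nid, []):
                heapq.heappush(queue, (t + delay_ms, target))
    return fired

print(simulate([(0.0, 0), (0.5, 0), (1.0, 0), (1.5, 0)], {0: [1], 1: []}))
```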
IEEE Transactions on Neural Networks | 2006
Eduardo Ros; Eva M. Ortigosa; Rodrigo Agís; Richard R. Carrillo; Michael Arnold
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements are obtained by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware, and its scalability and performance are evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
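As an illustration of why this kind of model resists event-driven treatment, a minimal time-driven sketch of exponential conductance-based synaptic integration (forward Euler, with invented parameter values; not the hardware model itself) might look as follows.

```python
# Toy time-driven update (forward Euler) of one cell with an exponential
# synaptic conductance: each input spike increments g, which then decays with
# TAU_SYN, so charge is injected gradually rather than instantaneously.
# Every parameter value here is illustrative, not taken from the hardware model.
DT, TAU_M, TAU_SYN = 0.1, 10.0, 5.0        # ms
E_SYN, V_REST, V_TH = 0.0, -70.0, -54.0    # mV

def run(input_spike_times_ms, t_stop_ms=100.0, g_step=0.3):
    v, g, out_spikes = V_REST, 0.0, []
    inputs = {int(round(t / DT)) for t in input_spike_times_ms}
    for step in range(int(t_stop_ms / DT)):
        if step in inputs:
            g += g_step                                   # presynaptic spike arrives
        g += DT * (-g / TAU_SYN)                          # conductance decays
        dv = (-(v - V_REST) - g * (v - E_SYN)) / TAU_M    # leak + synaptic current
        v += DT * dv
        if v >= V_TH:                                     # threshold crossing: spike
            out_spikes.append(round(step * DT, 1))
            v = V_REST
    return out_spikes

print(run([5.0, 7.0, 9.0, 11.0, 13.0]))
```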
BioSystems | 2008
Richard R. Carrillo; Eduardo Ros; Christian Boucheny; Olivier J.-M. D. Coenen
We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, thereby abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
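One way to realize the probabilistic low-frequency error encoding described above is sketched below; the rate ceiling, scaling and function names are illustrative assumptions, not the model's actual parameters.

```python
import numpy as np

# Sketch: map an analog error signal onto sparse, probabilistic inferior-olive
# spikes whose rate never exceeds a low ceiling, so the error is conveyed
# statistically across trials rather than by any single spike.
rng = np.random.default_rng(0)
DT = 0.001              # s
MAX_IO_RATE = 10.0      # Hz, illustrative ceiling for the teaching signal

def io_spikes(error_trace, error_scale=1.0):
    """error_trace: array of |error| samples at DT resolution -> boolean spike train."""
    p = np.clip(np.abs(error_trace) / error_scale, 0.0, 1.0) * MAX_IO_RATE * DT
    return rng.random(len(error_trace)) < p

t = np.arange(0.0, 2.0, DT)
error = 0.5 * np.sin(2 * np.pi * t) ** 2                 # toy tracking error
spikes = io_spikes(error)
print(f"{spikes.sum()} IO spikes in 2 s (mean rate {spikes.sum() / 2.0:.1f} Hz)")
```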
International Journal of Neural Systems | 2011
Niceto R. Luque; Jesús Alberto Garrido; Richard R. Carrillo; Silvia Tolu; Eduardo Ros
This work evaluates the capability of a spiking cerebellar model embedded in different loop architectures (recurrent, forward, and forward & recurrent) to control a robotic arm (three degrees of freedom) using a biologically inspired approach. The implemented spiking network relies on synaptic plasticity (long-term potentiation and long-term depression) to adapt and cope with perturbations in the manipulation scenario: changes in dynamics and kinematics of the simulated robot. Furthermore, the effect of several levels of noise in the cerebellar input pathway (mossy fibers) was assessed for each control architecture. The implemented cerebellar model managed to adapt, in all three control architectures, to different dynamics and kinematics, providing corrective actions for more accurate movements. According to the obtained results, coupling the two control architectures (forward & recurrent) provides the benefits of both and leads to higher robustness against noise.
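A schematic sketch of the two loop placements compared above is given below, with the cerebellar module reduced to a single adaptive gain and a one-joint toy plant; every name and constant is an illustrative assumption rather than the controller used in the study.

```python
# The cerebellar module is reduced to a single adaptive gain; 'plant' is a toy
# one-joint system with a deliberately mismatched crude inverse model.
class ToyCerebellum:
    def __init__(self, lr=0.05):
        self.gain, self.lr = 0.0, lr

    def output(self, signal):
        return self.gain * signal

    def adapt(self, error, signal):
        self.gain += self.lr * error * signal        # crude LMS-style update

def forward_step(cb, crude_inverse, plant, q_des, q):
    u = crude_inverse(q_des) + cb.output(q_des)      # correction added to the command
    q_next = plant(q, u)
    cb.adapt(q_des - q_next, q_des)
    return q_next

def recurrent_step(cb, crude_inverse, plant, q_des, q):
    u = crude_inverse(q_des + cb.output(q_des - q))  # correction applied to the reference
    q_next = plant(q, u)
    cb.adapt(q_des - q_next, q_des - q)
    return q_next

plant = lambda q, u: q + 0.1 * (0.7 * u - q)         # toy dynamics (mismatch factor 0.7)
crude_inverse = lambda q_des: q_des
cb_f, cb_r, qf, qr = ToyCerebellum(), ToyCerebellum(), 0.0, 0.0
for _ in range(200):
    qf = forward_step(cb_f, crude_inverse, plant, 1.0, qf)
    qr = recurrent_step(cb_r, crude_inverse, plant, 1.0, qr)
print(f"forward: q={qf:.3f}  recurrent: q={qr:.3f}  (target 1.0)")
```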
IEEE Transactions on Image Processing | 2007
Javier Díaz; Eduardo Ros; Richard R. Carrillo; Alberto Prieto
We present the hardware implementation of a simple, fast technique for depth estimation based on phase measurement. This technique avoids the problem of phase warping and is much less susceptible to camera noise and distortion than standard block-matching stereo systems. The architecture exploits the parallel computing resources of FPGA devices to achieve a computation speed of 65 megapixels per second. For this purpose, we have designed a fine-grain pipeline structure that can be arranged with a customized frame-grabber module to process 52 frames per second at a resolution of 1280×960 pixels. We have measured the system's degradation due to bit quantization errors and compared its performance with previous approaches. We have also used different Gabor-scale circuits, which can be selected by the user according to the application addressed and the typical image structure of the target scenario.
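A software sketch of one common phase-based formulation (left-right phase difference from a complex Gabor filter divided by the filter's peak frequency) is shown below; the paper's exact formulation and, of course, the FPGA pipeline are not reproduced, and all constants are illustrative.

```python
import numpy as np

def gabor_response(row, freq=0.25, sigma=4.0):
    """Complex Gabor filtering of a 1D image row (illustrative kernel size)."""
    x = np.arange(-15, 16)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.convolve(row, kernel, mode="same")

def phase_disparity(left_row, right_row, freq=0.25):
    """Disparity in pixels from the wrapped left-right local phase difference."""
    dphi = np.angle(np.exp(1j * (np.angle(gabor_response(left_row, freq))
                                 - np.angle(gabor_response(right_row, freq)))))
    return dphi / (2 * np.pi * freq)

# Toy stereo pair: the right row is the left row shifted by one pixel, plus noise.
rng = np.random.default_rng(1)
left = np.sin(2 * np.pi * 0.25 * np.arange(128)) + 0.1 * rng.standard_normal(128)
right = np.roll(left, 1)
print(np.median(phase_disparity(left, right)[20:-20]))   # ~1 pixel expected
```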
PLOS ONE | 2014
Claudia Casellato; Alberto Antonietti; Jesús Alberto Garrido; Richard R. Carrillo; Niceto R. Luque; Eduardo Ros; Alessandra Pedrocchi; Egidio D'Angelo
The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.
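The encoding/decoding idea can be sketched as follows: an arbitrary scalar context is mapped onto mossy-fiber firing rates through Gaussian receptive fields, and a signed motor correction is decoded from two antagonist DCN populations. Shapes, gains and names are toy assumptions, not the protocol used in the experiments.

```python
import numpy as np

# Toy values throughout: 50 mossy fibers with Gaussian receptive fields tiling a
# normalized context variable, and two antagonist DCN populations whose rate
# difference is read out as a signed motor correction.
N_MF, RATE_MAX = 50, 100.0                    # fibers, Hz
centers = np.linspace(0.0, 1.0, N_MF)

def encode_context(x, width=0.05):
    """Scalar context in [0, 1] -> vector of mossy-fiber firing rates (Hz)."""
    return RATE_MAX * np.exp(-(x - centers) ** 2 / (2 * width ** 2))

def decode_correction(dcn_pos_rate_hz, dcn_neg_rate_hz, gain=0.01):
    """Antagonist DCN rates -> signed motor correction (arbitrary units)."""
    return gain * (dcn_pos_rate_hz - dcn_neg_rate_hz)

mf_rates = encode_context(0.3)
print(f"fibers responding above 1 Hz: {(mf_rates > 1.0).sum()} / {N_MF}")
print(f"decoded correction: {decode_correction(42.0, 30.0):+.2f}")
```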
International Conference on Artificial Neural Networks | 2005
Christian Boucheny; Richard R. Carrillo; Eduardo Ros; Olivier J.-M. D. Coenen
A spiking neural network modeling the cerebellum is presented. The model, consisting of more than 2000 conductance-based neurons and more than 50,000 synapses, runs in real time on a dual-processor computer. The model is implemented on an event-driven spiking neural network simulator with table-based conductance and voltage computations. The cerebellar model interacts every millisecond with a time-driven simulation of a simple environment in which adaptation experiments are set up. Learning is achieved in real time using spike-timing-dependent plasticity rules, which drive synaptic weight changes depending on the neurons' activity and the timing in the spiking representation of an error signal. The cerebellar model is tested on learning to continuously predict a target position moving along periodical trajectories. This setup reproduces experiments with primates learning the smooth pursuit of visual targets on a screen. The model learns effectively and concurrently different target trajectories. This is true even though the spiking rate of the error representation is very low, reproducing physiological conditions. Hence, we present a complete physiologically relevant spiking cerebellar model that runs and learns in real time under realistic conditions reproducing psychophysical experiments. This work was funded in part by the EC SpikeFORCE project (IST-2001-35271, www.spikeforce.org).
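An error-driven STDP rule of the kind described above might be sketched as follows, with each inferior-olive spike depressing recently active parallel-fiber synapses and a small non-associative potentiation otherwise; the kernel shape and constants are assumptions.

```python
import numpy as np

# Each inferior-olive (error) spike depresses parallel-fiber synapses in
# proportion to a decaying eligibility trace of their recent activity; every PF
# spike also adds a small fixed potentiation. Constants are illustrative.
TAU_ELIG, LTD, LTP = 50.0, 0.008, 0.0005      # ms, weight steps

def apply_stdp(weights, pf_spike_times, io_spike_times):
    """pf_spike_times: per-synapse lists of PF spike times (ms); io_spike_times: IO spikes."""
    w = np.array(weights, dtype=float)
    for t_io in io_spike_times:
        for i, pf_times in enumerate(pf_spike_times):
            elig = sum(np.exp(-(t_io - t_pf) / TAU_ELIG)
                       for t_pf in pf_times if t_pf <= t_io)
            w[i] -= LTD * elig                # associative depression
    for i, pf_times in enumerate(pf_spike_times):
        w[i] += LTP * len(pf_times)           # non-associative potentiation
    return np.clip(w, 0.0, 1.0)

pf = [[90.0, 95.0], [400.0]]                  # synapse 0 is active just before the error
io = [100.0]
print(apply_stdp([0.5, 0.5], pf, io))         # synapse 0 ends up more depressed
```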
Frontiers in Computational Neuroscience | 2014
Niceto R. Luque; Jesús Alberto Garrido; Richard R. Carrillo; Egidio D'Angelo; Eduardo Ros
The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.
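The effect of adding a second, faster learning site can be caricatured with a toy two-weight model (below); the dynamics are invented and only meant to illustrate why a fast IO-DCN term would accelerate convergence without removing the slow pathway.

```python
# A slow weight stands in for the classical pathway and a fast weight for the
# proposed IO-DCN connection; both are driven by the same error. The point is
# only that the extra fast site shortens convergence, not the actual dynamics.
def run(lr_slow=0.002, lr_io_dcn=0.0, target=1.0, trials=400):
    w_slow, w_io, errors = 0.0, 0.0, []
    for _ in range(trials):
        err = target - (w_slow + w_io)
        errors.append(abs(err))
        w_slow += lr_slow * err
        w_io += lr_io_dcn * err
    return errors

baseline = run(lr_io_dcn=0.0)
with_io_dcn = run(lr_io_dcn=0.05)
halve = lambda errs: next(i for i, e in enumerate(errs) if e < 0.5)
print(f"trials to halve the error: {halve(baseline)} vs {halve(with_io_dcn)}")
```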
IEEE Transactions on Neural Networks and Learning Systems | 2015
Francisco Naveros; Niceto R. Luque; Jesús Alberto Garrido; Richard R. Carrillo; Mancia Anguita; Eduardo Ros
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphics processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously across small and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
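The partitioning policy described above can be sketched as a simple dispatch decision; the Population fields, thresholds and back-end labels are illustrative and do not correspond to the simulator's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Population:
    name: str
    n_neurons: int
    mean_rate_hz: float        # expected firing rate

def choose_backend(pop, rate_threshold_hz=5.0, gpu_size_threshold=10_000):
    if pop.mean_rate_hz < rate_threshold_hz:
        return "CPU, event-driven"         # sparse activity: pay only per spike
    if pop.n_neurons >= gpu_size_threshold:
        return "GPU, time-driven"          # dense and large: amortize fixed-step cost
    return "CPU, time-driven"              # dense activity but few neurons

network = [
    Population("granule cells", 1_000_000, 1.0),
    Population("Golgi cells", 1_000, 8.0),
    Population("Purkinje cells", 200, 40.0),
]
for pop in network:
    print(f"{pop.name:>15}: {choose_backend(pop)}")
```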
Applied Reconfigurable Computing | 2007
Rodrigo Agís; Eduardo Ros; Javier Díaz; Richard R. Carrillo; Eva M. Ortigosa
The efficient simulation of spiking neural networks (SNN) remains an open challenge. Current SNN computing engines are still far from simulating systems of millions of neurons efficiently. This contribution describes a computing scheme that takes full advantage of the massive parallel processing resources available at FPGA devices. The computing engine adopts an event-driven simulation scheme and an efficient next-event-to-go searching method to achieve high performance. We have designed a pipelined datapath in order to compute several events in parallel, avoiding idle computing resources. The system is able to compute approximately 2.5 million spikes per second. The whole computing machine is composed only of an FPGA device and five external memory SRAM chips. Therefore, the presented approach is of high interest for simulation experiments that require embedded simulation engines (for instance, in robotic experiments with autonomous agents).
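A software analogue of the next-event-to-go search is sketched below; the FPGA performs the comparison across candidate events in parallel, whereas this sketch simply reduces over one pending list per source. Event fields and the toy network are assumptions.

```python
def next_event_to_go(pending):
    """pending: dict source_id -> time-ordered list of (time_ms, target) events."""
    candidates = {src: evts[0] for src, evts in pending.items() if evts}
    if not candidates:
        return None
    src = min(candidates, key=lambda s: candidates[s][0])   # earliest pending event
    return src, pending[src].pop(0)

pending = {
    0: [(1.7, 12), (4.0, 12)],     # (spike delivery time in ms, target neuron)
    1: [(0.9, 45)],
    2: [(3.2, 7)],
}
order = []
while (picked := next_event_to_go(pending)) is not None:
    src, (t, target) = picked
    order.append((t, src, target))
print(order)                       # events come out in global time order
```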