Esin Yavuz
University of Sussex
Publications
Featured research published by Esin Yavuz.
Scientific Reports | 2016
Esin Yavuz; James Turner; Thomas Nowotny
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
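As an illustration of the interface described in this abstract, the following is a minimal sketch of what a GeNN model description looks like. It is based on the C++ model-definition interface documented on the GeNN project website; exact class names, built-in model names and parameter conventions differ between GeNN releases, so the identifiers here should be read as indicative rather than definitive.

```cpp
// Minimal GeNN-style model description (sketch; identifiers follow the
// documented GeNN C++ model-definition interface and may differ between releases).
#include "modelSpec.h"

void modelDefinition(ModelSpec &model)
{
    model.setDT(0.1);                 // simulation time step in ms
    model.setName("izhikevich_net");

    // Parameters (a, b, c, d) and initial state (V, U) for Izhikevich neurons
    NeuronModels::Izhikevich::ParamValues params(0.02, 0.2, -65.0, 8.0);
    NeuronModels::Izhikevich::VarValues init(-65.0, -13.0);

    // One population of 1000 neurons; from this description GeNN generates the
    // CUDA/C++ code needed to integrate the model on the GPU.
    model.addNeuronPopulation<NeuronModels::Izhikevich>("Pop", 1000, params, init);
}
```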
BMC Neuroscience | 2014
Thomas Nowotny; Alexander J Cope; Esin Yavuz; Marcel Stimberg; Dan F. M. Goodman; James A. R. Marshall; Kevin N. Gurney
Background: The GPU-enhanced Neuronal Networks (GeNN) framework [1,2] was introduced in 2011 to facilitate the efficient use of graphical processing units (GPUs) as accelerators for neuronal network simulations, in particular as part of computational neuroscience investigations. GeNN is based substantially on code generation for the NVIDIA CUDA application programming interface. Code generation provides decisive advantages over stand-alone simulators in that (i) code can be optimized both for every individual model and for the specific GPU hardware detected at compile time, and (ii) code generation makes it possible to provide a practically limitless number of pre-defined models while the generated simulation code remains as small and efficient as possible. While GeNN is an important step towards facilitating the use of GPU acceleration for computational neuroscience applications, it has been designed with expert users in mind. Particular emphasis has been put on flexibility and extensibility and, in case of conflict, these were prioritized over ease of use. In the work presented here we aim to make GeNN and the corresponding GPU acceleration available to non-expert users as well, by providing two new interfaces to GeNN, from SpineCreator [3]/SpineML [4] and from the Brian 2 simulator [5,6].
BMC Neuroscience | 2014
Esin Yavuz; James Turner; Thomas Nowotny
A major challenge in computational neuroscience is to achieve high performance for real-time simulations of full size brain networks. Recent advances in GPU technology provide massively parallel, low-cost and efficient hardware that is widely available on the computer market. However, the comparatively low-level programming that is necessary to create an efficient GPU-compatible implementation of neuronal network simulations can be challenging, even for otherwise experienced programmers. To resolve this problem a number of tools for simulating spiking neural networks (SNN) on GPUs have been developed [1,2], but using a particular simulator usually comes with restrictions to particular supported neuron models, synapse models or connectivity schemes. Besides being inconvenient, this can unduly influence the path of scientific enquiry. Here we present GeNN (GPU-enhanced neuronal networks), which builds on NVIDIA's common unified device architecture (CUDA) to enable a more flexible framework. CUDA allows programmers to write C-like code and execute it on NVIDIA's massively parallel GPUs. However, in order to achieve good performance, it is critical but not trivial to make the right choices on how to parallelize a computational problem, organize its data in memory and optimize the memory access patterns. GeNN is based on the idea that much of this optimization can be cast into heuristics that allow the GeNN meta-compiler to generate optimized GPU code from a basic description of the neuronal network model in a minimal domain-specific language of C function calls. For further simplification, this description may also be obtained by translating variables, dynamical equations and parameters from an external simulator into GeNN input files. We are developing this approach for the Brian 2 [3] and SpineCreator/SpineML [4] systems. Using a code generation approach in GeNN has important advantages: (1) a large number of different neuron and synapse models can be provided without performance losses in the final simulation code; (2) the generated simulator code can be optimized for the available GPU hardware and for the specific model; (3) the framework is intrinsically extensible: new GPU optimization strategies, including strategies of other simulators, can be added to the generated code for situations where they are effective. The first release version of GeNN is available at http://sourceforge.net/projects/genn. It has been built and optimized for simulating neuronal networks with an anatomical structure (separate neuron populations with sparse or dense connection patterns, with the possibility to use some common learning rules). We have executed performance and scalability tests on an NVIDIA Tesla C2070 GPU with an Intel Xeon(R) E5-2609 CPU running Ubuntu 12.04 LTS. Our results show that, as the network size increases, GPU simulations reliably outperform CPU simulations. However, we are also able to demonstrate the performance limits of using GPUs with GeNN under different scenarios of network connectivity, learning rules and simulation parameters, confirming that the benefit of GPU acceleration can differ greatly depending on the particular details of the model of interest.
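To complement the model-definition sketch given above, the following outlines how the code generated by GeNN is typically driven from ordinary user-side C++ code. The header and function names follow the conventions of GeNN's generated runner (per-model header, allocation, initialisation, time stepping and per-population pull functions) but vary between GeNN releases, so this is a sketch rather than a definitive listing.

```cpp
// Sketch of user-side C++ code driving the code generated by GeNN for the
// model defined earlier. Header and function names follow GeNN's generated-
// runner conventions and differ between releases; treat them as indicative.
#include "izhikevich_net_CODE/definitions.h"  // generated by GeNN at build time

int main()
{
    allocateMem();   // allocate host and device memory for all state variables
    initialize();    // initialise variables and upload them to the GPU

    // Advance the simulation; each step launches the generated CUDA kernels
    // for the neuron and synapse updates. The global time t is provided by
    // the generated code and advances by DT per step.
    while (t < 1000.0f) {
        stepTime();
    }

    pullPopStateFromDevice();  // copy the state of population "Pop" back to the host
    freeMem();
    return 0;
}
```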
international conference on artificial neural networks | 2016
Esin Yavuz; Thomas Nowotny
Animals use various strategies for learning stimulus-reward associations. Computational methods that mimic animal behaviour most commonly interpret learning as a high level phenomenon, in which the pairing of stimulus and reward leads to plastic changes in the final output layers where action selection takes place. Here, we present an alternative input-modulation strategy for forming simple stimulus-response associations based on reward. Our model is motivated by experimental evidence on modulation of early brain regions by reward signalling in the honeybee. The model can successfully discriminate dissimilar odours and generalise across similar odours, like bees do. In the most simplified connectionist description, the new input-modulation learning is shown to be asymptotically equivalent to the standard perceptron.
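For orientation, the standard perceptron rule that the paper uses as its point of comparison updates the output weights according to the textbook form below; the paper's input-modulation rule instead adapts gains on the input side of the network and is shown to be asymptotically equivalent (its exact form is given in the paper and is not reproduced here).

$$\Delta w_i = \eta\,(r - y)\,x_i,$$

where $x_i$ is the $i$-th input, $y$ the network output, $r$ the reward-defined target and $\eta$ the learning rate.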
BMC Neuroscience | 2015
Esin Yavuz; Pascale Maul; Thomas Nowotny
Honeybees can learn and perform complex behavioral tasks despite their small brains that contain fewer than a million neurons. At the same time they are accessible to physiological experiments, and the relatively small number of neurons in their brain lends itself to quite detailed numerical simulations. Bees therefore are a good model system for studying sensory cognition and reinforcement learning. We have shown in earlier work [1] that the anatomy and known electrophysiological properties of the olfactory pathway of insects, in combination with spike-timing dependent plasticity (STDP) and lateral inhibition, lend themselves to an unsupervised self-organization of synaptic connections for the recognition of odors. Here we extend this model by adding mechanisms of reinforcement learning, as suggested by [2] (see Figure 1). We employ a three-factor learning rule where plasticity is governed by pre-synaptic and post-synaptic activity and a global octopaminergic/dopaminergic reinforcement signal, triggered by a reward. We investigated the role of feed-forward and feedback mechanisms, as well as the role of the connectivity initially achieved by unsupervised STDP.
Figure 1: Network diagram for the hypothesized model of reinforcement learning in the honeybee olfactory system. Excitatory connections are shown in black, inhibitory connections in blue and learning synapses in red. Grey arrows represent the abstractions modeled ...
Our model is implemented in the GeNN [3] framework, which facilitates the use of GPUs for spiking neural network simulations using a code generation framework. Because of the massive parallelism provided by GPUs, we can simulate tens of thousands of neurons in real time in the sparse firing regime relevant here. We investigated optimization strategies and neuron and synapse model choices for a better performance on the GPU. The model presented here is a stepping-stone to more sophisticated learning models and multi-sensory integration in the Green Brain Project [4], in which we aim to control a flying robot with a simulation of learning and decision making mechanisms in the honeybee related both to the olfactory and visual pathways.
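For context, a three-factor rule of the kind referred to above is commonly written in the textbook eligibility-trace form below; this generic form is given only for orientation and is not necessarily the exact rule used in the model, which is specified in the paper.

$$\frac{de_{ij}}{dt} = -\frac{e_{ij}}{\tau_e} + \mathrm{pre}_j(t)\,\mathrm{post}_i(t), \qquad \frac{dw_{ij}}{dt} = \eta\, R(t)\, e_{ij}(t),$$

where $e_{ij}$ is an eligibility trace driven by coincident pre- and postsynaptic activity and $R(t)$ is the global octopaminergic/dopaminergic reward signal.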
BMC Neuroscience | 2015
Thomas Nowotny; James Turner; Esin Yavuz
Background: GeNN (GPU-enhanced Neuronal Networks) [1,2] is a software framework that was designed to facilitate the use of GPUs (Graphics Processing Units) for the simulation of spiking neuronal networks. It is built on top of the CUDA (Common Unified Device Architecture) [3] application programming interface provided by NVIDIA Corporation and is entirely based on code generation: users provide a compact description of a spiking neuronal network model and GeNN generates CUDA and C++ code to simulate it, also taking into account the specifics of the GPU hardware detected at compile time.
Flavour | 2014
Esin Yavuz; Thomas Nowotny
We present an example implementation of a minimal model of the honeybee olfactory system on massively parallel GPU hardware using GPU-specific code generation with GeNN [1]. This will be a first step towards providing a physiologically coherent model of the honeybee olfactory system to be implemented in real time on flying autonomous robots for the "Green Brain Project". The "Green Brain Project" will combine computational neuroscience modelling, learning and decision theory, modern parallel computing methods and robotics with data from state-of-the-art neurobiological experiments on cognition in the honeybee Apis mellifera, to build and deploy a modular model of the honeybee brain describing detection, classification and learning in the olfactory and optic pathways, as well as multi-sensory integration across these sensory modalities. Unlike other brain models, which use expensive traditional supercomputing resources, the 'Green Brain' will be implemented on massively parallel, but affordable GPU technology. The 'Green Brain' will be deployed for the real-time control of a flying robot able to sense and act autonomously. This robot testbed will be used to demonstrate the development of new biomimetic control algorithms for artificial intelligence and robotics applications. The objective for modelling olfaction in the "Green Brain Project" will be to extend previous attempts to model the antennal lobes and their constituent glomeruli (which encode olfactory cues), the projection neurons and the mushroom bodies. Odours are known to have a distributed representation in the antennal lobe, encoded as differential activation levels of glomerular populations. Odour mixtures are represented as a non-trivial combination of the constituent odours' representations, and the formation of long-term memories associated with such odour mixtures has been shown to induce volume changes in glomeruli, indicating a cross-inhibitory effect between neural codings [2]. The modelling will also consider how mechanisms might implement known classification rules, such as in the models of insect olfactory classification by Huerta et al. [3] and Nowotny et al. [4]. In this study, we present some benchmarking results. We perform performance and scalability tests on an NVIDIA Tesla C2070 GPU with an Intel Xeon E5-2609 CPU.
conference on biomimetic and biohybrid systems | 2013
Alex Cope; Chelsea Sabo; Esin Yavuz; Kevin N. Gurney; James A. R. Marshall; Thomas Nowotny; Eleni Vasilaki
international symposium on neural networks | 2017
Chelsea Sabo; Esin Yavuz; Alex Cope; Kevin Gurney; Eleni Vasilaki; Thomas Nowotny; James A. R. Marshall
Archive | 2015
Thomas Nowotny; Alan Diamond; James Turner; Alex Cope; Esin Yavuz; Michael Schmuker