Thomas Nowotny
University of Sussex
Publication
Featured research published by Thomas Nowotny.
Science | 2011
Maria Papadopoulou; Stijn Cassenaer; Thomas Nowotny; Gilles Laurent
A single neuron is responsible for adaptive normalization in an olfactory circuit generating sparse odor representations. Sparse coding presents practical advantages for sensory representations and memory storage. In the insect olfactory system, the representation of general odors is dense in the antennal lobes but sparse in the mushroom bodies, only one synapse downstream. In locusts, this transformation relies on the oscillatory structure of antennal lobe output, feed-forward inhibitory circuits, intrinsic properties of mushroom body neurons, and connectivity between antennal lobe and mushroom bodies. Here we show the existence of a normalizing negative-feedback loop within the mushroom body to maintain sparse output over a wide range of input conditions. This loop consists of an identifiable “giant” nonspiking inhibitory interneuron with ubiquitous connectivity and graded release properties.
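A minimal rate-based sketch of the normalization principle described above, assuming an abstract population of Kenyon-cell-like units and a single graded feedback unit that pools their output; the function name kc_rates and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kc_rates(inputs, w_fb=0.05, theta=1.0, dt=0.1, n_iter=200):
    """Rate-based sketch: each Kenyon-cell-like unit receives its own input
    plus graded inhibition from a single feedback unit that pools all output."""
    rates = np.zeros_like(inputs)
    for _ in range(n_iter):                         # relax toward the fixed point
        inhibition = w_fb * rates.sum()             # 'giant interneuron' pools everything
        target = np.maximum(inputs - inhibition - theta, 0.0)
        rates += dt * (target - rates)              # leaky relaxation avoids oscillation
    return rates

weak = kc_rates(rng.uniform(0, 2, 1000))
strong = kc_rates(rng.uniform(0, 20, 1000))
# fraction of active units changes only modestly despite a tenfold input increase
print((weak > 0).mean(), (strong > 0).mean())
# without the feedback loop, almost every unit becomes active for strong input
print((kc_rates(rng.uniform(0, 20, 1000), w_fb=0.0) > 0).mean())
```

In this toy version the active fraction stays low when the input is scaled tenfold, whereas setting w_fb to zero lets nearly the whole population respond.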
Neural Computation | 2004
Ramón Huerta; Thomas Nowotny; Marta García-Sánchez; Henry D. I. Abarbanel; Mikhail I. Rabinovich
We propose a theoretical framework for odor classification in the olfactory system of insects. The classification task is accomplished in two steps. The first is a transformation from the antennal lobe to the intrinsic Kenyon cells in the mushroom body. This transformation into a higher-dimensional space is an injective function and can be implemented without any type of learning at the synaptic connections. In the second step, the encoded odors in the intrinsic Kenyon cells are linearly classified in the mushroom body lobes. The neurons that perform this linear classification are equivalent to hyperplanes whose connections are tuned by local Hebbian learning and by competition due to mutual inhibition. We calculate the range of values of activity and size of the network required to achieve efficient classification within this scheme in insect olfaction. We are able to demonstrate that biologically plausible control mechanisms can accomplish efficient classification of odors.
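The two-step scheme in this abstract maps onto a short numerical sketch: a fixed random expansion with a sparsifying threshold (step 1), followed by a linear readout tuned by a local Hebbian rule in which winner-take-all competition stands in for mutual inhibition (step 2). Everything below, including the toy 'odor' data, is illustrative rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_AL, N_KC, N_CLASSES = 50, 2000, 3

# Step 1: fixed, non-learned fan-out from the antennal lobe to Kenyon cells,
# followed by a threshold that keeps only a small fraction of KCs active.
W_expand = (rng.random((N_KC, N_AL)) < 0.1).astype(float)

def kc_code(x, activity=0.05):
    drive = W_expand @ x
    return (drive >= np.quantile(drive, 1.0 - activity)).astype(float)

# Step 2: linear readout ("mushroom body lobe" units) tuned by a local
# Hebbian rule; winner-take-all competition stands in for mutual inhibition.
W_out = np.zeros((N_CLASSES, N_KC))

def train_step(x, label, lr=0.1):
    kc = kc_code(x)
    winner = int(np.argmax(W_out @ kc))
    W_out[label] += lr * kc                 # Hebbian: co-active KC->output synapses grow
    if winner != label:
        W_out[winner] -= lr * kc            # competition depresses the wrongly winning unit

def predict(x):
    return int(np.argmax(W_out @ kc_code(x)))

# Toy usage: three noisy 'odor' classes
prototypes = rng.random((N_CLASSES, N_AL))
for _ in range(300):
    c = int(rng.integers(N_CLASSES))
    train_step(prototypes[c] + 0.1 * rng.standard_normal(N_AL), c)
test = [(prototypes[c] + 0.1 * rng.standard_normal(N_AL), c) for c in range(N_CLASSES)]
print(np.mean([predict(x) == c for x, c in test]))
```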
Biological Cybernetics | 2005
Thomas Nowotny; Ramón Huerta; Henry D. I. Abarbanel; Mikhail I. Rabinovich
We show in a model of spiking neurons that synaptic plasticity in the mushroom bodies in combination with the general fan-in, fan-out properties of the early processing layers of the olfactory system might be sufficient to account for its efficient recognition of odors. For a large variety of initial conditions the model system consistently finds a working solution without any fine-tuning, and is, therefore, inherently robust. We demonstrate that gain control through the known feedforward inhibition of lateral horn interneurons increases the capacity of the system but is not essential for its general function. We also predict an upper limit for the number of odor classes Drosophila can discriminate based on the number and connectivity of its olfactory neurons.
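To illustrate the gain-control claim, the sketch below compares the number of active Kenyon-cell-like units with and without feedforward inhibition that scales with the total antennal-lobe input. It is a toy rate model with made-up numbers, not the paper's spiking implementation; the inhibition keeps the representation far sparser as concentration grows, though it does not make it fully concentration-invariant here.

```python
import numpy as np

rng = np.random.default_rng(2)
N_AL, N_KC = 50, 1000
W = (rng.random((N_KC, N_AL)) < 0.15).astype(float)   # random fan-in / fan-out

def n_active_kcs(x, w_ffi=0.15, theta=3.0):
    """Number of KC-like units whose drive exceeds threshold.
    Feedforward inhibition subtracts a signal proportional to total input."""
    drive = W @ x - w_ffi * x.sum()
    return int((drive > theta).sum())

odor = rng.random(N_AL)
for c in (0.5, 1.0, 2.0, 4.0):                         # odor 'concentrations'
    print(c, n_active_kcs(c * odor), n_active_kcs(c * odor, w_ffi=0.0))
```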
Neural Computation | 2009
Ramón Huerta; Thomas Nowotny
We propose a model for pattern recognition in the insect brain. Departing from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful to learn input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
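A compact sketch of the general recipe the abstract describes: a fixed sparse expansion to a Kenyon-cell-like layer followed by output units whose plasticity is gated by an error signal. The specific update rule and dimensions below are illustrative, and no actual MNIST data is loaded.

```python
import numpy as np

rng = np.random.default_rng(3)
N_IN, N_KC, N_CLASSES = 784, 5000, 10      # 28x28 inputs -> KC-like expansion -> 10 digits

W_expand = (rng.random((N_KC, N_IN)) < 0.02).astype(float)   # fixed sparse fan-in
W_out = np.zeros((N_CLASSES, N_KC))

def kc_code(x, activity=0.05):
    """Sparse code: keep only the most strongly driven ~5% of KC-like units."""
    drive = W_expand @ x
    return (drive >= np.quantile(drive, 1.0 - activity)).astype(float)

def update(x, label, lr=0.05):
    """Plasticity gated by an error signal: weights change only when the
    current prediction is wrong (a simplified stand-in for the paper's rule)."""
    kc = kc_code(x)
    pred = int(np.argmax(W_out @ kc))
    if pred != label:                       # error signal present
        W_out[label] += lr * kc             # potentiate the correct output unit
        W_out[pred] -= lr * kc              # depress the unit that won incorrectly

def predict(x):
    return int(np.argmax(W_out @ kc_code(x)))
```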
PLOS Computational Biology | 2014
David Samu; Anil K. Seth; Thomas Nowotny
In the past two decades, some fundamental properties of cortical connectivity have been discovered: small-world structure, pronounced hierarchical and modular organisation, and strong core and rich-club structures. A common assumption when interpreting results of this kind is that the observed structural properties are present to enable the brain's function. However, the brain is also embedded into the limited space of the skull and its wiring has associated developmental and metabolic costs. These basic physical and economic aspects place separate, often conflicting, constraints on the brain's connectivity, which must be characterized in order to understand the true relationship between brain structure and function. To address this challenge, here we ask which, and to what extent, aspects of the structural organisation of the brain are conserved if we preserve specific spatial and topological properties of the brain but otherwise randomise its connectivity. We perform a comparative analysis of a connectivity map of the cortical connectome at both high and low resolution, utilising three different types of surrogate networks: spatially unconstrained (‘random’), connection length preserving (‘spatial’), and connection length optimised (‘reduced’) surrogates. We find that unconstrained randomisation markedly diminishes all investigated architectural properties of cortical connectivity. By contrast, spatial and reduced surrogates largely preserve most properties and, interestingly, often more so in the reduced surrogates. Specifically, our results suggest that the cortical network is less tightly integrated than its spatial constraints would allow, but more strongly segregated than its spatial constraints would necessitate. We additionally find that hierarchical organisation and rich-club structure of the cortical connectivity are largely preserved in spatial and reduced surrogates and hence may be partially attributable to cortical wiring constraints. In contrast, the high modularity and strong s-core of the high-resolution cortical network are significantly stronger than in the surrogates, underlining their potential functional relevance in the brain.
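The surrogate logic can be illustrated on a toy spatially embedded network: build an unconstrained ('random') surrogate and a connection-length-preserving ('spatial') surrogate, then compare an architectural statistic. The toy network, the distance binning and the choice of clustering coefficient are all illustrative; the paper analyses an empirical cortical connectome with a much richer set of measures, including the length-optimised ('reduced') surrogates not sketched here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy spatially embedded network: nodes on a plane, connection probability
# decaying with distance (a stand-in for a cortical connectivity map).
N = 200
pos = rng.random((N, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
A = (rng.random((N, N)) < np.exp(-dist / 0.1)).astype(int)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)                                  # undirected

def random_surrogate(A):
    """Spatially unconstrained surrogate: same number of edges, random placement."""
    iu = np.triu_indices(len(A), 1)
    links = A[iu].copy()
    rng.shuffle(links)
    S = np.zeros_like(A)
    S[iu] = links
    return S + S.T

def spatial_surrogate(A, dist, n_bins=20):
    """Length-preserving surrogate: shuffle connections only among node pairs
    whose distances fall into the same distance bin."""
    iu = np.triu_indices(len(A), 1)
    bin_edges = np.quantile(dist[iu], np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(dist[iu], bin_edges)
    links = A[iu].copy()
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        links[idx] = rng.permutation(links[idx])
    S = np.zeros_like(A)
    S[iu] = links
    return S + S.T

def clustering(A):
    """Mean clustering coefficient as one example architectural statistic."""
    deg = A.sum(1)
    tri = np.diag(A @ A @ A) / 2
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(deg > 1, 2 * tri / (deg * (deg - 1)), 0.0)
    return c.mean()

print("original:         ", clustering(A))
print("random surrogate: ", clustering(random_surrogate(A)))
print("spatial surrogate:", clustering(spatial_surrogate(A, dist)))
```

As in the paper's comparison, the unconstrained surrogate destroys the clustering of the original network, while the length-preserving surrogate retains most of it.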
Neural Computation | 2007
Thomas Nowotny; Attila Szücs; Rafael Levi; Allen I. Selverston
In a recent article, Prinz, Bucher, and Marder (2004) addressed the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary widely from one individual to another, using a database modeling approach. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word on this fundamental question has not yet been spoken.
Journal of Computational Neuroscience | 2003
Thomas Nowotny; Mikhail I. Rabinovich; Ramón Huerta; Henry D. I. Abarbanel
Sensory information is represented in a spatio-temporal code in the antennal lobe, the first processing stage of the olfactory system of insects. We propose a novel mechanism for decoding this information in the next processing stage, the mushroom body. The Kenyon cells in the mushroom body of insects exhibit lateral excitatory connections at their axons. We demonstrate that slow lateral excitation between Kenyon cells allows one to decode sequences of activity in the antennal lobe. We are thus able to clarify the role of the existing connections as well as to demonstrate a novel mechanism for decoding temporal information in neuronal systems. This mechanism complements the variety of existing temporal decoding schemes. It seems that neuronal systems not only have a rich variety of code types but also quite a diversity of algorithms for transforming different codes into each other.
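The decoding principle can be stated in a few lines: a read-out unit fires only when its direct input coincides with slow lateral excitation triggered by a different Kenyon-cell group one time step earlier, so it responds to the sequence A→B but not B→A. The discrete-time sketch below is a toy illustration; the group names, delay and threshold are made up.

```python
import numpy as np

# Two Kenyon-cell groups, A and B, plus a read-out cell that receives
# feedforward input from B and slow lateral excitation originating from A.
def sequence_detector(spikes_A, spikes_B, lateral_delay=1, theta=1.5):
    """Return the time steps at which the read-out crosses threshold.
    It does so only when B fires while the delayed lateral trace of A is active."""
    lateral = np.concatenate([np.zeros(lateral_delay), spikes_A])[:len(spikes_A)]
    drive = spikes_B + lateral            # coincidence of direct and delayed input
    return np.where(drive >= theta)[0]

A_then_B = (np.array([1, 0, 0, 0]), np.array([0, 1, 0, 0]))
B_then_A = (np.array([0, 1, 0, 0]), np.array([1, 0, 0, 0]))
print(sequence_detector(*A_then_B))       # fires at t=1: A preceded B
print(sequence_detector(*B_then_A))       # never crosses threshold
```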
Nature Protocols | 2011
Ildikó Kemenes; Vincenzo Marra; Michael Crossley; David Samu; Kevin Staras; György Kemenes; Thomas Nowotny
Dynamic clamp is a powerful method that allows the introduction of artificial electrical components into target cells to simulate ionic conductances and synaptic inputs. This method is based on a fast cycle of measuring the membrane potential of a cell, calculating the current of a desired simulated component using an appropriate model and injecting this current into the cell. Here we present a dynamic clamp protocol using free, fully integrated, open-source software (StdpC, for spike timing-dependent plasticity clamp). Use of this protocol does not require specialist hardware, costly commercial software, experience in real-time operating systems or a strong programming background. The software enables the configuration and operation of a wide range of complex and fully automated dynamic clamp experiments through an intuitive and powerful interface with a minimal initial lead time of a few hours. After initial configuration, experimental results can be generated within minutes of establishing cell recording.
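The core of any dynamic clamp is the measure → compute → inject cycle described above. The sketch below shows that loop for a single simulated ohmic conductance; read_membrane_potential and inject_current are hypothetical placeholders for the acquisition hardware, and StdpC itself is a stand-alone application supporting far richer models (synapses, Hodgkin–Huxley currents, STDP), not this script.

```python
import time

# Hypothetical hardware interface: in a real rig these calls would talk to the
# data acquisition board sitting between the amplifier and the computer.
def read_membrane_potential():          # volts; placeholder
    return -0.065

def inject_current(i_amps):             # placeholder
    pass

def dynamic_clamp_loop(g_sim=2e-9, e_rev=-0.080, rate_hz=20000, duration_s=1.0):
    """Measure V, compute the current of a simulated conductance, inject it.
    The 'model' here is a single ohmic conductance I = g*(E_rev - V)."""
    dt = 1.0 / rate_hz
    for _ in range(int(duration_s * rate_hz)):
        v = read_membrane_potential()       # 1) measure the membrane potential
        i = g_sim * (e_rev - v)             # 2) evaluate the model current
        inject_current(i)                   # 3) inject it into the cell
        time.sleep(dt)                      # pacing; real setups need tighter timing

dynamic_clamp_loop(duration_s=0.01)
```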
Biological Cybernetics | 2003
Thomas Nowotny; Ramón Huerta
In any scientific theory, the conceptual framework already determines the nature and possible scope of the results. Oversimplification prevents an adequate description of the system, whereas too detailed a description obscures the fundamental principles behind the observed phenomena in addition to misspending time and resources. In theoretical neuroscience, this is an important issue because the description level varies widely from detailed biophysical descriptions to abstract computational models. We discuss the question of the appropriate modeling level in the context of a recent report on synchrony in iteratively constructed feed-forward networks of rat cortex pyramidal neuron somata (Reyes, 2003).
Neural Computation | 2012
Ramón Huerta; Shankar Vembu; José M. Amigó; Thomas Nowotny; Charles Elkan
The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and train it in the large margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform better on larger areas of appropriate hyperparameters when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound to probabilistic exponential models and is Bayes consistent for 3-class problems. These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant advantage over other multiclass SVM approaches.
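One way to make unselective inhibition concrete is to depress each class score by the pooled positive activity of the competing outputs and train with a margin criterion on those inhibited scores. The form below is an illustrative stand-in, not the paper's formulation.

```python
import numpy as np

def inhibited_scores(W, x, lam=0.5):
    """Class scores with unselective inhibition: each output is depressed by
    the pooled positive activity of all competing outputs (illustrative form)."""
    raw = W @ x
    pooled = np.maximum(raw, 0.0).sum()
    return raw - lam * (pooled - np.maximum(raw, 0.0))

def hinge_step(W, x, y, lr=0.01, lam=0.5, margin=1.0):
    """One sub-gradient step of a large-margin objective on the inhibited
    scores (a simplified stand-in for the paper's training scheme)."""
    s = inhibited_scores(W, x, lam)
    rival = int(np.argmax(np.delete(s, y)))
    if rival >= y:
        rival += 1                      # map back to the original class index
    if s[y] - s[rival] < margin:        # margin violated -> update both classes
        W[y] += lr * x
        W[rival] -= lr * x
    return W
```

Setting lam to zero recovers an ordinary multiclass perceptron-style hinge update, so the effect of the inhibition term can be isolated by varying a single hyperparameter.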
Collaboration
Dive into Thomas Nowotny's collaborations.
Commonwealth Scientific and Industrial Research Organisation