Jörg Lücke
University of Oldenburg
Publications
Featured research published by Jörg Lücke.
Neural Computation | 2004
Jörg Lücke; Christoph von der Malsburg
We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated by neuroanatomical and neurophysiological findings, the utilized dynamics is based on a simple model of a spiking neuron with refractory period, fixed random excitatory interconnection within minicolumns, and instantaneous inhibition within one macrocolumn. A stability analysis of the system's dynamical equations shows that minicolumns can act as monolithic functional units for purposes of critical, fast decisions and learning. Oscillating inhibition (in the gamma frequency range) leads to a phase-coupled population rate code and high sensitivity to small imbalances in minicolumn inputs. Minicolumns are shown to be able to organize their collective inputs without supervision by Hebbian plasticity into selective receptive field shapes, thereby becoming classifiers for input patterns. Using the bars test, we critically compare our system's performance with that of other systems and demonstrate its ability for distributed neural coding.
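The bars test mentioned above is a standard benchmark for component extraction: stimuli are superpositions of horizontal and vertical bars, and a system passes if it learns the individual bars as components. A minimal sketch of the stimulus generator (parameter names are mine, not taken from the paper):

```python
import numpy as np

def bars_patch(size=8, p_bar=0.125, rng=None):
    """One bars-test stimulus: each of the `size` horizontal and `size`
    vertical bars appears independently with probability `p_bar`;
    overlapping bars combine by pixel-wise OR, not by summation."""
    rng = np.random.default_rng() if rng is None else rng
    patch = np.zeros((size, size))
    for r in range(size):
        if rng.random() < p_bar:
            patch[r, :] = 1.0   # horizontal bar
    for c in range(size):
        if rng.random() < p_bar:
            patch[:, c] = 1.0   # vertical bar
    return patch
```

The non-linear (occluding) overlap is what makes the test hard for purely linear models.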
Neural Computation | 2009
Jörg Lücke
We study a dynamical model of processing and learning in the visual cortex, which reflects the anatomy of V1 cortical columns and properties of their neuronal receptive fields. Based on recent results on the fine-scale structure of columns in V1, we model the activity dynamics in subpopulations of excitatory neurons and their interaction with systems of inhibitory neurons. We find that a dynamical model based on these aspects of columnar anatomy can give rise to specific types of computations that result in self-organization of afferents to the column. For a given type of input, self-organization reliably extracts the basic input components represented by neuronal receptive fields. Self-organization is very noise tolerant and can robustly be applied to different types of input. To quantitatively analyze the system's component extraction capabilities, we use two standard benchmarks: the bars test and natural images. In the bars test, the system shows the highest noise robustness reported so far. If natural image patches are used as input, self-organization results in Gabor-like receptive fields. In quantitative comparison with in vivo measurements, we find that the obtained receptive fields capture statistical properties of V1 simple cells that algorithms such as independent component analysis or sparse coding do not reproduce.
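The combination of competition among units and Hebbian plasticity of afferents, as described above, can be illustrated by a toy winner-take-all learning loop (a deliberately simplified stand-in, not the paper's dynamics; the normalization step loosely plays the role of synaptic scaling):

```python
import numpy as np

def wta_hebbian(X, K, lr=0.05, n_epochs=10, seed=0):
    """Toy competitive Hebbian learning: the unit with the largest
    activation wins each stimulus and moves its afferent weights toward
    it; weight rows are kept normalized to unit sum."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W = rng.random((K, D))
    W /= W.sum(1, keepdims=True)
    for _ in range(n_epochs):
        for x in X[rng.permutation(N)]:
            k = np.argmax(W @ x)    # competition: winner-take-all
            W[k] += lr * x          # Hebbian update toward the input
            W[k] /= W[k].sum()      # normalization ("scaling")
    return W
```

On clustered input the weight rows specialize into prototypes of the clusters, the simplest form of the receptive-field self-organization discussed in the abstract.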
PLOS Computational Biology | 2012
Christian Keck; Cristina Savin; Jörg Lücke
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
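The maximum-likelihood objective that the learning rules above are shown to approximate can be illustrated with a minimal EM loop for a mixture of Poisson observation models (a generic sketch of such a mixture, not the paper's normalized formulation; all names are mine):

```python
import numpy as np

def poisson_mixture_em(X, K, n_iter=50, seed=0):
    """Minimal EM for a K-component mixture of Poisson observation
    models. X: (N, D) non-negative count data. Returns mixing
    proportions pi (K,) and component rates lam (K, D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    pi = np.full(K, 1.0 / K)
    lam = X.mean(0) * (0.5 + rng.random((K, D)))  # init around data mean
    for _ in range(n_iter):
        # E-step: posterior responsibilities, computed in the log domain
        log_r = np.log(pi) + X @ np.log(lam).T - lam.sum(1)
        log_r -= log_r.max(1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(1, keepdims=True)
        # M-step: closed-form Poisson maximum-likelihood updates
        Nk = r.sum(0)
        pi = Nk / N
        lam = (r.T @ X) / Nk[:, None] + 1e-9
    return pi, lam
```

The E-step responsibilities correspond to the competition introduced by lateral inhibition, and the M-step rate update is the quantity that Hebbian plasticity with scaling approximates online.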
Neural Computation | 2008
Jörg Lücke; Christian Keck; Christoph von der Malsburg
We describe a neural network able to rapidly establish correspondence between neural feature layers. Each of the network's two layers consists of interconnected cortical columns, and each column consists of inhibitorily coupled subpopulations of excitatory neurons. The dynamics of the system builds on a dynamic model of a single column, which is consistent with recent experimental findings. The network realizes dynamic links between its layers with the help of specialized columns that evaluate similarities between the activity distributions of local feature cell populations, are subject to a topology constraint, and can gate the transfer of feature information between the neural layers. The system can robustly be applied to natural images, and correspondences are found in time intervals estimated to be smaller than 100 ms in physiological terms.
Neural Networks | 2004
Jörg Lücke
We study self-organization of receptive fields (RFs) of cortical minicolumns. Input driven self-organization is induced by Hebbian synaptic plasticity of afferent fibers to model minicolumns based on spiking neurons and background oscillations. If input in the form of spike patterns is presented during learning, the RFs of minicolumns hierarchically specialize to increasingly small groups of similar RFs in a series of nested group subdivisions. In a number of experiments we show that the system finds clusters of similar spike patterns, that it is capable of evenly covering the input space if the input is continuously distributed, and that it extracts basic features from input consisting of superpositions of spike patterns. With a continuous version of the bars test we, furthermore, demonstrate the system's ability to evenly cover the space of extracted basic input features. The hierarchical nature and its flexibility with respect to input distinguish the presented type of self-organization from others, including similar but non-hierarchical self-organization as discussed in [Lücke, J., & von der Malsburg, C. (2004). Rapid processing and unsupervised learning in a model of the cortical macrocolumn. Neural Computation, 16, 501-533]. The capabilities of the presented system match crucial properties of the plasticity of cortical RFs and we suggest it as a model for their hierarchical formation.
International Conference on Artificial Neural Networks | 2005
Jörg Lücke; Jan D. Bouecke
We present a system of differential equations which abstractly models neural dynamics and synaptic plasticity of a cortical macrocolumn. The equations assume inhibitory coupling between minicolumn activities and Hebbian type synaptic plasticity of afferents to the minicolumns. If input in the form of activity patterns is presented, self-organization of receptive fields (RFs) of the minicolumns is induced. Self-organization is shown to appropriately classify input patterns or to extract basic constituents from input patterns consisting of superpositions of subpatterns. The latter is demonstrated using the bars benchmark test. The dynamics was motivated by the more explicit model suggested in [1] but represents a much more compact, continuous, and easier-to-analyze dynamical description.
Cytometry Part A | 2014
Carl-Magnus Svensson; Solveigh Krusekopf; Jörg Lücke; Marc Thilo Figge
Personalized medicine is a modern healthcare approach where information on each person's unique clinical constitution is exploited to realize early disease intervention based on more informed medical decisions. The application of diagnostic tools in combination with measurement evaluation that can be performed in a reliable and automated fashion plays a key role in this context. As the progression of various cancer diseases and the effectiveness of their treatments are related to a varying number of tumor cells that circulate in blood, the determination of their extremely low numbers by liquid biopsy is a decisive prognostic marker. To detect and enumerate circulating tumor cells (CTCs) in a reliable and automated fashion, we apply methods from machine learning using a naive Bayesian classifier (NBC) based on a probabilistic generative mixture model. Cells are collected with a functionalized medical wire and are stained for fluorescence microscopy so that their color signature can be used for classification through the construction of Red-Green-Blue (RGB) color histograms. Exploiting the information on the fluorescence signature of CTCs by the NBC not only allows going beyond previous approaches but also provides a method of unsupervised learning that is required for unlabeled training data. A quantitative comparison with a state-of-the-art support vector machine, which requires labeled data, demonstrates the competitiveness of the NBC method.
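The histogram-plus-naive-Bayes pipeline described above can be sketched in a few lines. Note this toy version is supervised (multinomial NBC with Laplace smoothing) for brevity, whereas the paper's NBC is trained unsupervised via a generative mixture; all function names and parameters here are my own:

```python
import numpy as np

def rgb_histogram(pixels, bins=4):
    """Quantize (N, 3) RGB pixels with values in [0, 256) into a joint
    color histogram with `bins` levels per channel, flattened."""
    q = np.clip(pixels // (256 // bins), 0, bins - 1).astype(int)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    return np.bincount(idx, minlength=bins ** 3).astype(float)

def nbc_fit(hists, labels, alpha=1.0):
    """Multinomial naive Bayes over histogram bins, Laplace-smoothed."""
    classes = np.unique(labels)
    theta = np.array([hists[labels == c].sum(0) + alpha for c in classes])
    theta /= theta.sum(1, keepdims=True)
    prior = np.array([(labels == c).mean() for c in classes])
    return classes, np.log(prior), np.log(theta)

def nbc_predict(hists, classes, log_prior, log_theta):
    """Assign each histogram to the class with highest log-posterior."""
    return classes[np.argmax(log_prior + hists @ log_theta.T, axis=1)]
```

The "naive" assumption is that histogram bins are conditionally independent given the class, which makes both fitting and prediction closed-form.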
International Conference on Artificial Neural Networks | 2005
Jörg Lücke
Based on elementary assumptions on the interconnectivity within a cortical macrocolumn we derive a differential equation system which models the mean neural activities of its minicolumns. A stability analysis shows a rich diversity of stationary points and sensitive behavior with respect to a parameter of inhibition. If this parameter is continuously changed, the system shows the same types of bifurcations as the macrocolumn model presented in [1] which is based on explicitly defined interconnectivity and spiking neurons. Due to this behavior the macrocolumn is able to make very sensitive decisions with respect to external input. The decision making process can be used to induce self-organization of receptive fields as is shown in [2].
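Mean-field competition of this kind can be illustrated with a toy rate dynamics under global inhibition (a hypothetical replicator-style form chosen for simplicity, not the paper's equation system): global inhibition implements a soft winner-take-all in which the minicolumn with the largest input suppresses the others, while the inhibition strength sets the surviving activity level.

```python
import numpy as np

def minicolumn_dynamics(I, nu, dt=0.01, T=2000):
    """Euler-integrate toy minicolumn activities p_i under global
    inhibition of strength nu:
        dp_i/dt = p_i * (I_i - nu * sum_j p_j)
    The minicolumn with the largest input I_i wins; total activity
    settles near max(I) / nu."""
    p = np.full(len(I), 0.1)
    for _ in range(T):
        p += dt * p * (I - nu * p.sum())
        p = np.clip(p, 1e-6, None)   # keep activities non-negative
    return p
```

Because d/dt log(p_i/p_j) = I_i - I_j in this toy system, even tiny input imbalances are eventually amplified into a clear decision, which is the sensitivity property the abstract refers to.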
PLOS Computational Biology | 2013
Jörg Bornschein; Marc Henniges; Jörg Lücke
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
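The "component superposition assumption" on which the two models differ can be made concrete. Linear models assume components add; occlusive models assume a nearer component overwrites a farther one where they overlap. A minimal illustrative contrast (one simple painting-order variant, not the paper's actual generative model):

```python
import numpy as np

def combine_linear(components, coeffs):
    """Standard sparse-coding / ICA assumption: weighted pixel-wise sum."""
    return (coeffs[:, None, None] * components).sum(0)

def combine_occlusive(components, coeffs, order):
    """Occlusion-style assumption: components painted in depth order
    (far to near), nearer ones overwriting where they are present."""
    img = np.zeros_like(components[0])
    for k in order:
        mask = components[k] > 0
        img[mask] = coeffs[k] * components[k][mask]
    return img
```

At an overlap pixel the linear model produces the sum of the component values, while the occlusive model keeps only the nearest component's value; this difference in the generative assumption is what drives the different predicted receptive fields.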
International Conference on Artificial Neural Networks | 2007
Jörg Lücke
We present a dynamical model of processing and learning in the visual cortex, which reflects the anatomy of V1 cortical columns and properties of their neuronal receptive fields (RFs). The model is described by a set of coupled differential equations and learns by self-organizing the RFs of its computational units - sub-populations of excitatory neurons. If natural image patches are presented as input, self-organization results in Gabor-like RFs. In quantitative comparison with in vivo measurements, we find that these RFs capture statistical properties of V1 simple cells that learning algorithms such as ICA and sparse coding fail to reproduce.