Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jörg Bornschein is active.

Publication


Featured research published by Jörg Bornschein.


PLOS Computational Biology | 2013

Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

Jörg Bornschein; Marc Henniges; Jörg Lücke

Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions affect the predicted shapes of simple cell receptive fields. We take a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the two models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and a high percentage of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed ever since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained with specific choices of sparsity and overcompleteness in linear sparse coding, the vast majority of studies on linear models (including all ICA models) report no or only low proportions. Likewise, for the linear model investigated here at optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and matches the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
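
The decisive difference between the two models is the superposition rule alone. A minimal numpy sketch of that contrast (illustrative dimensions and dictionary, not the study's code): the linear model adds the active components pixel-wise, while the occlusive model lets the strongest active component determine each pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 64, 8                     # pixels per patch, number of components (made up)
W = rng.normal(size=(H, D))      # hypothetical dictionary of components
s = rng.random(H) < 0.2          # sparse binary activations

# Linear superposition: active components add up pixel-wise.
linear_patch = s.astype(float) @ W

# Occlusive superposition: at each pixel the strongest active
# component 'wins', modelled here as a pointwise max
# (initial=0.0 covers the case of no active component).
occlusive_patch = np.max(W[s], axis=0, initial=0.0)
```

In the occlusive case a component can completely hide another wherever it is stronger, which is what drives the predicted encodings and receptive fields apart.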


PLOS ONE | 2015

Nonlinear spike-and-slab sparse coding for interpretable image encoding.

Jacquelyn A. Shelton; Abdul-Saboor Sheikh; Jörg Bornschein; Philip Sterne; Jörg Lücke

Sparse coding is a popular approach to modelling natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements; in the probabilistic view of this problem, the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, as well as a distribution over non-zero pixel intensities. The nonlinearity (a nonlinear max combination rule) targets occlusions: dictionary elements correspond to image components that can occlude each other. The model assumptions made by the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, so as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively approximate and characterize the underlying generation process well.
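
The two ingredients can be made concrete in a few lines. This is a hedged sketch of the generative view described above (assumed parameter values and dimensions, not the authors' implementation): sample a spike-and-slab latent per component, then combine the active components with the pointwise max rule.

```python
import numpy as np

rng = np.random.default_rng(1)

D, H = 64, 8                           # pixels per patch, dictionary size (made up)
W = rng.normal(size=(H, D))            # dictionary of image components
pi, mu, sigma = 0.2, 1.0, 0.5          # spike probability and slab parameters (assumed)

spike = rng.random(H) < pi             # 'spike': is the component present at all?
slab = rng.normal(mu, sigma, size=H)   # 'slab': its intensity when present
z = spike * slab                       # spike-and-slab latent: exact zeros come for free

# Nonlinear max combination: the strongest weighted component
# determines each pixel, mimicking occlusion.
patch = np.max(z[:, None] * W, axis=0)
x = patch + rng.normal(0.0, 0.1, size=D)   # noisy observation (assumed noise scale)
```

The exact zeros in z are what make the spike-and-slab prior attractive here: absence of a component is a distinct event, not just a small coefficient.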


International Conference on Latent Variable Analysis and Signal Separation | 2010

Binary Sparse Coding

Marc Henniges; Gervasio Puertas; Jörg Bornschein; Julian Eggert; Jörg Lücke

We study a sparse coding learning algorithm that allows for simultaneous learning of the data sparseness and the basis functions. The algorithm is derived from a generative model with binary latent variables instead of the continuous-valued latents used in classical sparse coding. We apply a novel approach to maximum likelihood parameter estimation that allows for efficient estimation of all model parameters. The approach is a new form of variational EM that uses truncated sums instead of factored approximations to the intractable posterior distributions. In contrast to almost all previous versions of sparse coding, the resulting learning algorithm allows for an estimation of the optimal degree of sparseness along with an estimation of the optimal basis functions. We can thus monitor the time course of the data sparseness during the learning of basis functions. In numerical experiments on artificial data we show that the algorithm reliably extracts the true underlying basis functions along with the noise level and data sparseness. In applications to natural images we obtain Gabor-like basis functions along with a sparseness estimate. If large numbers of latent variables are used, the obtained basis functions take on properties of simple cell receptive fields that classical sparse coding or ICA approaches do not reproduce.
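
As a rough illustration of both the generative model and the truncated variational idea (made-up dimensions and parameter values, not the published code): binary latents are drawn from a Bernoulli prior, combined linearly, and observed under Gaussian noise; the E-step then evaluates the posterior only over a small preselected set of latent states rather than all 2^H of them.

```python
import numpy as np

rng = np.random.default_rng(2)

D, H = 64, 8                                 # pixels, number of basis functions (made up)
W = rng.normal(size=(H, D))                  # basis functions
pi, sigma = 0.15, 0.1                        # sparseness and noise level (learned in the paper)

# Generative model: binary latents, linear combination, Gaussian noise.
s = (rng.random(H) < pi).astype(float)
x = s @ W + rng.normal(0.0, sigma, size=D)

# Truncated E-step idea: evaluate the (unnormalized) log posterior
# only on a preselected candidate set of latent states instead of
# all 2**H of them; here, the empty state and all singleton states.
candidates = [np.zeros(H)] + [np.eye(H)[h] for h in range(H)]
logp = [
    -0.5 * np.sum((x - c @ W) ** 2) / sigma**2          # Gaussian likelihood term
    + np.log(pi) * c.sum()                              # prior: active latents
    + np.log(1.0 - pi) * (H - c.sum())                  # prior: inactive latents
    for c in candidates
]
```

Normalizing over the candidate set rather than the full state space is what keeps the E-step tractable as H grows.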


International Conference on Learning Representations | 2015

Reweighted Wake-Sleep

Jörg Bornschein; Yoshua Bengio


arXiv: Learning | 2015

Towards Biologically Plausible Deep Learning

Yoshua Bengio; Dong-Hyun Lee; Jörg Bornschein; Zhouhan Lin


Archive | 2010

Approximate EM Learning on Large Computer Clusters

Jörg Bornschein; Zhenwen Dai


Neural Information Processing Systems | 2017

Variational Memory Addressing in Generative Models

Jörg Bornschein; Andriy Mnih; Daniel Zoran; Danilo Jimenez Rezende


Archive | 2015

Training opposing directed models using geometric mean matching.

Jörg Bornschein; Samira Shabanian; Asja Fischer; Yoshua Bengio


arXiv: Learning | 2015

Training Bidirectional Helmholtz Machines

Jörg Bornschein; Samira Shabanian; Asja Fischer; Yoshua Bengio

Collaboration


Dive into Jörg Bornschein's collaborations.

Top Co-Authors

Jörg Lücke
University of Oldenburg

Yoshua Bengio
Université de Montréal

Marc Henniges
Frankfurt Institute for Advanced Studies

Gervasio Puertas
Goethe University Frankfurt

Abdul-Saboor Sheikh
Frankfurt Institute for Advanced Studies / Technical University of Berlin

Philip Sterne
Frankfurt Institute for Advanced Studies