Publication


Featured research published by Hannes Nickisch.


Computer Vision and Pattern Recognition | 2009

Learning to detect unseen object classes by between-class attribute transfer

Christoph H. Lampert; Hannes Nickisch; Stefan Harmeling

We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.
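
To make the attribute-transfer idea concrete, here is a minimal sketch of direct attribute prediction in the spirit of the paper: per-attribute classifiers are pre-learned on seen classes, and an unseen class is scored by how well its human-given attribute signature explains the classifier outputs. The sizes, the scikit-learn classifier choice, and all names are illustrative assumptions, not the authors' code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    D, M = 32, 8                      # toy feature dim and attribute count (paper: 85 attributes)
    X_train = rng.normal(size=(200, D))                      # features of seen-class images
    A_train = (rng.random(size=(200, M)) > 0.5).astype(int)  # per-image attribute labels

    # 1) Pre-learn one binary classifier per semantic attribute.
    attr_clfs = [LogisticRegression(max_iter=1000).fit(X_train, A_train[:, m])
                 for m in range(M)]

    # 2) Detect unseen classes from their attribute signatures alone.
    def predict_unseen(x, class_attributes):
        p = np.array([clf.predict_proba(x[None])[0, 1] for clf in attr_clfs])
        # log of prod_m p_m^{a_m} (1 - p_m)^{1 - a_m}, assuming independent attributes
        log_scores = (class_attributes * np.log(p + 1e-12)
                      + (1 - class_attributes) * np.log(1 - p + 1e-12)).sum(axis=1)
        return int(np.argmax(log_scores))

    unseen_signatures = (rng.random(size=(5, M)) > 0.5).astype(int)  # 5 unseen classes
    print(predict_unseen(rng.normal(size=D), unseen_signatures))

No image of the five hypothetical unseen classes is ever used for training; only their 0/1 attribute rows enter the decision.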


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Attribute-Based Classification for Zero-Shot Visual Object Categorization

Christoph H. Lampert; Hannes Nickisch; Stefan Harmeling

We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
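
In equation form, the direct attribute prediction (DAP) decision rule described above assigns a test image x to the unseen class z whose attribute signature a^z = (a_1^z, ..., a_M^z) best matches the pre-learned attribute posteriors; up to notational details this is the standard way the rule is written:

    f(x) = \operatorname*{argmax}_{z} \prod_{m=1}^{M} \frac{p(a_m^z \mid x)}{p(a_m^z)}

Here p(a_m | x) is the output of the m-th pre-learned attribute classifier and p(a_m) an attribute prior that calibrates attributes of different frequency, so classes are scored without any training image of class z.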


Magnetic Resonance in Medicine | 2009

Optimization of k‐space trajectories for compressed sensing by Bayesian experimental design

Matthias W. Seeger; Hannes Nickisch; R. Pohmann; Bernhard Schölkopf

The optimization of k-space sampling for nonlinear sparse MRI reconstruction is phrased as a Bayesian experimental design problem. Bayesian inference is approximated by a novel relaxation to standard signal processing primitives, resulting in an efficient optimization algorithm for Cartesian and spiral trajectories. On clinical resolution brain image data from a Siemens 3T scanner, automatically optimized trajectories lead to significantly improved images, compared to standard low-pass, equispaced, or variable density randomized designs. Insights into the nonlinear design optimization problem for MRI are given.
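
A toy sketch of the design loop the abstract summarizes, shrunk to a Gaussian linear model where the posterior is closed-form: each candidate "trajectory" is a single measurement vector, the score is the information gain (posterior log-determinant reduction), and the posterior is updated by a rank-one formula. In the paper the prior is sparse and non-Gaussian, so this Gaussian step is what their relaxation approximates; all sizes and names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_cand = 16, 50
    candidates = rng.normal(size=(n_cand, n))   # candidate measurement vectors
    Sigma = np.eye(n)                           # current Gaussian posterior covariance
    noise_var = 0.1

    def information_gain(phi, Sigma, noise_var):
        """Log-determinant reduction from adding the measurement y = phi @ u + noise."""
        s = phi @ Sigma @ phi
        return 0.5 * np.log1p(s / noise_var)

    chosen = []
    for _ in range(5):                          # pick 5 measurements greedily
        gains = [information_gain(c, Sigma, noise_var) for c in candidates]
        k = int(np.argmax(gains))
        chosen.append(k)
        phi = candidates[k]
        # Rank-one posterior update (Sherman-Morrison on the precision matrix).
        Sv = Sigma @ phi
        Sigma = Sigma - np.outer(Sv, Sv) / (noise_var + phi @ Sv)
    print(chosen)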


SIAM Journal on Imaging Sciences | 2011

Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models

Matthias W. Seeger; Hannes Nickisch

Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at conferences [M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf, in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, eds., Curran Associates, Red Hook, NY, 2009, pp. 1441-1448; H. Nickisch and M. Seeger, in Proceedings of the 26th International Conference on Machine Learning, L. Bottou and M. Littman, eds., Omni Press, Madison, WI, 2009, pp. 761-768].
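
Querying the posterior covariance at scale means estimating marginal variances without ever forming a matrix inverse. One standard primitive for that, assuming a symmetric positive definite posterior precision A and access only to linear-system solves, is Hutchinson-style probing, sketched below on a toy matrix; the paper's Lanczos-based machinery is more refined, so treat this as a stand-in illustration.

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(2)
    n = 200
    B = rng.normal(size=(n, n)) / np.sqrt(n)
    A = np.eye(n) + B @ B.T                      # toy posterior precision (SPD)
    Aop = LinearOperator((n, n), matvec=lambda v: A @ v)

    def marginal_variance_estimate(n_probe=30):
        """Monte Carlo estimate of diag(A^{-1}) using CG solves only."""
        acc = np.zeros(n)
        for _ in range(n_probe):
            z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
            x, info = cg(Aop, z)                 # solve A x = z iteratively
            acc += z * x                         # E[z_i (A^{-1} z)_i] = (A^{-1})_{ii}
        return acc / n_probe

    print(marginal_variance_estimate()[:5])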


Magnetic Resonance in Medicine | 2013

Blind retrospective motion correction of MR images.

Alexander Loktyushin; Hannes Nickisch; R. Pohmann; Bernhard Schölkopf

Subject motion can severely degrade MR images. We propose a retrospective motion correction algorithm, gradient-based motion correction, that significantly reduces ghosting and blurring artifacts caused by subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed.
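
A one-dimensional toy version of the retrospective idea, under strong simplifying assumptions (translation only, two k-space segments, grid search instead of the paper's gradient-based optimization): motion between segments multiplies part of k-space by a phase ramp, and the shift is recovered by minimizing an image-sharpness (entropy) metric on the reconstruction.

    import numpy as np

    n = 128
    img = np.zeros(n); img[40:60] = 1.0             # simple 1D object
    k = np.fft.fftfreq(n)
    kspace = np.fft.fft(img)
    moved = kspace * np.exp(-2j * np.pi * k * 4.0)  # object shifted by 4 pixels
    seg2 = np.abs(k) >= 0.25                        # "later" high-frequency samples
    corrupted = np.where(seg2, moved, kspace)       # inconsistent k-space -> ghosting

    def entropy(x):
        p = np.abs(x); p = p / p.sum()
        return -(p * np.log(p + 1e-12)).sum()

    # Search for the per-segment shift whose removal gives the sharpest image.
    shifts = np.linspace(-8, 8, 321)
    scores = [entropy(np.fft.ifft(np.where(seg2,
                  corrupted * np.exp(2j * np.pi * k * s), corrupted)))
              for s in shifts]
    print("estimated segment shift:", shifts[int(np.argmin(scores))])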


International Conference on Machine Learning | 2009

Convex variational Bayesian inference for large scale generalized linear models

Hannes Nickisch; Matthias W. Seeger

We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.
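
A compact sketch of the double-loop structure for the special case of a Gaussian likelihood with Laplace (super-Gaussian) potentials directly on the weights, using exact small-matrix inverses where the paper uses large-scale solvers. The outer loop computes marginal variances z; the inner loop runs IRLS on the decoupled criterion; all constants and sizes are illustrative, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(4)
    n, d, sigma2, tau = 40, 20, 0.1, 2.0
    u_true = np.zeros(d); u_true[[3, 11]] = [1.5, -2.0]  # sparse ground truth
    X = rng.normal(size=(n, d))
    y = X @ u_true + np.sqrt(sigma2) * rng.normal(size=n)

    u, gamma = np.zeros(d), np.ones(d)
    for outer in range(10):
        A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
        z = np.diag(np.linalg.inv(A))            # outer step: marginal variances
        for inner in range(20):                  # inner loop: IRLS on the decoupled
            w = tau / np.sqrt(z + u**2)          # criterion  tau * sum_i sqrt(z_i + u_i^2)
            u = np.linalg.solve(X.T @ X / sigma2 + np.diag(w), X.T @ y / sigma2)
        gamma = np.sqrt(z + u**2) / tau          # closed-form variational update
    print(np.round(u, 2))

Most of the work is the linear solve inside the inner loop, which is what makes the approach scale when that solve is performed iteratively rather than exactly.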


International Journal of Computer Vision | 2012

User-Centric Learning and Evaluation of Interactive Segmentation Systems

Pushmeet Kohli; Hannes Nickisch; Carsten Rother; Christoph Rhemann

Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. In this paper, we study the problem of evaluating and learning interactive segmentation systems which are extensively used in the real world. The key questions in this context are how to measure (1) the effort associated with a user interaction, and (2) the quality of the segmentation result as perceived by the user. We conduct a user study to analyze user behavior and answer these questions. Using the insights obtained from these experiments, we propose a framework to evaluate and learn interactive segmentation systems which brings the user in the loop. The framework is based on the use of an active robot user—a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.
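
A minimal sketch of the active-robot-user protocol on a toy image: the simulated user clicks a wrongly labelled pixel, the system re-segments, and we record clicks until a target accuracy is reached. The box-growing `segment` function and the first-error click policy are hypothetical placeholders; the paper's robot user targets the largest error region and drives real segmentation systems.

    import numpy as np

    gt = np.zeros((32, 32), dtype=int); gt[8:24, 8:24] = 1   # toy ground truth

    def segment(seeds_fg, seeds_bg):
        """Hypothetical system: grows boxes around seeds (placeholder only)."""
        pred = np.zeros_like(gt)
        for (r, c) in seeds_fg:
            pred[max(r - 6, 0):r + 6, max(c - 6, 0):c + 6] = 1
        for (r, c) in seeds_bg:
            pred[max(r - 6, 0):r + 6, max(c - 6, 0):c + 6] = 0
        return pred

    def robot_user_click(pred, gt):
        errors = pred != gt
        if not errors.any():
            return None
        r, c = np.argwhere(errors)[0]            # crude stand-in for "largest error region"
        return (r, c), gt[r, c]

    seeds_fg, seeds_bg, pred = [], [], np.zeros_like(gt)
    for n_clicks in range(1, 21):                # effort budget: at most 20 interactions
        click = robot_user_click(pred, gt)
        if click is None:
            break
        (r, c), label = click
        (seeds_fg if label == 1 else seeds_bg).append((r, c))
        pred = segment(seeds_fg, seeds_bg)
        acc = (pred == gt).mean()
        if acc >= 0.95:                          # target segmentation quality
            break
    print("clicks used:", n_clicks, "accuracy:", round(float(acc), 3))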


Indian Conference on Computer Vision, Graphics and Image Processing | 2010

Learning an interactive segmentation system

Hannes Nickisch; Carsten Rother; Pushmeet Kohli; Christoph Rhemann

Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user -- a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.
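
On top of such an evaluation loop, the simplest user-in-the-loop learning is model selection: pick the system parameter that minimizes simulated user effort. The paper goes further and folds the user model into max-margin learning; this sketch, with a mock effort curve, only shows the outer selection step, and `clicks_to_target` is a hypothetical stand-in for a full robot-user run.

    # `clicks_to_target(lam)` stands for a full robot-user evaluation: the mean
    # number of clicks needed to reach a target accuracy with parameter lam.
    def clicks_to_target(lam):
        return 5 + 3 * abs(lam - 1.0)            # mock effort curve, minimum at 1.0

    candidate_lams = [0.1, 0.3, 1.0, 3.0, 10.0]
    best_lam = min(candidate_lams, key=clicks_to_target)
    print("selected parameter:", best_lam)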


Machine Learning | 2012

Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis

Wendelin Böhmer; Steffen Grünewälder; Hannes Nickisch; Klaus Obermayer

Without non-linear basis functions, many problems cannot be solved by linear algorithms. This article proposes a method to automatically construct such basis functions with slow feature analysis (SFA). Non-linear optimization of this unsupervised learning method generates an orthogonal basis on the unknown latent space for a given time series. In contrast to methods like PCA, SFA is thus well suited for techniques that make direct use of the latent space. Real-world time series can be complex, and current SFA algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to develop a kernelized SFA algorithm which provides a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization to the SFA objective. We hypothesize that our algorithm generates a feature space that resembles a Fourier basis in the unknown space of latent variables underlying a given real-world time series. We evaluate this hypothesis on a vowel classification task in comparison to sparse kernel PCA. Our results show excellent classification accuracy and demonstrate the superiority of kernel SFA over kernel PCA in encoding latent variables.
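
For orientation, here is plain linear SFA on a toy two-channel series, the algorithm the paper kernelizes and regularizes: it reduces to a generalized eigenproblem between the derivative covariance and the data covariance, and the slowest unit-variance feature is the smallest generalized eigenvector. The toy signal and sizes are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(6)
    T = 2000
    slow = np.sin(2 * np.pi * np.arange(T) / 500)    # slow latent variable
    fast = rng.normal(size=T)                        # fast nuisance signal
    X = np.stack([slow + 0.1 * fast, fast], axis=1)  # observed 2D time series

    X = X - X.mean(axis=0)
    C = X.T @ X / T                                  # data covariance
    dX = np.diff(X, axis=0)
    Cdot = dX.T @ dX / (T - 1)                       # derivative covariance

    # Solve Cdot w = lambda C w; the smallest eigenvalue gives the slowest,
    # unit-variance projection y = X w.
    evals, evecs = eigh(Cdot, C)
    y = X @ evecs[:, 0]
    print("slowness of first feature:", round(float(evals[0]), 4))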


European Conference on Machine Learning | 2011

Regularized sparse kernel slow feature analysis

Wendelin Böhmer; Steffen Grünewälder; Hannes Nickisch; Klaus Obermayer

This paper develops a kernelized slow feature analysis (SFA) algorithm. SFA is an unsupervised learning method to extract features which encode latent variables from time series. Generative relationships are usually complex, and current algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to provide a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small but complex data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization to the SFA objective. Versatility and performance of our method are demonstrated on audio and video data sets.
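
The sparsification step can be pictured as a residual-driven greedy selection over kernel expansion points. The sketch below uses pivoted incomplete Cholesky, which at each step picks the point worst represented by the current support set; it shares the matching-pursuit spirit the abstract mentions but is not the paper's exact criterion, and the RBF kernel and sizes are assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    Xd = rng.normal(size=(300, 5))
    K = np.exp(-0.5 * ((Xd[:, None] - Xd[None]) ** 2).sum(-1))  # RBF kernel matrix

    def greedy_support(K, m):
        n = K.shape[0]
        d = K.diagonal().copy()        # squared residual norms in feature space
        L = np.zeros((n, m))
        picks = []
        for j in range(m):
            i = int(np.argmax(d))      # point with the largest residual
            picks.append(i)
            L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(d[i])
            d = d - L[:, j] ** 2       # shrink residuals after adding point i
        return picks

    print(greedy_support(K, 10))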
