Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Kiefel is active.

Publication


Featured research published by Martin Kiefel.


European Conference on Computer Vision | 2014

Human Pose Estimation with Fields of Parts

Martin Kiefel; Peter V. Gehler

This paper proposes a new formulation of the human pose estimation problem. We present the Fields of Parts model, a binary Conditional Random Field model designed to detect human body parts of articulated people in single images.
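
To make the formulation concrete, here is a minimal sketch of a binary CRF of the kind the abstract describes: unary scores stand in for part detectors and a pairwise term couples neighbouring locations. The grid size, the random scores, and the simple ICM inference are illustrative assumptions, not the Fields of Parts model itself.

```python
import numpy as np

# Toy binary CRF over a coarse grid of part-presence variables.
# Unary scores stand in for per-location part detector outputs;
# the pairwise term rewards neighbouring cells that agree.
rng = np.random.default_rng(0)
H, W = 8, 8
unary = rng.normal(size=(H, W))          # >0 favours "part present"
pairwise_weight = 0.5                    # strength of spatial coupling

def energy(labels):
    """Energy of a binary labeling: lower is better."""
    e = -np.sum(unary * labels)          # unary: reward presence where the score is high
    # pairwise: penalise disagreement between 4-connected neighbours
    e += pairwise_weight * np.sum(labels[1:, :] != labels[:-1, :])
    e += pairwise_weight * np.sum(labels[:, 1:] != labels[:, :-1])
    return e

# Iterated conditional modes: flip each variable if it lowers the energy.
labels = (unary > 0).astype(int)
for _ in range(10):
    for i in range(H):
        for j in range(W):
            for v in (0, 1):
                cand = labels.copy()
                cand[i, j] = v
                if energy(cand) < energy(labels):
                    labels = cand
print("final energy:", energy(labels))
```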


Computer Vision and Pattern Recognition | 2016

Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

Varun Jampani; Martin Kiefel; Peter V. Gehler

Bilateral filters have widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows learning high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for use with high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the usage of general forms of filters.
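
As a concrete reference point, the following NumPy sketch implements the standard hand-chosen case that the paper generalizes: a brute-force Gaussian bilateral filter acting in a joint position/intensity feature space. The image size, the sigmas, and the toy input are assumptions; the efficient permutohedral-lattice implementation and the learned filters are not reproduced here.

```python
import numpy as np

def bilateral_filter(img, sigma_spatial=3.0, sigma_range=0.1):
    """Brute-force bilateral filter: a Gaussian in a joint position/intensity
    feature space. This is the fixed, hand-chosen filter that the paper
    generalizes to learnable high-dimensional filters."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel becomes a point in a 3-D feature space (x, y, intensity).
    feats = np.stack([xs.ravel() / sigma_spatial,
                      ys.ravel() / sigma_spatial,
                      img.ravel() / sigma_range], axis=1)
    # Pairwise squared distances in feature space give Gaussian filter weights.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    weights = np.exp(-0.5 * d2)
    out = weights @ img.ravel() / weights.sum(axis=1)   # normalized filtering
    return out.reshape(h, w)

# Tiny example: a noisy step edge is smoothed without blurring the edge.
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))]) + 0.05 * rng.normal(size=(16, 16))
print(bilateral_filter(img).round(2))
```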


Computer Vision and Pattern Recognition | 2016

Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Jun Xie; Martin Kiefel; Ming-Ting Sun; Andreas Geiger

Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixel-wise annotation of images at very large scale is labor-intensive, and little labeled data is available, particularly at the instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.
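
The projection step at the heart of 3D-to-2D label transfer can be sketched as follows: points inside a rough 3D bounding primitive are projected with a pinhole camera and splatted into a 2D label map. The camera intrinsics, the box, and the class id are hypothetical, and the paper's probabilistic transfer model is not reproduced.

```python
import numpy as np

# Hypothetical pinhole camera intrinsics and an axis-aligned 3D box
# annotated with a semantic class (both made up for illustration).
K = np.array([[500.0, 0.0, 64.0],
              [0.0, 500.0, 64.0],
              [0.0, 0.0, 1.0]])
box_min = np.array([-1.0, -1.0, 8.0])      # metres, camera coordinates
box_max = np.array([1.0, 1.0, 10.0])
CLASS_ID = 3                                # e.g. "car"

def transfer_labels(points, height=128, width=128):
    """Project the 3D points that fall inside the annotated box and mark the
    corresponding pixels with the box's class label."""
    label_map = np.zeros((height, width), dtype=np.uint8)
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    proj = (K @ points[inside].T).T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)        # perspective division
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    label_map[uv[valid, 1], uv[valid, 0]] = CLASS_ID
    return label_map

# Toy point cloud: some points inside the box, some outside.
rng = np.random.default_rng(0)
pts = rng.uniform([-2, -2, 5], [2, 2, 12], size=(5000, 3))
print("labelled pixels:", (transfer_labels(pts) == CLASS_ID).sum())
```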


Computer Vision and Pattern Recognition | 2017

Unite the People: Closing the Loop Between 3D and 2D Human Representations

Christoph Lassner; Javier Romero; Martin Kiefel; Federica Bogo; Michael J. Black; Peter V. Gehler

3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool for obtaining 3D fits in-the-wild. However, depending on the level of detail, it can be hard or impossible to acquire labeled data for training 2D estimators at large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high-quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91-landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable at large scale. The data, code and models are available for research purposes.
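
The loop described above can be summarized structurally as a fit / annotate / train / refit cycle. Every function in the sketch below is a hypothetical stand-in, not the paper's released code; the fake quality scores and the threshold exist only to make the control flow runnable.

```python
# Structural sketch of "closing the loop": fit a 3D body model to 2D datasets,
# keep only fits that human annotators accept, train a discriminative 2D
# predictor on the derived labels, and feed its predictions back into fitting.
# Every function here is a hypothetical stand-in, not the paper's code.

def fit_3d_body(image, keypoints_2d=None):
    """Stand-in for an (extended) SMPLify-style fit of a 3D body model."""
    quality = ((sum(map(ord, image)) * 7) % 100) / 100.0   # fake fit quality
    return {"image": image, "fit_quality": quality}

def human_accepts(fit):
    """Annotators only sort fits into good and bad; a threshold stands in here."""
    return fit["fit_quality"] > 0.5

def train_predictor(dataset):
    """Stand-in for training the 31-segment / 91-landmark predictor."""
    return lambda image: {"keypoints_2d": None}            # would refine later fits

images = [f"img_{i}.jpg" for i in range(10)]               # hypothetical 2D pose dataset
up_3d = [f for f in (fit_3d_body(im) for im in images) if human_accepts(f)]
predictor = train_predictor(up_3d)                         # closes the loop:
up_3d += [f for im in images                               # improved 2D estimates yield
          if human_accepts(f := fit_3d_body(im, predictor(im)["keypoints_2d"]))]
print("accepted fits in UP-3D:", len(up_3d))
```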


European Conference on Computer Vision | 2016

Superpixel Convolutional Networks using Bilateral Inceptions

Raghudeep Gadde; Varun Jampani; Martin Kiefel; Daniel Kappler; Peter V. Gehler

In this paper, we propose a CNN architecture for semantic image segmentation. We introduce a new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super)pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full-resolution segmentation result from the lower-resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (\(1\times 1\) convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while remaining competitive in runtime.
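
A minimal sketch of the module's core operation, bilateral filtering between superpixels, is given below. It assumes made-up guidance features and a single fixed scale; the multi-scale design, the learned feature spaces, and the resolution recovery are not shown.

```python
import numpy as np

def bilateral_superpixel_filtering(sp_features, sp_values, sigma=1.0):
    """Filter per-superpixel CNN activations with Gaussian weights computed
    between superpixel feature vectors (e.g. mean colour and centroid).
    Single fixed scale; the module stacks several scales with learned features."""
    d2 = ((sp_features[:, None, :] - sp_features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    w /= w.sum(axis=1, keepdims=True)            # normalise each row
    return w @ sp_values                          # information flow between superpixels

# Toy example: 6 superpixels, 3-D guidance features, 4-channel activations.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 3))                  # hypothetical (colour, position) features
acts = rng.normal(size=(6, 4))                   # hypothetical per-superpixel activations
print(bilateral_superpixel_filtering(feats, acts).shape)   # -> (6, 4)
```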


ACM Multimedia | 2016

Barrista: Caffe Well-Served

Christoph Lassner; Daniel Kappler; Martin Kiefel; Peter V. Gehler

The caffe framework is one of the leading deep learning toolboxes in the machine learning and computer vision community. While it offers efficiency and configurability, it falls short of a full interface to Python. With increasingly involved procedures for training deep networks that reach depths of hundreds of layers, creating configuration files and keeping them consistent becomes an error-prone process. We introduce the barrista framework, offering full, pythonic control over caffe. It separates responsibilities and offers code to solve frequently occurring tasks for pre-processing, training and model inspection. It is compatible with all caffe versions since mid 2015 and can import and export .prototxt files. Examples are included, e.g., a deep residual network implemented in only 172 lines (for arbitrary depths), compared to 2320 lines in the official implementation of the equivalent model.


German Conference on Pattern Recognition | 2014

Probabilistic Progress Bars

Martin Kiefel; Christian J. Schuler; Philipp Hennig

Predicting the time at which the integral over a stochastic process reaches a target level is of interest in many applications. Often, such computations have to be made at low cost, in real time. As an intuitive example that captures many features of this problem class, we choose progress bars, a ubiquitous element of computer user interfaces. These predictors are usually based on simple point estimators with no error modelling. This leads to fluctuating behaviour that confuses the user. It also does not provide a predictive distribution (risk values), which is crucial for many other application areas. We construct and empirically evaluate a fast, constant-cost algorithm using a Gauss-Markov process model which provides more information to the user.
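
As a rough illustration of the prediction task, the following sketch fits a simple drift-plus-noise model to observed progress increments and returns percentiles of the remaining time by Monte Carlo simulation. This stands in for, and is much cruder than, the paper's constant-cost Gauss-Markov inference; all parameters are illustrative.

```python
import numpy as np

def remaining_time_distribution(progress_obs, dt=1.0, target=1.0, n_samples=2000):
    """Estimate a distribution over the remaining time until cumulative
    progress hits `target`, using a drift + Gaussian noise model fitted to
    the observed per-step increments."""
    increments = np.diff(progress_obs)
    drift, noise = increments.mean(), increments.std() + 1e-9
    rng = np.random.default_rng(0)
    done, remaining = progress_obs[-1], []
    for _ in range(n_samples):
        level, steps = done, 0
        while level < target and steps < 10_000:
            level += rng.normal(drift, noise)      # simulate future progress
            steps += 1
        remaining.append(steps * dt)
    return np.percentile(remaining, [10, 50, 90])  # risk values, not a point estimate

# Example: roughly 40% done after 20 noisy progress steps.
obs = np.cumsum(np.abs(np.random.default_rng(1).normal(0.02, 0.01, size=20)))
print("remaining time (10/50/90th percentile):", remaining_time_distribution(obs))
```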


Conference on Decision and Control | 2011

Stochastic nonlinear open-loop feedback control with guaranteed error bounds using compactly supported wavelets

Achim Hekler; Martin Kiefel; Uwe D. Hanebeck

In model predictive control, a high quality of control can only be achieved if the model of the system reflects the real-world process as precisely as possible. Therefore, the controller should be capable of both handling a nonlinear system description and systematically incorporating uncertainties affecting the system. Since stochastic nonlinear model predictive control (SNMPC) problems in general cannot be solved in closed form, either the system model or the occurring densities have to be approximated. In this paper, we present an SNMPC framework that approximates the densities and the reward function by their wavelet expansions. Due to the few requirements on the shape and family of the densities or reward function, the presented technique can be applied to a large class of SNMPC problems. To accelerate the optimization, we additionally present an efficient technique, so-called dynamic thresholding, which neglects insignificant coefficients while at the same time guaranteeing that the optimal control input is still obtained. The capabilities of the proposed approach are demonstrated in simulations and in comparisons to a particle-based SNMPC method.
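
The thresholding idea can be illustrated on a Haar wavelet expansion: small coefficients are dropped only while an explicit bound on the reconstruction error stays below a tolerance. The Haar basis, the grid, and the tolerance are illustrative choices; the SNMPC optimization itself and the paper's dynamic thresholding criterion are not shown.

```python
import numpy as np

def haar_transform(x):
    """Orthonormal 1-D Haar wavelet transform (length must be a power of two)."""
    coeffs, approx = [], x.astype(float)
    while len(approx) > 1:
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # scaling coefficients
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail coefficients
        coeffs.append(d)
        approx = a
    return approx, coeffs

def inverse_haar(approx, coeffs):
    """Invert the orthonormal Haar transform."""
    x = approx
    for d in reversed(coeffs):
        up = np.empty(2 * len(x))
        up[0::2] = (x + d) / np.sqrt(2)
        up[1::2] = (x - d) / np.sqrt(2)
        x = up
    return x

def threshold(coeffs, tol):
    """Drop the smallest detail coefficients while the accumulated (orthonormal)
    L2 reconstruction error stays below `tol`, i.e. a guaranteed error bound."""
    flat = np.concatenate(coeffs)
    order = np.argsort(np.abs(flat))
    err2, keep = 0.0, np.ones(len(flat), dtype=bool)
    for i in order:
        if err2 + flat[i] ** 2 > tol ** 2:
            break
        err2 += flat[i] ** 2
        keep[i] = False
    flat = np.where(keep, flat, 0.0)
    out, pos = [], 0
    for d in coeffs:
        out.append(flat[pos:pos + len(d)])
        pos += len(d)
    return out, np.sqrt(err2)

# Example: approximate a density evaluated on a grid with a bounded L2 error.
x = np.linspace(-3, 3, 64)
density = np.exp(-0.5 * x ** 2)
approx, coeffs = haar_transform(density)
sparse, err_bound = threshold(coeffs, tol=0.05)
recon = inverse_haar(approx, sparse)
print("guaranteed bound:", err_bound, "actual error:", np.linalg.norm(density - recon))
```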


Journal of Machine Learning Research | 2013

Quasi-Newton methods: a new direction

Philipp Hennig; Martin Kiefel


Neural Information Processing Systems | 2011

Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance

Peter V. Gehler; Carsten Rother; Martin Kiefel; Lumin Zhang; Bernhard Schölkopf

Collaboration


Dive into Martin Kiefel's collaborations.

Top Co-Authors

Achim Hekler

Karlsruhe Institute of Technology

Carsten Rother

Dresden University of Technology
