Publication


Featured research published by Hanno Ackermann.


Computer Vision and Pattern Recognition | 2010

Multilinear pose and body shape estimation of dressed subjects from image sets

Nils Hasler; Hanno Ackermann; Bodo Rosenhahn; Thorsten Thormählen; Hans-Peter Seidel

In this paper we propose a multilinear model of human pose and body shape which is estimated from a database of registered 3D body scans in different poses. The model is generated by factorizing the measurements into pose- and shape-dependent components. By combining it with an ICP-based registration method, we are able to estimate the pose and body shape of dressed subjects from single images. If several images of the subject are available, shape and poses can be optimized simultaneously for all input images. Additionally, while estimating pose and shape, we use the model as a virtual calibration pattern and also recover the parameters of the perspective camera model with which the images were created.
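
As a rough illustration of the factorization idea only (not the authors' implementation), the sketch below assumes the registered scans are stacked into a third-order tensor indexed by subject, pose and vertex coordinate, and separates shape- and pose-dependent components with mode-wise SVDs, i.e. a truncated HOSVD; the tensor dimensions and ranks are placeholder values.

```python
import numpy as np

def unfold(T, mode):
    """Matricise tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor per mode plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T.copy()
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# toy stand-in for registered scans: 10 subjects x 5 poses x 300 vertex coordinates
scans = np.random.randn(10, 5, 300)
core, (shape_basis, pose_basis, vertex_basis) = hosvd(scans, ranks=(4, 3, 20))
```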


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

3D Reconstruction of Human Motion from Monocular Image Sequences

Bastian Wandt; Hanno Ackermann; Bodo Rosenhahn

This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize the 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions under arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
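
To make the factorization step concrete, here is a minimal sketch of one sub-problem: recovering the mixing coefficients for a single frame when the base poses and a scaled-orthographic camera are already known. The joint estimation of cameras, coefficients and regularization terms described in the article is not reproduced; all shapes and values are illustrative.

```python
import numpy as np

def fit_coefficients(W_f, M_f, bases):
    """Solve W_f ~ M_f @ sum_k c_k B_k for the mixing coefficients c by least squares.
    W_f: (2, J) observed 2D joints, M_f: (2, 3) scaled-orthographic camera,
    bases: (K, 3, J) pretrained base poses."""
    A = np.stack([(M_f @ B).ravel() for B in bases], axis=1)   # (2J, K) design matrix
    c, *_ = np.linalg.lstsq(A, W_f.ravel(), rcond=None)
    return c

# toy example with 15 joints and 8 base poses
J, K = 15, 8
bases = np.random.randn(K, 3, J)
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                    # orthographic projection onto xy
coeffs_true = np.random.randn(K)
W = M @ np.tensordot(coeffs_true, bases, axes=1)   # synthetic 2D observations
print(np.allclose(fit_coefficients(W, M, bases), coeffs_true))
```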


IEICE Transactions on Information and Systems | 2007

Uncalibrated Factorization Using a Variable Symmetric Affine Camera

Kenichi Kanatani; Yasuyuki Sugaya; Hanno Ackermann

In order to reconstruct 3-D Euclidean shape by the Tomasi-Kanade factorization, one needs to specify an affine camera model such as orthographic, weak perspective, and paraperspective. We present a new method that does not require any such specific models. We show that a minimal requirement for an affine camera to mimic perspective projection leads to a unique camera model, called symmetric affine camera, which has two free functions. We determine their values from input images by linear computation and demonstrate by experiments that an appropriate camera model is automatically selected.
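
For context, a minimal sketch of the Tomasi-Kanade factorization that the paper builds on: centre a 2F x P measurement matrix and split it into affine motion and shape with a rank-3 SVD. The paper's actual contribution, the metric upgrade with the symmetric affine camera, is not reproduced here.

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade-style factorization of a 2F x P measurement matrix: subtract the
    per-row centroid, take a rank-3 SVD, and split into motion (2F x 3) and shape
    (3 x P). The result is defined only up to an affine ambiguity."""
    t = W.mean(axis=1, keepdims=True)          # per-row translation (image centroids)
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # affine motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]       # affine shape
    return M, S, t
```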


European Conference on Computer Vision | 2014

Clustering with Hypergraphs: The Case for Large Hyperedges

Tat-Jun Chin; Hanno Ackermann; David Suter

The extension of conventional clustering to hypergraph clustering, which involves higher-order similarities instead of pairwise similarities, is increasingly gaining attention in computer vision. This is because many clustering problems require an affinity measure that involves a subset of data of size greater than two. In the context of hypergraph clustering, the calculation of such higher-order similarities on data subsets gives rise to hyperedges. Almost all previous work on hypergraph clustering in computer vision, however, has considered the smallest possible hyperedge size, due to a lack of study into the potential benefits of large hyperedges and effective algorithms to generate them. In this paper, we show that large hyperedges are better from both a theoretical and an empirical standpoint. We then propose a novel guided sampling strategy for large hyperedges, based on the concept of random cluster models. Our method can generate large pure hyperedges that significantly improve grouping accuracy without exponential increases in sampling costs. We demonstrate the efficacy of our technique on various higher-order grouping problems. In particular, we show that our approach improves the accuracy and efficiency of motion segmentation from dense, long-term trajectories.
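
The following is a deliberately simplified guided-sampling heuristic, not the paper's random-cluster-model sampler: a large hyperedge is grown by repeatedly drawing the next member with probability proportional to its mean affinity to the members already chosen. Non-negative pairwise affinities are assumed.

```python
import numpy as np

def sample_large_hyperedge(affinity, size, rng=None):
    """Grow one hyperedge of the requested size: start from a random point and
    repeatedly draw the next member with probability proportional to its mean
    affinity to the members chosen so far. Affinities are assumed non-negative,
    with positive values between points of the same structure."""
    rng = rng or np.random.default_rng()
    n = affinity.shape[0]
    members = [int(rng.integers(n))]
    while len(members) < size:
        scores = affinity[members].mean(axis=0)
        scores[members] = 0.0                        # never pick an already chosen point
        members.append(int(rng.choice(n, p=scores / scores.sum())))
    return members
```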


ISPRS Journal of Photogrammetry and Remote Sensing | 2017

On support relations and semantic scene graphs

Michael Ying Yang; Wentong Liao; Hanno Ackermann; Bodo Rosenhahn

Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry, and scene graphs provide valuable information for it. This paper proposes a novel framework for the automatic generation of semantic scene graphs which interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred by exploiting two important sources of auxiliary information in indoor environments: physical stability and prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods for evaluating the generated scene graphs, which this community has been lacking. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs match the ground truth accurately.
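
As a toy illustration of combining physical stability with prior support knowledge (the detector, the stability computation and the numbers below are placeholders, not the paper's implementation):

```python
# Hypothetical inputs: detected object instances, a pairwise physical-stability score,
# and a prior probability that category A supports category B. In the paper these come
# from a CNN detector and indoor-scene statistics; the values here are made up.
objects = ["floor", "table", "cup"]
stability = {("floor", "table"): 0.9, ("table", "cup"): 0.8, ("floor", "cup"): 0.2}
prior = {("floor", "table"): 0.7, ("table", "cup"): 0.6, ("floor", "cup"): 0.1}

def infer_support_edges(objects, stability, prior, alpha=0.5):
    """For every object, pick the supporter maximising a weighted combination of
    physical stability and prior support knowledge; objects with no plausible
    supporter (here, the floor) get no incoming edge."""
    edges = []
    for obj in objects:
        candidates = [(s, alpha * stability.get((s, obj), 0.0)
                       + (1.0 - alpha) * prior.get((s, obj), 0.0))
                      for s in objects if s != obj]
        supporter, score = max(candidates, key=lambda c: c[1])
        if score > 0.0:
            edges.append((supporter, obj))       # edge: supporter -> supported object
    return edges

print(infer_support_edges(objects, stability, prior))
# [('floor', 'table'), ('table', 'cup')]
```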


Computer Vision and Pattern Recognition | 2015

3D human motion capture from monocular image sequences

Bastian Wandt; Hanno Ackermann; Bodo Rosenhahn

This paper tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize the 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions under arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
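
A minimal sketch of the periodicity assumption on the coefficients: each coefficient trajectory is modelled as a truncated Fourier series with a known period and fitted by least squares. The period, the number of harmonics and the fitting procedure here are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def periodic_design_matrix(n_frames, period, n_harmonics=2):
    """Basis for per-frame mixing coefficients under a periodicity assumption:
    a constant term plus cosine and sine harmonics of the given period."""
    f = np.arange(n_frames)[:, None]
    h = np.arange(1, n_harmonics + 1)[None, :]
    phase = 2 * np.pi * f * h / period
    return np.hstack([np.ones((n_frames, 1)), np.cos(phase), np.sin(phase)])

# fit a noisy periodic coefficient trajectory (e.g. one walking-cycle coefficient)
frames, period = 100, 25
t = np.arange(frames)
c_obs = 0.3 + 0.8 * np.sin(2 * np.pi * t / period) + 0.05 * np.random.randn(frames)
A = periodic_design_matrix(frames, period)
w, *_ = np.linalg.lstsq(A, c_obs, rcond=None)
c_smooth = A @ w          # periodicity-constrained coefficient estimate
```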


German Conference on Pattern Recognition | 2017

Deep Learning for Vanishing Point Detection Using an Inverse Gnomonic Projection

Florian Kluger; Hanno Ackermann; Michael Ying Yang; Bodo Rosenhahn

We present a novel approach for vanishing point detection from uncalibrated monocular images. In contrast to the state of the art, we make no a priori assumptions about the observed scene. Our method is based on a convolutional neural network (CNN) which does not use natural images, but a Gaussian sphere representation arising from an inverse gnomonic projection of lines detected in an image. This allows us to rely on synthetic data for training, eliminating the need for labelled images. Our method achieves competitive performance on three horizon estimation benchmark datasets. We further highlight additional use cases of our vanishing point detection algorithm.
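
For intuition about the Gaussian sphere representation (the CNN stage of the paper is not reproduced): a detected line segment maps to the great circle whose normal is the cross product of its endpoints in normalised camera coordinates, and a vanishing direction is the direction most orthogonal to many such normals. The focal length and principal point below are assumptions, not values from the paper.

```python
import numpy as np

def great_circle_normal(p1, p2, focal=1.0, principal_point=(0.0, 0.0)):
    """Map a line segment (two image endpoints, in pixels) onto the Gaussian sphere:
    in normalised camera coordinates the segment's interpretation plane has normal
    p1 x p2, i.e. the great circle the segment votes for."""
    u0, v0 = principal_point
    h1 = np.array([(p1[0] - u0) / focal, (p1[1] - v0) / focal, 1.0])
    h2 = np.array([(p2[0] - u0) / focal, (p2[1] - v0) / focal, 1.0])
    n = np.cross(h1, h2)
    return n / np.linalg.norm(n)

def vanishing_direction(normals):
    """Direction most orthogonal to all great-circle normals: the right singular
    vector with the smallest singular value."""
    _, _, Vt = np.linalg.svd(np.asarray(normals))
    return Vt[-1]
```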


Computer Vision and Pattern Recognition | 2009

Trajectory reconstruction for affine structure-from-motion by global and local constraints

Hanno Ackermann; Bodo Rosenhahn

The problem of reconstructing a 3D scene from a moving camera can be solved by means of the so-called Factorization method. It directly computes a global solution without the need to merge several partial reconstructions. However, if the trajectories are not complete, i.e. not every feature point could be observed in all images, this method cannot be used. We use a Factorization-style algorithm that recovers the unobserved feature positions in a non-incremental way. This method uniformly utilizes all data and finds a global solution without any need for sequential or hierarchical merging. Two contributions are made in this work: firstly, partially known trajectories are completed by minimizing the distance between the trajectory and the subspace, within an affine subspace associated with the trajectory; this amounts to imposing a global constraint on the data. Secondly, we propose to further include local constraints derived from epipolar geometry in the estimation. It is shown how to optimize both constraints simultaneously. Using simulated and real image sequences, we show the improvements achieved with our algorithm.
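
A minimal sketch of the global (subspace) constraint alone: missing entries of the 2F x P trajectory matrix are filled by alternating between a low-rank SVD fit and re-imputation, an EM-style completion. The epipolar (local) constraints and the paper's exact formulation are not reproduced; the rank and iteration count are illustrative.

```python
import numpy as np

def complete_trajectories(W, rank=4, n_iters=50):
    """Fill missing entries (NaN) of a 2F x P trajectory matrix by alternating
    between a rank-r SVD fit and re-imputing the unobserved entries from that fit.
    The observed entries are never altered."""
    observed = ~np.isnan(W)
    X = np.where(observed, W, np.nanmean(W))            # crude initialisation
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(observed, W, low_rank)              # keep observed data fixed
    return X
```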


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

Apathy Is the Root of All Expressions

Stella Graßhof; Hanno Ackermann; Sami S. Brandt; Jörn Ostermann

In this paper, we present a new statistical model for human faces. Our approach is built upon a tensor factorisation model that allows controlled estimation, morphing and transfer of new facial shapes and expressions. We propose a direct parametrisation and regularisation for the person- and expression-related terms so that the training database is well utilised. In contrast to existing works, we are the first to reveal that the expression subspace is star-shaped. This stems from the fact that increasing the strength of an expression approximately forms a linear trajectory in the expression subspace, and all these linear trajectories intersect in a single point which corresponds to the point of no expression, the point of apathy. After centring our analysis at this point, we demonstrate how the dimensionality of the expression subspace can be further reduced by projection pursuit with the help of the fourth-order moment tensor. The results show that our method achieves convincing separation of the person-specific and expression subspaces as well as flexible, natural modelling of facial expressions for a wide variety of human faces. With the proposed approach, one can morph between different persons and different expressions even if they do not exist in the database. In contrast to the state of the art, the morphing works without causing strong deformations. The results in expression classification are also improved.
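
To illustrate the star-shape observation: if each expression traces an approximately linear trajectory in the expression subspace, the point of apathy can be estimated as the least-squares intersection of those lines. The sketch below assumes each trajectory is summarised by one point and one direction; it is not the paper's estimation procedure.

```python
import numpy as np

def apathy_point(points, directions):
    """Least-squares intersection of lines x = p_i + t * d_i, i.e. the point minimising
    the summed squared distances to all lines: solve (sum_i P_i) x = sum_i P_i p_i
    with P_i = I - d_i d_i^T. One (point, direction) pair per expression trajectory
    in the expression subspace is assumed."""
    points = np.asarray(points, dtype=float)
    directions = np.asarray(directions, dtype=float)
    dim = points.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(dim) - np.outer(d, d)
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```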


RobVis'08: Proceedings of the 2nd International Conference on Robot Vision | 2008

Iterative low complexity factorization for projective reconstruction

Hanno Ackermann; Kenichi Kanatani

We present a highly efficient method for estimating structure and motion from image sequences taken by uncalibrated cameras. The basic principle is to perform projective reconstruction first, followed by a Euclidean upgrade. However, the projective reconstruction step dominates the total computation time, because one needs to solve eigenproblems of matrices whose size depends on the number of frames or feature points. In this paper, we present a new algorithm that yields the same solution using only matrices of constant size, irrespective of the number of frames or points. We demonstrate the superior performance of our algorithm using synthetic and real video images.
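
For context, a standard Sturm-Triggs-style projective factorization iteration (not the paper's low-complexity algorithm, which avoids the large decompositions): scale the homogeneous image points by projective depths, factorise the 3F x P matrix to rank 4, re-read the depths from the fitted projections, and repeat. The depth normalisation below is a simplification of the usual balancing step.

```python
import numpy as np

def projective_factorization(x, n_iters=20):
    """Iterative projective factorization. x: (F, P, 3) homogeneous image points with
    last coordinate 1. Returns projective cameras (F, 3, 4) and homogeneous structure
    (4, P), both defined only up to a projective ambiguity."""
    F, P, _ = x.shape
    lam = np.ones((F, P))                              # initial projective depths
    for _ in range(n_iters):
        W = (lam[..., None] * x).transpose(0, 2, 1).reshape(3 * F, P)
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        cams = (U[:, :4] * s[:4]).reshape(F, 3, 4)     # projective cameras
        pts = Vt[:4]                                   # 4 x P homogeneous structure
        lam = (cams @ pts)[:, 2, :]                    # depths re-read from the fit
        lam = lam / np.abs(lam).mean()                 # keep depths from drifting
    return cams, pts
```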

Collaboration


Dive into Hanno Ackermann's collaborations.

Top Co-Authors

Yasuyuki Sugaya (Toyohashi University of Technology)
Sami S. Brandt (University of Copenhagen)
Weiyao Lin (Shanghai Jiao Tong University)
David Suter (University of Adelaide)