Ulrich Weidenbacher
University of Ulm
Publications
Featured research published by Ulrich Weidenbacher.
PLOS ONE | 2009
Ulrich Weidenbacher; Heiko Neumann
Background: Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D partially occludes surfaces, depending on their position in depth and on the viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces.
Methodology/Principal Findings: In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit so that they can interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection and demonstrate that our approach outperforms simple feedforward approaches.
Conclusions/Significance: A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals, which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.
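The processing pipeline summarized above (iterative contour grouping followed by a junction readout) can be sketched in a few lines of code. The sketch below is a loose illustration under simplifying assumptions, not the published model: oriented contrast responses are pooled with elongated kernels, enhanced by a few cycles of same-orientation context modulation, and junction candidates are then read out where two well-separated orientations are both strongly active. All kernel sizes, gains, and thresholds are illustrative.

```python
# Toy sketch only: not the published model. All parameters are assumptions.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def oriented_kernel(theta, radius=5, sigma_long=3.0, sigma_short=1.0):
    """Elongated Gaussian aligned with orientation theta (pools along contours)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    u = np.cos(theta) * x + np.sin(theta) * y      # coordinate along the contour
    v = -np.sin(theta) * x + np.cos(theta) * y     # coordinate across the contour
    k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
    return k / k.sum()

def oriented_responses(image, n_orient=8):
    """Contrast energy pooled along each of n_orient contour orientations."""
    gy, gx = np.gradient(gaussian_filter(image, 1.0))
    edge_mag = np.hypot(gx, gy)
    return np.stack([convolve(edge_mag, oriented_kernel(np.pi * k / n_orient))
                     for k in range(n_orient)])

def feedback_step(resp, gain=2.0, sigma_ctx=4.0):
    """One feedforward/feedback cycle: each orientation channel is modulated by
    its own smoothed context, strengthening collinear groupings and suppressing
    isolated (noisy) activations; responses are then renormalized."""
    ctx = np.stack([gaussian_filter(r, sigma_ctx) for r in resp])
    mod = resp * (1.0 + gain * ctx / (ctx.max() + 1e-8))
    return mod / (mod.max() + 1e-8)

def junction_candidates(resp, rel=0.6, abs_thresh=0.2, min_sep=3):
    """Flag pixels where a second orientation, at least min_sep channels away
    from the dominant one, responds almost as strongly as the dominant one."""
    n = resp.shape[0]
    dominant = resp.argmax(axis=0)
    best = resp.max(axis=0)
    second = np.zeros_like(best)
    for k in range(n):
        far = [j for j in range(n) if min(abs(j - k), n - abs(j - k)) >= min_sep]
        second = np.where(dominant == k, resp[far].max(axis=0), second)
    return (best > abs_thresh) & (second > rel * best)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                        # a square produces corner junctions
    resp = oriented_responses(img)
    for _ in range(4):                             # a few cycles stabilize the grouping
        resp = feedback_step(resp)
    print("junction candidate pixels:", int(junction_candidates(resp).sum()))
```

The multiplicative modulation followed by renormalization reflects the general idea that feedback biases existing activity rather than creating it; the published model's dynamics and junction readout differ in detail.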
Perception and Interactive Technologies | 2006
Ulrich Weidenbacher; Georg Layher; Pierre Bayerl; Heiko Neumann
In this contribution, we extend existing methods for head pose estimation and investigate the use of local image phase for gaze detection. Moreover, we describe how a small database of face images with ground truth for head pose and gaze direction was acquired. With this database, we compare two different computational approaches for extracting the head pose. We demonstrate that a simple implementation of the proposed methods, without extensive training sessions or calibration, is sufficient to accurately detect the head pose for human-computer interaction. Furthermore, we propose how eye gaze can be extracted based on local filter responses and the detected head pose. In all, we present a framework in which different approaches are combined into a single system for extracting information about the attentional state of a person.
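One plausible way to read the "local image phase" cue mentioned above is phase-based estimation of small displacements, sketched below in one dimension. The Gabor parameters, the 1-D setup, and the synthetic scanlines are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of phase-based shift estimation with a 1-D complex Gabor filter.
import numpy as np

def gabor_1d(n=21, wavelength=8.0, sigma=4.0):
    """Complex 1-D Gabor: Gaussian envelope times a complex carrier."""
    x = np.arange(n) - n // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * x / wavelength)

def local_phase(signal, gab):
    """Phase of the complex filter response at the centre of the signal."""
    resp = np.convolve(signal, gab, mode="same")
    return np.angle(resp[len(signal) // 2])

def phase_shift_estimate(row_a, row_b, wavelength=8.0):
    """Displacement of row_b relative to row_a: shift ~ delta_phase * lambda / (2*pi)."""
    gab = gabor_1d(wavelength=wavelength)
    dphi = np.angle(np.exp(1j * (local_phase(row_a, gab) - local_phase(row_b, gab))))
    return dphi * wavelength / (2 * np.pi)

if __name__ == "__main__":
    x = np.arange(64, dtype=float)
    row_a = np.cos(2 * np.pi * x / 8.0)            # synthetic eye-region scanline
    row_b = np.cos(2 * np.pi * (x - 3.0) / 8.0)    # same scanline shifted by 3 px
    print("estimated shift (px):", round(phase_shift_estimate(row_a, row_b), 2))
```

Phase differences are only unambiguous for shifts smaller than half the filter wavelength, which is why phase cues are typically combined with a coarser estimate such as the detected head pose.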
Tests and Proofs | 2006
Ulrich Weidenbacher; Pierre Bayerl; Heiko Neumann; Roland W. Fleming
Many materials, including water, plastic, and metal, have specular surface characteristics. Specular reflections have commonly been considered a nuisance for the recovery of object shape. However, the way that reflections are distorted across the surface depends crucially on 3D curvature, suggesting that they could, in fact, be a useful source of information. Indeed, observers can have a vivid impression of 3D shape when an object is perfectly mirrored (i.e., the image contains nothing but specular reflections). This raises the question of what mechanisms the visual system uses to extract this 3D shape information from a perfectly mirrored object. In this paper, we propose a biologically motivated recurrent model for the extraction of visual features relevant for the perception of 3D shape information from images of mirrored objects. We qualitatively and quantitatively analyze the results of computational model simulations and show that bidirectional recurrent information processing leads to better results than pure feedforward processing. Furthermore, we utilize the model output to create a rough nonphotorealistic sketch representation of a mirrored object, which emphasizes image features that are mandatory for 3D shape perception (e.g., the occluding contour and regions of high curvature). Moreover, this sketch illustrates that the model generates a representation of object features independent of the surrounding scene reflected in the mirrored object.
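The comparison between pure feedforward and bidirectional recurrent processing can be illustrated schematically. The toy code below is not the published model: a lower stage of local responses receives multiplicative feedback from a pooling stage, and a few iterations raise the contrast of coherent structure relative to noise; all stage definitions and parameters are assumptions.

```python
# Schematic toy comparison of feedforward vs. recurrent processing (assumptions only).
import numpy as np
from scipy.ndimage import gaussian_filter

def feedforward(lower, sigma_pool=3.0):
    """Higher stage: spatial pooling of lower-stage responses."""
    return gaussian_filter(lower, sigma_pool)

def feedback(lower, higher, gain=2.0):
    """Lower stage: modulatory feedback plus renormalization, so feedback can
    enhance existing activity but cannot create activity on its own."""
    modulated = lower * (1.0 + gain * higher / (higher.max() + 1e-8))
    return modulated / (modulated.max() + 1e-8)

def run_recurrent(lower, n_iter=5):
    """Bidirectional processing: alternate feedforward pooling and feedback."""
    for _ in range(n_iter):
        lower = feedback(lower, feedforward(lower))
    return lower

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pattern = np.zeros((64, 64))
    pattern[:, 30:34] = 0.4                        # weak coherent ridge ...
    noisy = pattern + 0.3 * rng.random((64, 64))   # ... embedded in noise

    def ridge_contrast(m):                         # structure-to-background ratio
        return m[:, 30:34].mean() / m[:, :20].mean()

    print("contrast, single feedforward pass:", round(ridge_contrast(feedforward(noisy)), 2))
    print("contrast, after recurrent feedback:", round(ridge_contrast(run_recurrent(noisy)), 2))
```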
Applied Perception in Graphics and Visualization | 2005
Ulrich Weidenbacher; Pierre Bayerl; Roland W. Fleming; Heiko Neumann
Many materials, including water, plastic and metal, have specular surface characteristics. Specular reflections have commonly been considered a nuisance for the recovery of object shape. However, the way that reflections are distorted across the surface depends crucially on 3D curvature, suggesting that they could in fact be a useful source of information. Indeed, observers can have a vivid impression of 3D shape when an object is perfectly mirrored (i.e. the image contains nothing but specular reflections). This raises the question of what mechanisms the visual system uses to extract this 3D shape information from a perfectly mirrored object. In this paper we propose a biologically motivated recurrent model for the extraction of visual features relevant for the perception of 3D shape information from images of mirrored objects. We analyze qualitatively and quantitatively the results of computational model simulations and show that bidirectional recurrent information processing leads to better results than pure feedforward processing. Furthermore, we utilize the model output to create a rough non-photorealistic sketch representation of a mirrored object, which emphasizes image features that are mandatory for 3D shape perception (e.g. occluding contour, regions of high curvature). Moreover, this sketch illustrates that the model generates a representation of object features independent of the surrounding scene reflected in the mirrored object.
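The final rendering step shared by this and the journal version above (turning feature maps into a rough non-photorealistic sketch) can be illustrated with a toy example. The code below is an assumption-based stand-in, not the authors' pipeline: it combines an object's silhouette boundary with a crude high-curvature proxy computed from a synthetic depth map.

```python
# Toy sketch rendering: silhouette plus high-curvature regions (thresholds assumed).
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter, laplace

def sketch_from_depth(depth, mask, curvature_thresh=0.005):
    """Binary sketch = occluding contour of the mask + strong-curvature proxy."""
    contour = binary_dilation(mask) & ~mask        # silhouette boundary pixels
    curv = np.abs(laplace(gaussian_filter(depth, 2.0)))
    high_curv = (curv > curvature_thresh) & mask   # crude high-curvature regions
    return contour | high_curv

if __name__ == "__main__":
    y, x = np.mgrid[-32:32, -32:32] / 32.0
    r2 = x**2 + y**2
    mask = r2 < 0.8**2                                                   # circular silhouette
    depth = np.where(mask, np.sqrt(np.clip(0.8**2 - r2, 0, None)), 0.0)  # hemisphere
    sketch = sketch_from_depth(depth, mask)
    print("sketch pixels:", int(sketch.sum()), "of", sketch.size)
```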
3rd IET International Conference on Intelligent Environments (IE 07) | 2007
Ulrich Weidenbacher; Georg Layher; P.-M. Strauss; Heiko Neumann
Language Resources and Evaluation | 2006
Petra-Maria Strauß; Holger Hoffmann; Wolfgang Minker; Heiko Neumann; Günther Palm; Stefan Scherer; Friedhelm Schwenker; Harald C. Traue; Welf Walter; Ulrich Weidenbacher
Language Resources and Evaluation | 2008
Petra-Maria Strauß; Holger Hoffmann; Wolfgang Minker; Heiko Neumann; Günther Palm; Stefan Scherer; Harald C. Traue; Ulrich Weidenbacher
Journal of Vision | 2010
Ulrich Weidenbacher; Heiko Neumann
Journal of Vision | 2010
Ulrich Weidenbacher; Pierre Bayerl; Heiko Neumann
8th Tübingen Perception Conference (TWK 2005) | 2005
Ulrich Weidenbacher; Pierre Bayerl; Roland W. Fleming; Heiko Neumann