C Curio
Max Planck Society
Publications
Featured research published by C Curio.
International Conference on Intelligent Transportation Systems | 1999
C Curio; Johann Edelbrunner; Thomas Kalinke; Christos Tzomakas; W. von Seelen
In recent years, many methods for recognizing rigid obstacles such as sedans and trucks have been developed. These methods mainly provide driving-relevant information to the driver, and they are able to cope reliably with motorway scenarios. Nevertheless, not much attention has been given to image processing approaches that increase the safety of pedestrians in traffic environments. In this paper, a method for the detection, tracking, and final classification of pedestrians crossing the moving observer's trajectory is suggested, realizing a combination of data-driven and model-driven approaches. The initial detection process is based on a texture analysis and a model-based grouping of the geometric features most likely to belong to a pedestrian in intensity images. Additionally, motion patterns of limb movements are analyzed to determine initial object hypotheses. For this, tracking of the quasi-rigid part of the body is performed by different trackers that have been successfully employed for tracking sedans, trucks, motorbikes, and pedestrians. The final classification is obtained by a temporal analysis of the walking process.
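The abstract does not specify the texture measure used for initial detection; one common choice in this line of work is local grey-level entropy, where textured regions (potential objects) score high and smooth road surface scores low. A minimal illustrative sketch under that assumption:

```python
import numpy as np

def local_entropy(img, win=8):
    """Entropy of grey-level histograms in non-overlapping windows.

    High-entropy (textured) windows are candidate object regions.
    Illustrative only -- the paper's actual texture measure is not
    given in the abstract.
    """
    h, w = img.shape
    out = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            patch = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
            hist, _ = np.histogram(patch, bins=16, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

# A flat patch next to a noisy patch: the noisy one scores higher.
img = np.zeros((8, 16), dtype=np.uint8)
img[:, 8:] = np.random.default_rng(0).integers(0, 256, (8, 8))
ent = local_entropy(img)
print(ent[0, 0], ent[0, 1])  # low vs. high entropy
```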
IEEE Transactions on Industrial Electronics | 2003
T. Bucher; C Curio; Johann Edelbrunner; Christian Igel; D. Kastrup; Iris Leefken; Gesa Lorenz; Axel Steinhage; W. von Seelen
Since the potential of soft computing for driver assistance systems has been recognized, much effort has been spent on the development of appropriate techniques for robust lane detection, object classification, tracking, and representation of task-relevant objects. For such systems to perform their tasks, the environment must be sensed by one or more sensors. Usually, complex processing, fusion, and interpretation of the sensor data are required, which imposes a modular architecture on the overall system. In this paper, we present specific approaches covering the main components of such systems. We concentrate on image processing as the main source of relevant object information, on representation and fusion of data that might arise from different sensors, and on behavior planning and generation as a basis for autonomous driving. Most paradigms of soft computing are employed within our system components; in this article we focus on Kalman filtering for sensor fusion, neural field dynamics for behavior generation, and evolutionary algorithms for optimizing parts of the system.
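The Kalman filtering mentioned for sensor fusion can be sketched in its simplest linear form: a constant-velocity state model with a noisy position sensor. The matrices and noise values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # the sensor observes position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([[0.0], [0.0]])             # initial state estimate
P = np.eye(2)                            # initial state covariance

rng = np.random.default_rng(1)
for t in range(1, 50):
    # Target truly moves at 1 unit/step; the sensor adds noise.
    z = np.array([[t * 1.0 + rng.normal(0, 0.5)]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())  # estimated position and velocity, close to (49, 1)
```

With several sensors, each would contribute its own update step with its own H and R; that is the fusion aspect in a nutshell.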
Applied Perception in Graphics and Visualization | 2006
C Curio; Martin Breidt; Mario Kleiner; Quoc C. Vuong; Martin A. Giese; Hh Bülthoff
We present a system for realistic facial animation that decomposes facial motion capture data into semantically meaningful motion channels based on the Facial Action Coding System. A captured performance is retargeted onto a morphable 3D face model based on a semantic correspondence between motion capture and 3D scan data. The resulting facial animation achieves a high level of realism by combining the high spatial resolution of a 3D scanner with the high temporal accuracy of motion capture data, which accounts for subtle facial movements with sparse measurements. Such an animation system allows us to systematically investigate human perception of moving faces. It offers control over many aspects of the appearance of a dynamic face, while utilizing as much measured data as possible to avoid artistic biases. Using our animation system, we report results of an experiment that investigates the perceived naturalness of facial motion in a preference task. For expressions with small amounts of head motion, we find a benefit for our part-based generative animation system over an example-based approach that deforms the whole face at once.
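At its core, this kind of part-based generative animation reduces to a weighted sum of basis deformations over a neutral face, with per-frame weights for each motion channel. A toy sketch with made-up vertex arrays (real models use dense 3D scan meshes):

```python
import numpy as np

neutral = np.zeros((4, 3))  # 4 vertices, xyz; toy stand-in for a scanned mesh
# Two hypothetical Action-Unit-like deformation bases (vertex displacements).
au_smile = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]], float)
au_brow  = np.array([[0, 0, 0], [0, 0, 0], [0, 2, 0], [0, 2, 0]], float)
basis = np.stack([au_smile, au_brow])  # (n_channels, n_vertices, 3)

def synthesize(weights):
    """Face = neutral + sum_i w_i * delta_i, driven by per-frame weights."""
    return neutral + np.tensordot(weights, basis, axes=1)

# One animation frame: half-activated smile, quarter-activated brow raise.
frame = synthesize(np.array([0.5, 0.25]))
print(frame)
```

The retargeting step in the paper amounts to estimating such weight time courses from motion capture and playing them back through the scan-based basis.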
Virtual Reality Software and Technology | 2008
David Engel; C Curio; Lili Tcheang; Betty J. Mohler; Hh Bülthoff
Experience indicates that the sense of presence in a virtual environment is enhanced when participants are able to actively move through it. When exploring a virtual world by walking, the size of the model is usually limited by the size of the available tracking space. A promising way to overcome this limitation is motion compression, which decouples the position in the real and virtual worlds by introducing imperceptible visual-proprioceptive conflicts. Such techniques usually precalculate the redirection factors, which greatly reduces their robustness. We propose a novel way to determine the instantaneous rotational gains using a controller based on an optimization problem. We present a psychophysical study that measures the sensitivity to visual-proprioceptive conflicts during walking and use it to calibrate a real-time controller. We show the validity of our approach by allowing users to walk through virtual environments vastly larger than the tracking space.
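The basic mechanism behind rotational gains is simple: each real head-yaw increment is scaled before it is applied to the virtual camera, with the gain kept inside a range users cannot detect. A minimal sketch; the [0.8, 1.4] bounds are illustrative placeholders, whereas the paper calibrates the detection thresholds psychophysically and chooses the gain via an optimization-based controller:

```python
import numpy as np

def apply_rotation_gain(real_yaw_delta, gain, lo=0.8, hi=1.4):
    """Scale a real head-yaw increment (degrees) by a rotational gain
    clamped to an assumed imperceptibility range before applying it
    to the virtual camera."""
    g = np.clip(gain, lo, hi)
    return g * real_yaw_delta

# User turns 90 degrees physically; with gain 1.3 the virtual view
# turns about 117 degrees, so the virtual world effectively rotates
# around the user and the walkable virtual area exceeds the tracked space.
virtual = apply_rotation_gain(90.0, 1.3)
print(virtual)
```

A requested gain outside the range is clamped, which is what keeps the injected visual-proprioceptive conflict below the detection threshold.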
IEEE International Conference on Automatic Face and Gesture Recognition | 2011
Martin Breidt; Heinrich H. Bülthoff; C Curio
Rich face models already have a large impact on the fields of computer vision, perception research, as well as computer graphics and animation. Attributes such as descriptiveness, semantics, and intuitive control are desirable properties but hard to achieve. Towards the goal of building such high-quality face models, we present a 3D model-based analysis-by-synthesis approach that is able to parameterize 3D facial surfaces, and that can estimate the state of semantically meaningful components, even from noisy depth data such as that produced by Time-of-Flight (ToF) cameras or devices such as Microsoft Kinect. At the core, we present a specialized 3D morphable model (3DMM) for facial expression analysis and synthesis. In contrast to many other models, our model is derived from a large corpus of localized facial deformations that were recorded as 3D scans from multiple identities. This allows us to analyze unstructured dynamic 3D scan data using a modified Iterative Closest Point model fitting process, followed by a constrained Action Unit model regression, resulting in semantically meaningful facial deformation time courses. We demonstrate the generative capabilities of our 3DMMs for facial surface reconstruction on high and low quality surface data from a ToF camera. The analysis of simultaneous recordings of facial motion using passive stereo and noisy Time-of-Flight camera shows good agreement of the recovered facial semantics.
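The constrained Action Unit regression step can be illustrated as non-negative least squares: find activation weights w ≥ 0 such that B @ w approximates the observed per-vertex displacement d, where the columns of B are AU deformation bases. The sketch below uses a simple projected-gradient loop as a stand-in for a proper NNLS solver, with toy numbers throughout; the paper's pipeline additionally runs a modified ICP alignment before this step:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((30, 4))                  # 10 vertices * 3 coords, 4 AU bases
w_true = np.array([0.7, 0.0, 0.3, 0.0])  # ground-truth activations
d = B @ w_true                           # observed displacement vector

# Projected gradient descent for min ||B w - d||^2 subject to w >= 0.
w = np.zeros(4)
step = 1.0 / np.linalg.norm(B.T @ B, 2)  # 1/L step size for convergence
for _ in range(5000):
    w = np.clip(w - step * B.T @ (B @ w - d), 0.0, None)

print(np.round(w, 3))  # close to w_true, all entries non-negative
```

The non-negativity constraint is what makes the recovered time courses semantically meaningful: an Action Unit is either activated to some degree or not, never negatively.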
PLOS ONE | 2014
Stephan de la Rosa; Stephan Streuber; Martin A. Giese; Hh Bülthoff; C Curio
The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants' perceptual bias of a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3), and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.
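The after-effect in such paradigms is typically quantified as a shift of the point of subjective equality (PSE), the morph level at which responses cross 50%, between adaptor conditions. A sketch with made-up illustrative response proportions (not data from the paper):

```python
import numpy as np

morph = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # action A -> B morph level
# Hypothetical proportions of "B" responses after each adaptor:
p_adapt_A = np.array([0.10, 0.40, 0.75, 0.90, 0.98])   # adapted to action A
p_adapt_B = np.array([0.05, 0.10, 0.30, 0.80, 0.95])   # adapted to action B

def pse(x, p, level=0.5):
    """Morph level at the 50% crossing, by linear interpolation."""
    return float(np.interp(level, p, x))

# Adapting to A biases ambiguous tests toward "B" (repulsive after-effect),
# so the PSE is lower after adapt-A; the difference is the after-effect size.
aftereffect = pse(morph, p_adapt_B) - pse(morph, p_adapt_A)
print(round(aftereffect, 3))
```

The paper's key manipulation is measuring this shift for physically identical adaptors embedded in different social contexts.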
Workshop on Applications of Computer Vision | 2005
C Curio; Martin A. Giese
Many existing systems for human body tracking are based on dynamic model-based tracking that is driven by local image features. Alternatively, within a view-based approach, tracking of humans can be accomplished by the learning-based recognition of characteristic body postures which define the spatial positions of interesting points on the human body. Recognition of body postures can be based on simple image descriptors, like the moments of body silhouettes. We present a system that combines these two approaches within a common closed-loop architecture. Central characteristics of our system are: (1) Mapping of image features into a posture space with reduced dimensionality by learning one-to-many mappings from training data by a set of parallel SVM regressions. (2) Selection of the relevant regression hypotheses by a competitive particle filter that is defined over a low-dimensional hidden state space. (3) The recognized postures are used as priors to initialize and support classical model-based tracking using a flexible articulated 2D model that is driven by local image features using a vector field approach. We present pose tracking and reconstruction results based on a combination of view-based and model-based tracking. Increased robustness and improved generalization properties are achieved even for small amounts of training data.
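The one-to-many mapping in step (1) arises because a single silhouette descriptor can be consistent with several poses (e.g. left and right leg swapped); training one regressor per branch and keeping all outputs as hypotheses resolves this downstream. The sketch below uses kernel ridge regression as a lightweight stand-in for the paper's SVM regressions, with a 1D toy feature and two artificial branches:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=10.0):
    """Kernel ridge regression (feature -> pose), standing in for one
    of the paper's parallel SVM regressions."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# Ambiguous mapping: the same descriptor value corresponds to two poses.
x = np.linspace(0, 1, 20)[:, None]        # toy silhouette descriptor
branch_up, branch_down = x.ravel(), -x.ravel()
f_up = fit_krr(x, branch_up)
f_down = fit_krr(x, branch_down)

xq = np.array([[0.5]])
hypotheses = [f_up(xq)[0], f_down(xq)[0]]  # both kept for the tracker
print(hypotheses)
```

In the full system, the competitive particle filter of step (2) would then select among such hypotheses using temporal consistency in the low-dimensional posture space.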
International Conference on Image Processing | 2000
W. von Seelen; C Curio; J. Gayko; Uwe Handmann; Thomas Kalinke
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems has arisen in recent years. Fully or partly autonomously guided vehicles, particularly in road traffic, pose high demands on the development of reliable algorithms. Principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system, concentrating on video-based scene analysis and the organization of behavior.
Journal of Vision | 2013
S de la Rosa; Martin A. Giese; Hh Bülthoff; C Curio
Probing emotional facial expression recognition with the adaptation paradigm is one way to investigate the processes underlying emotional face recognition. Previous research suggests that these processes are tuned to dynamic facial information (facial movement). Here we examined the tuning of processes involved in the recognition of emotional facial expressions to different sources of facial movement information. Specifically, we investigated the effect of the availability of rigid head movement and intrinsic facial movements (e.g., movement of facial features) on the size of the emotional facial expression adaptation effect. Using a three-dimensional (3D) morphable model that allowed us to manipulate the availability of each of the two factors (intrinsic facial movement, head movement) individually, we examined emotional facial expression adaptation with happy and disgusted faces. Our results show that intrinsic facial movement is necessary for the emergence of an emotional facial expression adaptation effect with dynamic adaptors. The presence of rigid head motion modulates the emotional facial expression adaptation effect only in the presence of intrinsic facial motion. In a second experiment, we show that these adaptation effects are difficult to explain merely by the perceived intensity and clarity (uniqueness) of the adaptor expressions. Together, these results suggest that processes encoding facial expressions are differently tuned to different sources of facial movements.
Joint Pattern Recognition Symposium | 2009
Christian Walder; Martin Breidt; Hh Bülthoff; Bernhard Schölkopf; C Curio
We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40 Hz. The data are then represented by implicit surface and color functions, using a novel partition-of-unity-type method that efficiently combines local regressors via nearest-neighbor searches. Both functions act on the 4D space of 3D plus time and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, the mesh is automatically deformed to track the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects are presented.
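The partition-of-unity idea can be shown in one dimension: fit local regressors to nearby samples, then blend their predictions with weights that sum to one. The paper does this in 4D (3D plus time) with nearest-neighbor queries; the toy version below uses two fixed local linear fits and Gaussian blending weights, all values illustrative:

```python
import numpy as np

def local_fit(xs, ys):
    """Least-squares line through a local subset of the samples."""
    a, b = np.polyfit(xs, ys, 1)
    return lambda x: a * x + b

xs = np.linspace(0, 2, 21)
ys = xs ** 2                              # "surface" samples to approximate
left = local_fit(xs[xs <= 1.25], ys[xs <= 1.25])    # regressor near x ~ 0.6
right = local_fit(xs[xs >= 0.75], ys[xs >= 0.75])   # regressor near x ~ 1.4

def blend(x, centers=(0.6, 1.4), sigma=0.4):
    w = np.exp(-((x - np.array(centers)) ** 2) / (2 * sigma ** 2))
    w /= w.sum()                          # partition of unity: weights sum to 1
    return w[0] * left(x) + w[1] * right(x)

print(blend(1.0), 1.0 ** 2)               # blended estimate vs. true value
```

Because each regressor only needs nearby samples, the scheme scales to large 4D scan datasets via nearest-neighbor search, which is the efficiency point the abstract makes.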