
Publications


Featured research published by Horst W. Haussecker.


Computer Vision and Pattern Recognition | 2007

Detailed Human Shape and Pose from Images

Alexandru O. Balan; Leonid Sigal; Michael J. Black; James Davis; Horst W. Haussecker

Much of the research on video-based human motion capture assumes the body shape is known a priori and is represented coarsely (e.g. using cylinders or superquadrics to model limbs). These body models stand in sharp contrast to the richly detailed 3D body models used by the graphics community. Here we propose a method for recovering such models directly from images. Specifically, we represent the body using a recently proposed triangulated mesh model called SCAPE which employs a low-dimensional, but detailed, parametric model of shape and pose-dependent deformations that is learned from a database of range scans of human bodies. Previous work showed that the parameters of the SCAPE model could be estimated from marker-based motion capture data. Here we go further to estimate the parameters directly from image data. We define a cost function between image observations and a hypothesized mesh and formulate the problem as optimization over the body shape and pose parameters using stochastic search. Our results show that such rich generative models enable the automatic recovery of detailed human shape and pose from images.
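
The optimization at the heart of the method is a stochastic search over SCAPE shape and pose parameters that minimizes a cost between image observations and a hypothesized mesh. The Python sketch below shows a generic random-perturbation search of that flavor; the quadratic cost and all parameter values are illustrative stand-ins for the paper's image-based silhouette cost, not the actual implementation.

```python
import numpy as np

def stochastic_search(cost_fn, theta0, n_iters=2000, sigma=0.05, seed=0):
    """Random-perturbation search: propose Gaussian perturbations of the
    current parameter vector and keep any proposal that lowers the cost.
    A stand-in for the paper's stochastic optimization over SCAPE shape
    and pose parameters, whose real cost compares observed silhouettes
    with a rendered mesh."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    best_cost = cost_fn(theta)
    for _ in range(n_iters):
        proposal = theta + rng.normal(0.0, sigma, size=theta.shape)
        c = cost_fn(proposal)
        if c < best_cost:
            theta, best_cost = proposal, c
    return theta, best_cost

# Toy stand-in cost: squared distance from a "true" configuration (a real
# cost would render the SCAPE mesh and compare it against image evidence).
true_params = np.array([0.3, -0.1, 0.8, 0.0])
cost = lambda th: float(np.sum((th - true_params) ** 2))

estimate, final_cost = stochastic_search(cost, np.zeros(4))
print(estimate, final_cost)
```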


International Journal of Computer Vision | 2012

Loose-limbed People: Estimating 3D Human Pose and Motion Using Non-parametric Belief Propagation

Leonid Sigal; Michael Isard; Horst W. Haussecker; Michael J. Black

We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely connected body parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pairwise statistical distributions that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present a quantitative evaluation using the HumanEva dataset.
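
Non-parametric belief propagation represents each node's belief with weighted particles and passes messages between connected parts as kernel-density mixtures. The sketch below illustrates that basic reweighting step on a toy two-part model; the scalar state, Gaussian potentials, and observation values are illustrative assumptions, far simpler than the paper's actual body model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two scalar "body parts" joined by a kinematic potential preferring
# x_b ~ x_a + 1. Each node's belief is a set of weighted particles.
def pairwise(xa, xb, offset=1.0, sigma=0.2):
    return np.exp(-0.5 * ((xb - xa - offset) / sigma) ** 2)

def likelihood(x, obs, sigma=0.3):
    return np.exp(-0.5 * ((x - obs) / sigma) ** 2)

particles_a = rng.normal(0.0, 1.0, n)   # prior samples for part A
particles_b = rng.normal(0.0, 1.0, n)   # prior samples for part B

w_a = likelihood(particles_a, obs=0.1)
w_a /= w_a.sum()

# Message from A to B: for each B particle, average the pairwise potential
# over A's weighted particles (a kernel-density style approximation).
msg_ab = np.array([(w_a * pairwise(particles_a, xb)).sum() for xb in particles_b])

w_b = likelihood(particles_b, obs=1.2) * msg_ab
w_b /= w_b.sum()

print("posterior mean of part B:", (w_b * particles_b).sum())
```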


Conference on Multimedia Modeling | 2010

An augmented reality tourist guide on your mobile devices

Maha El Choubassi; Oscar Nestares; Yi Wu; Igor Kozintsev; Horst W. Haussecker

We present an augmented reality tourist guide on mobile devices. Many of the latest mobile devices contain cameras as well as location, orientation, and motion sensors. We demonstrate how these devices can be used to bring tourism information to users in a much more immersive manner than traditional text or maps. Our system uses a combination of camera, location, and orientation sensors to augment the live camera view on a device with the available information about the objects in the view. The augmenting information is obtained by matching a camera image to images in a database on a server that have geotags in the vicinity of the user's location. We use a subset of geotagged English Wikipedia pages as the main source of images and augmenting text information. At the time of publication our database contained 50K pages with more than 150K images linked to them. A combination of motion estimation algorithms and orientation sensors is used to track objects of interest in the live camera view and place augmented information on top of them.
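
The retrieval pipeline first shortlists database images whose geotags fall near the user's location, and only then matches the camera image against that shortlist. The sketch below shows just the geotag-filtering stage with a hypothetical in-memory database; the actual system's server-side indexing and image matching are not reproduced here.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two geotagged points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical database records: (page_title, lat, lon).
database = [
    ("Brandenburg Gate", 52.5163, 13.3777),
    ("Reichstag", 52.5186, 13.3762),
    ("Eiffel Tower", 48.8584, 2.2945),
]

def candidates_near(lat, lon, radius_km=1.0):
    """Shortlist geotagged pages near the user; image matching against the
    shortlisted pages' photos would run as a second stage."""
    return [rec for rec in database
            if haversine_km(lat, lon, rec[1], rec[2]) <= radius_km]

print(candidates_near(52.5170, 13.3780))
```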


IEEE International Symposium on Workload Characterization | 2009

Performance characterization and optimization of mobile augmented reality on handheld platforms

Sadagopan Srinivasan; Zhen Fang; Ravi R. Iyer; Steven Zhang; Mike Espig; Don Newell; Daniel M. Cermak; Yi Wu; Igor Kozintsev; Horst W. Haussecker

The introduction of low-power general-purpose processors (like the Intel® Atom™ processor) expands the capability of handheld and mobile internet devices (MIDs) to include compelling visual computing applications. One rapidly emerging visual computing usage model is known as mobile augmented reality (MAR). In the MAR usage model, the user is able to point the handheld camera at an object (like a wine bottle) or a set of objects (like an outdoor scene of buildings or monuments), and the device automatically recognizes and displays information regarding the object(s). Achieving this on the handheld requires significant compute processing, resulting in a response time on the order of several seconds. In this paper, we analyze a MAR workload and identify the primary hotspot functions that incur a large fraction of the overall response time. We also present a detailed architectural characterization of the hotspot functions in terms of CPI, MPI, etc. We then implement and analyze the benefits of several software optimizations: (a) vectorization, (b) multi-threading, (c) cache conflict avoidance, and (d) miscellaneous code optimizations that reduce the number of computations. We show that a 3X performance improvement in execution time can be achieved by implementing these optimizations. Overall, we believe our analysis provides a detailed understanding of the processing for a new domain of visual computing workloads (i.e., MAR) running on low-power handheld compute platforms.
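
Of the optimizations studied, vectorization is the easiest to illustrate in isolation: rewriting a per-pixel loop as whole-array operations. The toy benchmark below uses NumPy on a hypothetical gradient-magnitude hotspot to demonstrate the idea; it is not the paper's MAR workload, which was characterized and optimized in native code on Atom hardware.

```python
import time
import numpy as np

# Toy stand-in for a per-pixel hotspot: gradient-magnitude computation.
img = np.random.default_rng(0).random((480, 640)).astype(np.float32)

def grad_mag_loop(a):
    """Scalar per-pixel loop, analogous to unoptimized hotspot code."""
    out = np.zeros_like(a)
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            gx = a[i, j + 1] - a[i, j - 1]
            gy = a[i + 1, j] - a[i - 1, j]
            out[i, j] = (gx * gx + gy * gy) ** 0.5
    return out

def grad_mag_vec(a):
    """The same computation as whole-array (vectorized) operations."""
    out = np.zeros_like(a)
    gx = a[1:-1, 2:] - a[1:-1, :-2]
    gy = a[2:, 1:-1] - a[:-2, 1:-1]
    out[1:-1, 1:-1] = np.sqrt(gx * gx + gy * gy)
    return out

t0 = time.perf_counter(); r1 = grad_mag_loop(img)
t1 = time.perf_counter(); r2 = grad_mag_vec(img)
t2 = time.perf_counter()
assert np.allclose(r1, r2, atol=1e-5)
print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.4f}s")
```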


International Symposium on Mixed and Augmented Reality | 2010

Video stabilization to a global 3D frame of reference by fusing orientation sensor and image alignment data

Oscar Nestares; Yoram Gat; Horst W. Haussecker; Igor Kozintsev

Estimating the 3D orientation of the camera in a video sequence within a global frame of reference is useful for video stabilization when displaying the video in a virtual 3D environment, as well as for accurate navigation and other applications. This task requires input from orientation sensors attached to the camera to provide absolute 3D orientation in a geographical frame of reference. However, high-frequency noise in the sensor readings makes it impossible to achieve the accurate orientation estimates required for visually stable presentation of video sequences acquired with a camera subject to jitter, such as a handheld or vehicle-mounted camera. On the other hand, image alignment has proven successful for image stabilization, providing accurate frame-to-frame orientation estimates but drifting over time due to error and bias accumulation and lacking absolute orientation. In this paper we propose a practical method for generating high-accuracy estimates of the 3D orientation of the camera within a global frame of reference by fusing orientation estimates from an efficient image-based alignment method with the estimates from an orientation sensor, overcoming the limitations of the component methods.
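
One simple way to realize this kind of fusion is a complementary filter: propagate with the smooth frame-to-frame increments from image alignment at high frequencies while pulling toward the drift-free but noisy absolute sensor at low frequencies. The sketch below applies that idea to a synthetic 1D yaw signal; the filter constant, noise levels, and drift bias are illustrative assumptions, and the paper's actual fusion method may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
true_yaw = np.cumsum(rng.normal(0.0, 0.01, n))    # ground-truth camera yaw

# Orientation sensor: absolute reference, but high-frequency noise.
sensor = true_yaw + rng.normal(0.0, 0.05, n)

# Image alignment: accurate frame-to-frame increments with a small bias,
# so direct integration drifts over time.
relative = np.diff(true_yaw, prepend=true_yaw[0]) + rng.normal(0.0005, 0.001, n)
aligned = np.cumsum(relative)

# Complementary filter: image-based increments carry the high frequencies,
# the absolute sensor anchors the low frequencies.
alpha = 0.98
fused = np.empty(n)
fused[0] = sensor[0]
for t in range(1, n):
    fused[t] = alpha * (fused[t - 1] + relative[t]) + (1 - alpha) * sensor[t]

for name, est in (("sensor", sensor), ("alignment", aligned), ("fused", fused)):
    print(f"{name:10s} RMSE: {np.sqrt(np.mean((est - true_yaw) ** 2)):.4f}")
```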


International Conference on Multimedia and Expo | 2009

Wikireality: Augmenting reality with community driven websites

Douglas Gray; Igor Kozintsev; Yi Wu; Horst W. Haussecker

We present a system for making community-driven websites easily accessible from the latest mobile devices. Many of these new devices contain an ensemble of sensors such as cameras, GPS, and inertial sensors. We demonstrate how these sensors can be used to bring the information contained in sites like Wikipedia to users in a much more immersive manner than text or maps. We have collected a large database of images and articles from Wikipedia and show how a user can query this database by simply snapping a photo. Our system uses the location sensors to assist with image matching and the inertial sensors to provide a unique and intuitive user interface for browsing results.


24th Annual BACUS Symposium on Photomask Technology | 2004

Image-based metrology software for analysis of features on masks and wafers

Saghir Munir; Daniel J. Bald; Vikram Tolani; Horst W. Haussecker

Tebaldi is a software tool developed at Intel Mask Operation (IMO) for quantitatively analyzing patterns in 2D. Its initial scope was to analyze aerial images taken with a microscope. However, the software has recently been enhanced to support aerial images obtained through simulation, as well as bitmap, JPEG, and TIFF files saved from mask inspection systems and the scanning electron microscope (SEM). This article primarily focuses on the SEM module of the software. Tebaldi supports simulated aerial images generated through IMO's simulation-based defect disposition system, allowing engineers to directly correlate 2D structures in an experimental aerial image with those in a simulated image. To analyze SEM images, the software features scaling, alignment, and calibration functions. Several linear and non-linear filtering techniques are available to reduce noise and charging artifacts, and custom convolution kernels can be user-defined. The software can also segment features and extract contours. Further, these contours can be overlaid, and the shortest distances between corresponding points can be computed in a user-friendly manner with a high degree of confidence. Tebaldi is currently used in production to disposition defects in repaired sites on masks shipped from IMO, as well as to compare SEM images to determine pattern fidelity across mask writers and processes within IMO.
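
Two of the listed operations, extracting a contour from a segmented feature and computing shortest distances between two contours, are easy to sketch. The NumPy example below uses a crude boundary definition and brute-force nearest distances on toy binary masks; Tebaldi's actual segmentation and correspondence logic is not public, so everything here is illustrative.

```python
import numpy as np

def boundary_points(mask):
    """Pixels of a binary mask that touch the background (a simple contour)."""
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(mask & ~interior)

# Two toy segmented features, e.g. a reference shape and a measured one.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:44, 24:46] = True

pa, pb = boundary_points(a), boundary_points(b)

# For every point on contour A, the distance to the nearest point on B;
# the maximum of these is a simple pattern-fidelity metric.
d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
nearest = d.min(axis=1)
print("mean gap:", nearest.mean(), "max gap:", nearest.max())
```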


Imaging and Applied Optics 2015, paper CM4E.1 | 2015

Title to be Announced

Horst W. Haussecker

Recent developments in imaging systems have created rich, immersive data sets that allow for new forms of media. We will explore new computational imaging capabilities in the area of multi-camera systems and interactive visual experiences.


Proceedings of SPIE | 2013

Efficient intensity-based camera pose estimation in presence of depth

Maha El Choubassi; Oscar Nestares; Yi Wu; Igor Kozintsev; Horst W. Haussecker

The widespread success of the Kinect enables users to acquire both image and depth information with satisfying accuracy at relatively low cost. We leverage the Kinect output to efficiently and accurately estimate the camera pose in the presence of rotation, translation, or both. The applications of our algorithm are vast, ranging from camera tracking to 3D point cloud registration and video stabilization. The state-of-the-art approach uses point correspondences for estimating the pose. More explicitly, it extracts point features from images, e.g., SURF or SIFT, builds their descriptors, and matches features from different images to obtain point correspondences. However, while feature-based approaches are widely used, they perform poorly in scenes lacking texture, due to scarcity of features, or in scenes with repetitive structure, due to false correspondences. Our algorithm is intensity-based and requires neither point feature extraction nor descriptor generation and matching. In the absence of depth, an intensity-based approach alone cannot handle camera translation. With the Kinect capturing both image and depth frames, we extend the intensity-based algorithm to estimate the camera pose in the case of both 3D rotation and translation. The results are quite promising.
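
With known depth, an intensity-based cost can be written directly: back-project reference pixels to 3D using the depth map, transform by a candidate pose, project into the second frame, and penalize intensity differences. The sketch below evaluates such a photometric cost for one candidate pose with nearest-neighbor sampling; the intrinsics and synthetic data are assumptions, and the paper's optimization over poses is not shown.

```python
import numpy as np

def photometric_cost(img_ref, depth_ref, img_tgt, K, R, t):
    """Back-project reference pixels with their depths, apply the candidate
    pose (R, t), project into the target frame, and return the mean squared
    intensity difference over pixels that land inside the target image."""
    h, w = img_ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                             # unit-depth rays
    pts = rays * depth_ref.ravel()                            # 3D points, ref frame
    proj = K @ (R @ pts + t[:, None])                         # into target camera
    u = proj[0] / proj[2]
    v = proj[1] / proj[2]
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h) & (proj[2] > 0)
    diff = img_tgt[vi[ok], ui[ok]] - img_ref.ravel()[ok]
    return np.mean(diff ** 2)

# Synthetic check: identical frames and the identity pose give zero cost.
K = np.array([[500., 0., 160.], [0., 500., 120.], [0., 0., 1.]])
img = np.random.default_rng(3).random((240, 320))
depth = np.full((240, 320), 2.0)
print(photometric_cost(img, depth, img, K, np.eye(3), np.zeros(3)))
```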


Archive | 2004

Computing a higher resolution image from multiple lower resolution images using model-based, robust Bayesian estimation

Oscar Nestares; Horst W. Haussecker; Scott M. Ettinger
