
Publications


Featured research published by Todd E. Zickler.


computer vision and pattern recognition | 2008

Autotagging Facebook: Social network context improves photo annotation

Zak Stone; Todd E. Zickler; Trevor Darrell

Most personal photos that are shared online are embedded in some form of social network, and these social networks are a potent source of contextual information that can be leveraged for automatic image understanding. In this paper, we investigate the utility of social network context for the task of automatic face recognition in personal photographs. We combine face recognition scores with social context in a conditional random field (CRF) model and apply this model to label faces in photos from the popular online social network Facebook, which is now the top photo-sharing site on the Web with billions of photos in total. We demonstrate that our simple method of enhancing face recognition with social network context substantially increases recognition performance beyond that of a baseline face recognition system.
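A toy sketch of the general idea (not the paper's CRF or its features): combine per-face recognition scores with a pairwise co-occurrence prior drawn from the social network, and pick the joint labeling that maximizes the total score. All identities and numbers below are invented for illustration.

```python
import itertools
import numpy as np

# Toy sketch: label two detected faces jointly using per-face recognition
# scores (unary terms) plus a pairwise "friendship" prior from the social
# network. Everything here is made up for illustration.

identities = ["alice", "bob", "carol"]

# Unary terms: face-recognition log-scores for each detected face.
unary = {
    "face_0": {"alice": -0.2, "bob": -1.5, "carol": -1.1},
    "face_1": {"alice": -1.3, "bob": -0.4, "carol": -1.0},
}

# Pairwise term: log-prior that two identities co-occur in one photo,
# hypothetically derived from co-tagging frequencies in the social network.
co_occurrence = {
    ("alice", "bob"): -0.1,
    ("alice", "carol"): -2.0,
    ("bob", "carol"): -1.5,
}

def pair_score(a, b):
    if a == b:
        return -np.inf  # the same person cannot appear twice in one photo
    return co_occurrence.get((a, b), co_occurrence.get((b, a), -3.0))

def map_labeling(unary, identities):
    """Exhaustive MAP inference over the tiny label space."""
    faces = sorted(unary)
    best, best_score = None, -np.inf
    for labels in itertools.product(identities, repeat=len(faces)):
        score = sum(unary[f][l] for f, l in zip(faces, labels))
        for (_, a), (_, b) in itertools.combinations(enumerate(labels), 2):
            score += pair_score(a, b)
        if score > best_score:
            best, best_score = labels, score
    return dict(zip(faces, best)), best_score

print(map_labeling(unary, identities))
```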


computer vision and pattern recognition | 2005

Beyond Lambert: reconstructing specular surfaces using color

Satya P. Mallick; Todd E. Zickler; David J. Kriegman; Peter N. Belhumeur

We present a photometric stereo method for non-diffuse materials that does not require an explicit reflectance model or reference object. By computing a data-dependent rotation of RGB color space, we show that the specular reflection effects can be separated from the much simpler, diffuse (approximately Lambertian) reflection effects for surfaces that can be modeled with dichromatic reflectance. Images in this transformed color space are used to obtain photometric reconstructions that are independent of the specular reflectance. In contrast to other methods for highlight removal based on dichromatic color separation (e.g., color histogram analysis and/or polarization), we do not explicitly recover the specular and diffuse components of an image. Instead, we simply find a transformation of color space that yields more direct access to shape information. The method is purely local and is able to handle surfaces with arbitrary texture.
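A minimal sketch of a data-dependent color-space rotation in the spirit described above, assuming the illuminant (source) color is known: one axis is aligned with the source color, so under the dichromatic model the remaining two channels are free of specular effects.

```python
import numpy as np

# Minimal sketch: rotate RGB so one axis aligns with the (assumed known)
# source color. Under the dichromatic model, the specular lobe lives entirely
# along that axis, so the other two channels depend only on diffuse shading.

def source_aligned_rotation(source_rgb):
    """Rotation matrix whose first row is the unit source color."""
    s = np.asarray(source_rgb, dtype=float)
    s = s / np.linalg.norm(s)
    # Build an orthonormal basis {s, u, v} by Gram-Schmidt on a coordinate axis.
    a = np.eye(3)[np.argmin(np.abs(s))]        # axis least aligned with s
    u = a - np.dot(a, s) * s
    u /= np.linalg.norm(u)
    v = np.cross(s, u)
    return np.vstack([s, u, v])

def specular_invariant(image, source_rgb):
    """Map an HxWx3 RGB image to two channels free of specular effects."""
    R = source_aligned_rotation(source_rgb)
    suv = image @ R.T          # per-pixel rotation of color vectors
    return suv[..., 1:]        # drop the axis that carries the specular lobe

# Tiny usage example with a synthetic 2x2 image and a warm light source.
img = np.random.rand(2, 2, 3)
print(specular_invariant(img, source_rgb=(1.0, 0.9, 0.7)).shape)  # (2, 2, 2)
```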


computer vision and pattern recognition | 2011

Statistics of real-world hyperspectral images

Ayan Chakrabarti; Todd E. Zickler

Hyperspectral images provide higher spectral resolution than typical RGB images by including per-pixel irradiance measurements in a number of narrow bands of wavelength in the visible spectrum. The additional spectral resolution may be useful for many visual tasks, including segmentation, recognition, and relighting. Vision systems that seek to capture and exploit hyperspectral data should benefit from statistical models of natural hyperspectral images, but at present, relatively little is known about their structure. Using a new collection of fifty hyperspectral images of indoor and outdoor scenes, we derive an optimized “spatio-spectral basis” for representing hyperspectral image patches, and explore statistical models for the coefficients in this basis.
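A rough sketch of one step mentioned above: deriving a spatio-spectral basis by PCA on small hyperspectral patches. The random cube, patch size, and band count are stand-ins, not values from the paper.

```python
import numpy as np

# Sketch: collect small spatio-spectral patches from a hyperspectral cube and
# compute a PCA basis for them. The data below is random, standing in for a
# real hyperspectral image.

def extract_patches(cube, size=4):
    """Collect all size x size x bands patches from an H x W x B cube."""
    H, W, B = cube.shape
    patches = [
        cube[i:i + size, j:j + size, :].ravel()
        for i in range(H - size + 1)
        for j in range(W - size + 1)
    ]
    return np.array(patches)

def spatio_spectral_basis(cube, size=4, n_components=16):
    """PCA basis (principal directions) of spatio-spectral patches."""
    X = extract_patches(cube, size)
    X = X - X.mean(axis=0, keepdims=True)
    # SVD of the centered patch matrix; rows of Vt are the basis vectors.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components]

cube = np.random.rand(32, 32, 31)     # toy stand-in for a hyperspectral image
basis = spatio_spectral_basis(cube)
print(basis.shape)                    # (16, 4 * 4 * 31)
```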


computer vision and pattern recognition | 2010

Analyzing spatially-varying blur

Ayan Chakrabarti; Todd E. Zickler; William T. Freeman

Blur is caused by a pixel receiving light from multiple scene points, and in many cases, such as object motion, the induced blur varies spatially across the image plane. However, the seemingly straightforward task of estimating spatially-varying blur from a single image has proved hard to accomplish reliably. This work considers such blur and makes two contributions: a local blur cue that measures the likelihood of a small neighborhood being blurred by a candidate blur kernel; and an algorithm that, given an image, simultaneously selects a motion blur kernel and segments the region that it affects. The methods are shown to perform well on a diversity of images.
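A much-simplified stand-in for the local blur cue (a re-blur heuristic, not the paper's likelihood model): a window that is already blurred by a kernel changes little when blurred with it again, while a sharp window changes a lot. Kernel sizes and the threshold below are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import convolve

# Toy heuristic: score each candidate motion kernel on a window by how little
# the window changes when blurred with it again, and fall back to "sharp"
# when every kernel changes the window a lot.

CANDIDATES = {
    "horizontal": np.ones((1, 7)) / 7,
    "vertical": np.ones((7, 1)) / 7,
}

def local_blur_labels(image, window=16, sharp_threshold=0.01):
    """Assign a candidate kernel (or 'sharp') to each window of the image."""
    labels = {}
    for i in range(0, image.shape[0] - window + 1, window):
        for j in range(0, image.shape[1] - window + 1, window):
            patch = image[i:i + window, j:j + window]
            change = {name: np.mean((convolve(patch, k) - patch) ** 2)
                      for name, k in CANDIDATES.items()}
            best = min(change, key=change.get)
            labels[(i, j)] = best if change[best] < sharp_threshold else "sharp"
    return labels

# Usage: a sharp random texture next to a horizontally blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = convolve(sharp, CANDIDATES["horizontal"])
print(local_blur_labels(np.hstack([sharp, blurred])))
```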


computer vision and pattern recognition | 2008

Photometric stereo with non-parametric and spatially-varying reflectance

Neil Gordon Alldrin; Todd E. Zickler; David J. Kriegman

We present a method for simultaneously recovering shape and spatially varying reflectance of a surface from photometric stereo images. The distinguishing feature of our approach is its generality; it does not rely on a specific parametric reflectance model and is therefore purely “data-driven”. This is achieved by employing novel bi-variate approximations of isotropic reflectance functions. By combining this new approximation with recent developments in photometric stereo, we are able to simultaneously estimate an independent surface normal at each point, a global set of non-parametric “basis material” BRDFs, and per-point material weights. Our experimental results validate the approach and demonstrate the utility of bi-variate reflectance functions for general non-parametric appearance capture.
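For context, a sketch of the classical Lambertian photometric-stereo step that this work generalizes: per-pixel least squares recovers a normal and albedo from intensities observed under known directional lights. This is textbook background, not the bivariate-BRDF method of the paper.

```python
import numpy as np

# Classical Lambertian photometric stereo: solve L @ (albedo * n) = I per
# pixel in the least-squares sense, then split the result into albedo
# (magnitude) and unit normal (direction).

def lambertian_photometric_stereo(intensities, light_dirs):
    """
    intensities: (num_lights, num_pixels) observed brightness values
    light_dirs:  (num_lights, 3) unit lighting directions
    Returns per-pixel unit normals (num_pixels, 3) and albedos (num_pixels,).
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    g = g.T                                   # (num_pixels, 3)
    albedo = np.linalg.norm(g, axis=1)
    normals = g / np.maximum(albedo[:, None], 1e-12)
    return normals, albedo

# Synthetic check: one pixel with a known normal and albedo, three lights.
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[0, 0, 1.0], [0.5, 0, 0.866], [0, 0.5, 0.866]])
I = 0.7 * np.maximum(L @ n_true, 0)[:, None]   # albedo 0.7, no shadows here
normals, albedo = lambertian_photometric_stereo(I, L)
print(normals[0], albedo[0])                   # ~[0, 0.6, 0.8], ~0.7
```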


Medical Image Analysis | 2007

GPU Based Real-time Instrument Tracking with Three Dimensional Ultrasound

Paul M. Novotny; Jeffrey A. Stoll; Nikolay V. Vasilyev; Pedro J. del Nido; Pierre E. Dupont; Todd E. Zickler; Robert D. Howe

Real-time 3D ultrasound can enable new image-guided surgical procedures, but high data rates prohibit the use of traditional tracking techniques. We present a new method based on the modified Radon transform that identifies the axis of instrument shafts as bright patterns in planar projections. Instrument rotation and tip location are then determined using fiducial markers. These techniques are amenable to rapid execution on the current generation of personal-computer graphics processing units (GPUs). Our GPU implementation detected a surgical instrument in 31 ms, sufficient for real-time tracking at the 26 volumes per second rate of the ultrasound machine. A water tank experiment found instrument tip position errors of less than 0.2 mm, and an in vivo study tracked an instrument inside a beating porcine heart. The tracking results showed good correspondence to the actual movements of the instrument.
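A CPU toy illustrating the core idea, with the 3D-to-2D projection and GPU parts omitted: the instrument shaft appears as a bright line, and a Radon-style sweep over angles finds the orientation whose 1D projection is most sharply peaked.

```python
import numpy as np

# Radon-style orientation search: for each candidate angle, project bright
# pixels onto the direction perpendicular to that angle and look for a
# sharply peaked histogram (all shaft pixels falling into one bin).

def line_orientation(image, angles=np.linspace(0, np.pi, 180, endpoint=False)):
    """Return the angle (radians) whose 1D projection has the highest peak."""
    h, w = image.shape
    ys, xs = np.nonzero(image > 0)
    vals = image[ys, xs]
    best_angle, best_peak = 0.0, -np.inf
    for theta in angles:
        # Signed distance of each pixel from a line through the image center
        # with direction (cos theta, sin theta); bin and sum intensities.
        r = (xs - w / 2) * np.sin(theta) - (ys - h / 2) * np.cos(theta)
        hist, _ = np.histogram(r, bins=64, weights=vals)
        if hist.max() > best_peak:
            best_angle, best_peak = theta, hist.max()
    return best_angle

# Synthetic test: a bright diagonal line at 45 degrees in a dim noisy image.
img = 0.05 * np.random.rand(64, 64)
for t in range(64):
    img[t, t] = 1.0
print(np.degrees(line_orientation(img)))   # close to 45
```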


international conference on computer vision | 2001

Beyond Lambert: reconstructing surfaces with arbitrary BRDFs

Sebastian Magda; David J. Kriegman; Todd E. Zickler; Peter N. Belhumeur

We address an open and hitherto neglected problem in computer vision: how to reconstruct the geometry of objects with arbitrary and possibly anisotropic bidirectional reflectance distribution functions (BRDFs). Present reconstruction techniques, whether stereo vision, structure from motion, laser range finding, etc., make explicit or implicit assumptions about the BRDF. Here, we introduce two methods that were developed by re-examining the underlying image formation process; the methods make no assumptions about the object's shape, the presence or absence of shadowing, or the nature of the BRDF, which may vary over the surface. The first method takes advantage of Helmholtz reciprocity, while the second method exploits the fact that the radiance along a ray of light is constant. In particular, the first method uses stereo pairs of images in which point light sources are co-located at the centers of projection of the stereo cameras. The second method is based on double covering a scene's incident light field; the depths of surface points are estimated using a large collection of images in which the viewpoint remains fixed and a point light source illuminates the object. Results from our implementations lend empirical support to both techniques.
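A short numerical illustration of the Helmholtz-reciprocity constraint behind the first method, with arbitrary test geometry and an arbitrary (but reciprocal) BRDF: the constraint vector is orthogonal to the surface normal no matter what the BRDF is, which is what makes the reconstruction BRDF-independent.

```python
import numpy as np

# For a reciprocal camera/light pair, the vector
#   w = i_left * v_l / d_l**2 - i_right * v_r / d_r**2
# is orthogonal to the surface normal for ANY reciprocal BRDF.
# Geometry and BRDF below are arbitrary test values.

p = np.array([0.0, 0.0, 0.0])            # surface point
n = np.array([0.0, 0.0, 1.0])            # its surface normal
o_l = np.array([-1.0, 0.2, 2.0])         # left camera / light position
o_r = np.array([1.5, -0.3, 2.5])         # right camera / light position

v_l, v_r = o_l - p, o_r - p
d_l, d_r = np.linalg.norm(v_l), np.linalg.norm(v_r)
v_l, v_r = v_l / d_l, v_r / d_r

def brdf(w_in, w_out):
    """An arbitrary but reciprocal (symmetric) BRDF."""
    return 0.3 + 0.5 * np.dot(w_in, w_out) ** 4

# Intensity seen by the camera at o_r when the light is at o_l, and vice versa.
i_right = brdf(v_l, v_r) * np.dot(n, v_l) / d_l ** 2
i_left = brdf(v_r, v_l) * np.dot(n, v_r) / d_r ** 2

w = i_left * v_l / d_l ** 2 - i_right * v_r / d_r ** 2
print(np.dot(w, n))    # ~0 regardless of the BRDF chosen above
```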


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Color Constancy with Spatio-Spectral Statistics

Ayan Chakrabarti; Keigo Hirakawa; Todd E. Zickler

We introduce an efficient maximum likelihood approach for one part of the color constancy problem: removing from an image the color cast caused by the spectral distribution of the dominating scene illuminant. We do this by developing a statistical model for the spatial distribution of colors in white balanced images (i.e., those that have no color cast), and then using this model to infer illumination parameters as those being most likely under our model. The key observation is that by applying spatial band-pass filters to color images one unveils color distributions that are unimodal, symmetric, and well represented by a simple parametric form. Once these distributions are fit to training data, they enable efficient maximum likelihood estimation of the dominant illuminant in a new image, and they can be combined with statistical prior information about the illuminant in a very natural manner. Experimental evaluation on standard data sets suggests that the approach performs well.
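A much-simplified toy in the spirit of the pipeline above (band-pass filtering, a simple parametric model of the filtered responses, maximum-likelihood gains); the paper's spatio-spectral model is considerably richer than this.

```python
import numpy as np
from scipy.ndimage import laplace

# Toy pipeline: band-pass (Laplacian) filter each channel, model filtered
# responses of white-balanced images as zero-mean Gaussians with per-channel
# scales learned from "training" data, then read off the illuminant gains as
# ratios of standard deviations (the ML estimate under that model).

def filtered_scales(image):
    """Per-channel standard deviation of band-pass (Laplacian) responses."""
    return np.array([laplace(image[..., c]).std() for c in range(3)])

rng = np.random.default_rng(0)
white_balanced = rng.random((64, 64, 3))          # stand-in training image
canonical = filtered_scales(white_balanced)       # "training" statistics

illuminant = np.array([1.0, 0.8, 0.6])            # unknown color cast
observed = white_balanced * illuminant            # image with the cast

gains = filtered_scales(observed) / canonical     # ML gain per channel
corrected = observed / gains

print(np.round(gains, 3))                         # ~[1.0, 0.8, 0.6]
```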


international conference on computer graphics and interactive techniques | 2010

A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance

Michael Holroyd; Jason Lawrence; Todd E. Zickler

We present a novel optical setup and processing pipeline for measuring the 3D geometry and spatially-varying surface reflectance of physical objects. Central to our design is a digital camera and a high frequency spatially-modulated light source aligned to share a common focal point and optical axis. Pairs of such devices allow capturing a sequence of images from which precise measurements of geometry and reflectance can be recovered. Our approach is enabled by two technical contributions: a new active multiview stereo algorithm and an analysis of light descattering that has important implications for image-based reflectometry. We show that the geometry measured by our scanner is accurate to within 50 microns at a resolution of roughly 200 microns and that the reflectance agrees with reference data to within 5.5%. Additionally, we present an image relighting application and show renderings that agree very well with reference images at light and view positions far from those that were initially measured.
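The descattering analysis itself is not spelled out in the abstract; as background, here is the standard direct/global separation trick for high-frequency spatially-modulated illumination (per-pixel max/min over shifted half-on patterns), which is a related but distinct technique, not the paper's own analysis.

```python
import numpy as np

# Under shifted binary patterns that illuminate half the scene, the per-pixel
# maximum over captures is direct + global/2 and the minimum is global/2,
# so both components can be separated.

def separate_direct_global(images):
    """images: (num_patterns, H, W) captures under shifted half-on patterns."""
    lmax = images.max(axis=0)
    lmin = images.min(axis=0)
    return lmax - lmin, 2.0 * lmin     # direct, global

# Synthetic check: a scene with known direct/global components under two
# complementary checkerboard patterns.
rng = np.random.default_rng(0)
direct_true = rng.random((8, 8))
global_true = rng.random((8, 8))
patterns = np.stack([np.indices((8, 8)).sum(0) % 2 == s for s in (0, 1)])
captures = patterns * direct_true + 0.5 * global_true
d, g = separate_direct_global(captures)
print(np.allclose(d, direct_true), np.allclose(g, global_true))   # True True
```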


Proceedings of the IEEE | 2010

Toward Large-Scale Face Recognition Using Social Network Context

Zak Stone; Todd E. Zickler; Trevor Darrell

Personal photographs are being captured in digital form at an accelerating rate, and our computational tools for searching, browsing, and sharing these photos are struggling to keep pace. One promising approach is automatic face recognition, which would allow photos to be organized by the identities of the individuals they contain. However, achieving accurate recognition at the scale of the Web requires discriminating among hundreds of millions of individuals and would seem to be a daunting task. This paper argues that social network context may be the key for large-scale face recognition to succeed. Many personal photographs are shared on the Web through online social network sites, and we can leverage the resources and structure of such social networks to improve face recognition rates on the images shared. Drawing upon real photo collections from volunteers who are members of a popular online social network, we assess the availability of resources to improve face recognition and discuss techniques for applying these resources.

Collaboration


Dive into Todd E. Zickler's collaboration.

Top Co-Authors

Ohad Ben-Shahar

Ben-Gurion University of the Negev

Trevor Darrell

University of California
