Kevin Dale
Harvard University
Publications
Featured research published by Kevin Dale.
International Conference on Computer Graphics and Interactive Techniques | 2008
Toshiya Hachisuka; Wojciech Jarosz; Richard Peter Weistroffer; Kevin Dale; Greg Humphreys; Matthias Zwicker; Henrik Wann Jensen
We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image-based adaptive sampling techniques, as they operate on pixels, which are possibly noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation, and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high-quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with a 3- to 5-dimensional space, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise.
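The contrast-driven refinement of the first pass can be sketched in one dimension (a toy analogue for illustration only; `adaptive_samples` and the step integrand are hypothetical names, not the paper's implementation):

```python
import numpy as np

def adaptive_samples(f, lo, hi, budget, init=16):
    """Toy sketch of contrast-driven adaptive sampling in 1D.

    Start from a coarse uniform set, then repeatedly split the
    interval whose endpoint values differ the most (highest local
    contrast) until the sample budget is spent.
    """
    xs = list(np.linspace(lo, hi, init))
    ys = [f(x) for x in xs]
    while len(xs) < budget:
        # find the adjacent pair with the largest contrast
        contrasts = [abs(ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
        i = int(np.argmax(contrasts))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return np.array(xs), np.array(ys)

# A step-like integrand: samples should concentrate near the edge at x = 0.5
xs, ys = adaptive_samples(lambda x: float(x > 0.5), 0.0, 1.0, budget=64)
near_edge = int(np.sum(np.abs(xs - 0.5) < 0.1))
```

With a discontinuous integrand, nearly all of the refinement budget ends up around the edge, which is the behavior the two-pass algorithm relies on before the anisotropic reconstruction step.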
International Conference on Computer Graphics and Interactive Techniques | 2011
Kevin Dale; Kalyan Sunkavalli; Micah K. Johnson; Daniel Vlasic; Wojciech Matusik; Hanspeter Pfister
We present a method for replacing facial performances in video. Our approach accounts for differences in identity, visual appearance, speech, and timing between source and target videos. Unlike prior work, it does not require substantial manual operation or complex acquisition hardware, only single-camera video. We use a 3D multilinear model to track the facial performance in both videos. Using the corresponding 3D geometry, we warp the source to the target face and retime the source to match the target performance. We then compute an optimal seam through the video volume that maintains temporal consistency in the final composite. We showcase the use of our method on a variety of examples and present the result of a user study that suggests our results are difficult to distinguish from real video footage.
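The optimal-seam step can be illustrated with a simplified 2D analogue (a dynamic-programming seam over a cost image; the paper computes a seam through the full video volume, and `min_cost_seam` is a hypothetical name):

```python
import numpy as np

def min_cost_seam(cost):
    """Find the vertical path of least total cost through a 2D cost
    image, one column index per row, with horizontal moves limited
    to +/-1 per step (a simplified stand-in for a video-volume seam)."""
    h, w = cost.shape
    acc = cost.astype(float).copy()   # accumulated cost table
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            prev = int(np.argmin(acc[r - 1, lo:hi])) + lo
            back[r, c] = prev
            acc[r, c] += acc[r - 1, prev]
    # backtrack from the cheapest endpoint in the last row
    seam = [int(np.argmin(acc[-1]))]
    for r in range(h - 1, 0, -1):
        seam.append(int(back[r, seam[-1]]))
    return seam[::-1]

cost = np.ones((4, 5))
cost[:, 2] = 0.0           # a zero-cost corridor down column 2
seam = min_cost_seam(cost)
```

In the compositing setting, low cost would correspond to pixels where source and target agree, so the seam hides the transition between the two faces.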
International Conference on Computer Vision | 2009
Kevin Dale; Micah K. Johnson; Kalyan Sunkavalli; Wojciech Matusik; Hanspeter Pfister
We present an image restoration method that leverages a large database of images gathered from the web. Given an input image, we execute an efficient visual search to find the closest images in the database; these images define the input's visual context. We use the visual context as an image-specific prior and show its value in a variety of image restoration operations, including white balance correction, exposure correction, and contrast enhancement. We evaluate our approach using a database of 1 million images downloaded from Flickr and demonstrate the effect of database size on performance. Our results show that priors based on the visual context consistently outperform generic or even domain-specific priors for these operations.
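The idea of a visual-context prior can be sketched for white balance (a hedged toy version: mean color stands in for the paper's search descriptors, and `context_white_balance` is an illustrative name, not the published method):

```python
import numpy as np

def context_white_balance(img, database, k=3):
    """Retrieve the k database images with the closest mean color
    (a stand-in for a real descriptor search), then scale the input's
    channel means toward the average channel means of those neighbors."""
    means = np.array([d.mean(axis=(0, 1)) for d in database])
    query = img.mean(axis=(0, 1))
    nearest = np.argsort(np.linalg.norm(means - query, axis=1))[:k]
    target = means[nearest].mean(axis=0)
    gain = target / np.maximum(query, 1e-6)
    return np.clip(img * gain, 0.0, 1.0)

# Neutral context images pull a blue-tinted input back toward gray
database = [np.full((4, 4, 3), 0.5) for _ in range(5)]
tinted = np.ones((4, 4, 3)) * np.array([0.3, 0.5, 0.7])
balanced = context_white_balance(tinted, database)
```

The point of the image-specific prior is that the correction target comes from similar images rather than from a fixed gray-world assumption.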
IEEE Transactions on Visualization and Computer Graphics | 2011
Micah K. Johnson; Kevin Dale; Shai Avidan; Hanspeter Pfister; William T. Freeman; Wojciech Matusik
Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.
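The per-region color transfer can be sketched with a simple mean/variance matching step (a Reinhard-style stand-in for the paper's transfer between cosegmented regions; `transfer_color` is an illustrative name):

```python
import numpy as np

def transfer_color(cg_region, real_region):
    """Shift and scale the CG region's per-channel statistics to
    match those of the corresponding real region."""
    cg_mu = cg_region.mean(axis=(0, 1))
    cg_sd = cg_region.std(axis=(0, 1))
    re_mu = real_region.mean(axis=(0, 1))
    re_sd = real_region.std(axis=(0, 1))
    return (cg_region - cg_mu) * (re_sd / np.maximum(cg_sd, 1e-6)) + re_mu

rng = np.random.default_rng(0)
cg = rng.random((8, 8, 3))                 # stand-in CG region
real = rng.random((8, 8, 3)) * 0.5 + 0.25  # stand-in matched real region
out = transfer_color(cg, real)
```

After the transfer, the CG region carries the color statistics of the matched photograph, which is what pushes the hybrid image toward the look of a real one.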
Computer Vision and Pattern Recognition | 2012
Kevin Dale; Eli Shechtman; Shai Avidan; Hanspeter Pfister
We propose a method for browsing multiple videos with a common theme, such as the results of a search query on a video sharing website, or videos of an event covered by multiple cameras. Given the collection of videos, we first align each video with all the others. This pairwise video alignment forms the basis of a novel browsing interface, termed the Browsing Companion. It plays a primary video and, alongside it, other temporally synchronized video clips as thumbnails. The user can, at any time, click on one of the thumbnails to make it the primary video. We also show that video alignment can be used for other applications, such as automatic highlight detection and multi-video summarization.
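Temporal alignment of a video pair can be sketched as a shift search over per-frame descriptors (a toy 1D version under the assumption that each frame is summarized by a single scalar descriptor; `temporal_offset` is an illustrative name):

```python
import numpy as np

def temporal_offset(desc_a, desc_b, max_shift):
    """Slide one sequence of per-frame descriptors over the other and
    return the integer shift minimizing the mean squared difference
    over the overlapping frames."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = desc_a[s:], desc_b[: len(desc_a) - s]
        else:
            a, b = desc_a[: len(desc_a) + s], desc_b[-s:]
        n = min(len(a), len(b))
        err = np.mean((a[:n] - b[:n]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

# A second clip starting 3 frames into the first should align at shift 3
sig = np.sin(np.linspace(0, 10, 50))
shift = temporal_offset(sig, sig[3:], max_shift=5)
```

The recovered offsets are what let the thumbnails play in sync with whichever clip is currently primary.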
arXiv: Instrumentation and Methods for Astrophysics | 2009
S. M. Ord; Hanspeter Pfister; L. J. Greenhill; R.G. Edgar; Kevin Dale; R. B. Wayth; D. A. Mitchell
Archive | 2009
S. M. Ord; L. J. Greenhill; R. B. Wayth; Daniel A. J. Mitchell; Kevin Dale; Hanspeter Pfister; R. G. Edgar
Archive | 2007
R. B. Wayth; Kevin Dale; L. J. Greenhill; Daniel A. J. Mitchell; S. M. Ord; Hanspeter Pfister
Archive | 2012
Greg Humphreys; David Luebke; Kevin Dale; Ewen Cheslack-Postava