Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kalyan Sunkavalli is active.

Publications


Featured research published by Kalyan Sunkavalli.


International Conference on Computer Graphics and Interactive Techniques | 2011

Video face replacement

Kevin Dale; Kalyan Sunkavalli; Micah K. Johnson; Daniel Vlasic; Wojciech Matusik; Hanspeter Pfister

We present a method for replacing facial performances in video. Our approach accounts for differences in identity, visual appearance, speech, and timing between source and target videos. Unlike prior work, it does not require substantial manual operation or complex acquisition hardware, only single-camera video. We use a 3D multilinear model to track the facial performance in both videos. Using the corresponding 3D geometry, we warp the source to the target face and retime the source to match the target performance. We then compute an optimal seam through the video volume that maintains temporal consistency in the final composite. We showcase the use of our method on a variety of examples and present the result of a user study that suggests our results are difficult to distinguish from real video footage.
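The seam computation can be sketched with a much simpler stand-in: instead of a graph cut through the full video volume, average the per-frame source/target difference over time and find a single vertical seam by dynamic programming, so the same cut is reused in every frame. The cost model and function below are illustrative, not the paper's implementation:

```python
import numpy as np

def temporal_seam(cost_volume):
    """Toy stand-in for the paper's seam: average the per-frame cost over
    time, then find one vertical seam by dynamic programming so the same
    cut is used in every frame (which trivially keeps it temporally
    consistent)."""
    cost = cost_volume.mean(axis=0)          # (H, W) time-averaged cost
    H, W = cost.shape
    acc = cost.copy()
    for y in range(1, H):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(acc[y - 1], np.minimum(left, right))
    seam = np.zeros(H, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, W)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

# Toy volume (frames x H x W): column 1 is cheap to cut in every frame.
vol = np.ones((4, 3, 5))
vol[:, :, 1] = 0.0
```

A real implementation would solve the seam jointly over the 3D volume; averaging first is only a cheap approximation.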


International Conference on Computer Graphics and Interactive Techniques | 2010

Multi-scale image harmonization

Kalyan Sunkavalli; Micah K. Johnson; Wojciech Matusik; Hanspeter Pfister

Traditional image compositing techniques, such as alpha matting and gradient domain compositing, are used to create composites that have plausible boundaries. But when applied to images taken from different sources or shot under different conditions, these techniques can produce unrealistic results. In this work, we present a framework that explicitly matches the visual appearance of images through a process we call image harmonization, before blending them. At the heart of this framework is a multi-scale technique that allows us to transfer the appearance of one image to another. We show that by carefully manipulating the scales of a pyramid decomposition of an image, we can match contrast, texture, noise, and blur, while avoiding image artifacts. The output composite can then be reconstructed from the modified pyramid coefficients while enforcing both alpha-based and seamless boundary constraints. We show how the proposed framework can be used to produce realistic composites with minimal user interaction in a number of different scenarios.
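The core idea, decomposing both images into pyramids, adjusting the scales of one to match the other, then rebuilding, can be illustrated with a minimal box-filter Laplacian pyramid. This toy version only matches per-level detail energy (standard deviation), a crude proxy for the paper's contrast/texture/noise matching:

```python
import numpy as np

def laplacian_pyramid(img, levels):
    """Simple box-filter Laplacian pyramid (a stand-in for the paper's
    pyramid decomposition); image sides must be divisible by 2**levels."""
    pyr = []
    cur = img.astype(float)
    for _ in range(levels):
        low = cur.reshape(cur.shape[0] // 2, 2,
                          cur.shape[1] // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        pyr.append(cur - up)             # band-pass detail at this scale
        cur = low
    pyr.append(cur)                      # residual low-pass
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + band
    return cur

def harmonize(source, target, levels=2):
    """Scale each detail band of `source` so its energy matches `target`,
    then rebuild -- a crude version of transferring contrast/texture/noise."""
    ps, pt = laplacian_pyramid(source, levels), laplacian_pyramid(target, levels)
    out = []
    for bs, bt in zip(ps[:-1], pt[:-1]):
        scale = bt.std() / (bs.std() + 1e-8)
        out.append(bs * scale)
    out.append(ps[-1])                   # keep source's low-frequency content
    return reconstruct(out)
```

The decomposition is exactly invertible, so a flat image passes through unchanged; only the detail statistics are modified.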


International Conference on Computer Graphics and Interactive Techniques | 2007

Factored time-lapse video

Kalyan Sunkavalli; Wojciech Matusik; Hanspeter Pfister; Szymon Rusinkiewicz

We describe a method for converting time-lapse photography captured with outdoor cameras into Factored Time-Lapse Video (FTLV): a video in which time appears to move faster (i.e., lapsing) and where data at each pixel has been factored into shadow, illumination, and reflectance components. The factorization allows a user to easily relight the scene, recover a portion of the scene geometry (normals), and to perform advanced image editing operations. Our method is easy to implement, robust, and provides a compact representation with good reconstruction characteristics. We show results using several publicly available time-lapse sequences.
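As a rough illustration of the factorization idea (ignoring the shadow component), a pixels-by-time intensity matrix that is well explained by a per-pixel reflectance vector times a per-frame illumination curve can be split with a rank-1 SVD, fixing the scale ambiguity by normalizing the illumination to unit mean. This is a toy stand-in, not FTLV itself:

```python
import numpy as np

def rank1_factor(I):
    """Crude stand-in for the factorization: split a (pixels x time)
    intensity matrix into a per-pixel reflectance-like vector and a
    per-frame illumination curve via a rank-1 SVD; the illumination is
    normalized to unit mean to resolve the scale ambiguity."""
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    refl = np.abs(U[:, 0] * s[0])       # per-pixel albedo-like factor
    illum = np.abs(Vt[0])               # per-frame illumination curve
    scale = illum.mean()
    return refl * scale, illum / scale

# Synthetic data: 3 pixels with fixed albedos under a shared brightness
# curve -- the factorization recovers both up to the fixed normalization.
albedo = np.array([0.2, 0.5, 0.9])
light = np.array([1.0, 2.0, 1.5, 0.5])
I = np.outer(albedo, light)
refl, illum = rank1_factor(I)
```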


Taxon | 2006

First steps toward an electronic field guide for plants

Gaurav Agarwal; Peter N. Belhumeur; Steven Feiner; David W. Jacobs; W. John Kress; Norman A. Bourg; Nandan Dixit; Haibin Ling; Dhruv Mahajan; Sameer Shirdhonkar; Kalyan Sunkavalli; Sean White

We describe an ongoing project to digitize information about plant specimens and make it available to botanists in the field. This first requires digital images and models, and then effective retrieval and mobile computing mechanisms for accessing this information. We have almost completed a digital archive of the collection of type specimens at the Smithsonian Institution Department of Botany. Using these and additional images, we have also constructed prototype electronic field guides for the flora of Plummers Island. Our guides use a novel computer vision algorithm to compute leaf similarity. This algorithm is integrated into image browsers that assist a user in navigating a large collection of images to identify the species of a new specimen. For example, our systems allow a user to photograph a leaf and use this image to retrieve a set of leaves with similar shapes. We measured the effectiveness of one of these systems with recognition experiments on a large dataset of images, and with user studies of the complete retrieval system. In addition, we describe future directions for acquiring models of more complex, 3D specimens, and for using new methods in wearable computing to interact with data in the 3D environment in which it is acquired.
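The retrieval step, photograph a leaf, compute a shape descriptor, return the most similar leaves, reduces to nearest-neighbor search over descriptors. The sketch below assumes precomputed fixed-length descriptors (the paper's actual leaf-shape similarity algorithm is more sophisticated); the names and toy data are hypothetical:

```python
import numpy as np

def retrieve(query_desc, database, k=3):
    """Hypothetical sketch of the retrieval step: rank leaves by Euclidean
    distance between shape descriptors and return the k nearest species."""
    names = list(database)
    descs = np.array([database[n] for n in names])
    d = np.linalg.norm(descs - np.asarray(query_desc), axis=1)
    order = np.argsort(d)[:k]
    return [names[i] for i in order]

# Toy database of made-up 2D descriptors.
db = {"maple": [0.0, 0.0], "oak": [1.0, 0.0], "fern": [5.0, 5.0]}
```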


International Conference on Computer Vision | 2009

Image restoration using online photo collections

Kevin Dale; Micah K. Johnson; Kalyan Sunkavalli; Wojciech Matusik; Hanspeter Pfister

We present an image restoration method that leverages a large database of images gathered from the web. Given an input image, we execute an efficient visual search to find the closest images in the database; these images define the input's visual context. We use the visual context as an image-specific prior and show its value in a variety of image restoration operations, including white balance correction, exposure correction, and contrast enhancement. We evaluate our approach using a database of 1 million images downloaded from Flickr and demonstrate the effect of database size on performance. Our results show that priors based on the visual context consistently outperform generic or even domain-specific priors for these operations.
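A minimal caricature of the visual-context prior: find the k database images closest to the input in some feature space (here, just mean color), and white-balance the input toward the context average rather than toward pure gray. Everything below is an illustrative simplification of the paper's method:

```python
import numpy as np

def context_white_balance(img, database, k=2):
    """Toy context prior: pick the k database images whose mean color is
    closest to the input's, then scale the input's channels so its mean
    color matches the context average (gray-world relative to the context
    rather than to neutral gray)."""
    means = np.array([d.reshape(-1, 3).mean(axis=0) for d in database])
    m = img.reshape(-1, 3).mean(axis=0)
    nearest = np.argsort(np.linalg.norm(means - m, axis=1))[:k]
    target = means[nearest].mean(axis=0)
    return img * (target / m)

# Two neutral context images; the input has a blue cast.
db = [np.full((4, 4, 3), 0.6), np.full((4, 4, 3), 0.4)]
img = np.full((4, 4, 3), 0.5)
img[..., 2] = 0.8
corrected = context_white_balance(img, db)
```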


Computer Vision and Pattern Recognition | 2008

What do color changes reveal about an outdoor scene?

Kalyan Sunkavalli; Fabiano Romeiro; Wojciech Matusik; Todd E. Zickler; Hanspeter Pfister

In an extended image sequence of an outdoor scene, one observes changes in color induced by variations in the spectral composition of daylight. This paper proposes a model for these temporal color changes and explores its use for the analysis of outdoor scenes from time-lapse video data. We show that the time-varying changes in direct sunlight and ambient skylight can be recovered with this model, and that an image sequence can be decomposed into two corresponding components. The decomposition provides access to both radiometric and geometric information about a scene, and we demonstrate how this can be exploited for a variety of visual tasks, including color-constancy, background subtraction, shadow detection, scene reconstruction, and camera geo-location.
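A crude numerical stand-in for the two-component decomposition: if the pixels-by-frames sequence is approximately rank 2, its two leading SVD terms give two additive layers, loosely analogous to the sun-driven and sky-driven parts of the signal (the paper's model is physically grounded rather than a plain SVD):

```python
import numpy as np

def two_component_decomposition(I):
    """Approximate a (pixels x frames) sequence by its two leading SVD
    components -- a toy analogue of splitting the signal into two
    additive illumination-driven layers."""
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    c1 = s[0] * np.outer(U[:, 0], Vt[0])
    c2 = s[1] * np.outer(U[:, 1], Vt[1])
    return c1, c2

# Synthetic rank-2 sequence: two spatial patterns, two temporal curves.
a = np.array([1.0, 0.0, 2.0]); t1 = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 3.0, 1.0]); t2 = np.array([4.0, 3.0, 2.0, 1.0])
I = np.outer(a, t1) + np.outer(b, t2)
c1, c2 = two_component_decomposition(I)
```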


International Conference on Computer Graphics and Interactive Techniques | 2013

Example-based video color grading

Nicolas Bonneel; Kalyan Sunkavalli; Sylvain Paris; Hanspeter Pfister

In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video makes it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions in the input video sequence to that of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, akin to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve and can subsequently be used to manually refine the color grade. We show how our method can successfully transfer color palettes between videos for a range of visual styles and a number of input video clips.
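The per-frame transform plus temporal regularization can be caricatured with a Reinhard-style mean/std color match followed by exponential smoothing of the gains and offsets over time; the smoothing is a much simpler stand-in for the paper's curvature-minimizing interpolation, and all parameters here are illustrative:

```python
import numpy as np

def grade(frames, model, smooth=0.8):
    """Toy color grading: per frame, fit a per-channel affine transform
    (gain/offset) matching the frame's mean and std to the model clip,
    then exponentially smooth the transform over time to damp flicker."""
    mmean = model.reshape(-1, 3).mean(axis=0)
    mstd = model.reshape(-1, 3).std(axis=0)
    out, gain, offset = [], None, None
    for f in frames:
        fm = f.reshape(-1, 3).mean(axis=0)
        fs = f.reshape(-1, 3).std(axis=0) + 1e-8
        g = mstd / fs
        o = mmean - fm * g
        if gain is None:
            gain, offset = g, o                      # first frame: no history
        else:
            gain = smooth * gain + (1 - smooth) * g
            offset = smooth * offset + (1 - smooth) * o
        out.append(f * gain + offset)
    return out

rng = np.random.default_rng(0)
frames = [rng.random((6, 6, 3)) for _ in range(3)]
model = rng.random((6, 6, 3)) * 0.5 + 0.2
graded = grade(frames, model)
```

By construction the first graded frame's channel means match the model clip exactly; later frames trade exact matching for temporal stability.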


ACM Transactions on Graphics | 2014

Automatic Scene Inference for 3D Object Compositing

Kevin Karsch; Kalyan Sunkavalli; Sunil Hadap; Nathan A. Carr; Hailin Jin; Rafael Fonte; Michael Sittig; David A. Forsyth

We present a user-friendly image editing system that supports drag-and-drop object insertion (where the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), postprocess illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.
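The depth-estimation ingredient builds on data-driven depth transfer; in its most stripped-down form, that means fetching the depth maps of the nearest database images by feature distance and averaging them into a prior. The sketch below is a hypothetical illustration (fixed k = 2, made-up features), not the paper's pipeline:

```python
import numpy as np

def depth_transfer(query_feat, feats, depths, k=2):
    """Minimal data-driven depth transfer: average the depth maps of the
    k database images nearest to the query in feature space, giving a
    coarse depth prior to be refined by geometric reasoning."""
    d = np.linalg.norm(feats - query_feat, axis=1)
    nearest = np.argsort(d)[:k]
    return np.mean([depths[i] for i in nearest], axis=0)

# Made-up 1D features and constant toy depth maps.
feats = np.array([[0.0], [1.0], [10.0]])
depths = [np.ones((2, 2)), 2 * np.ones((2, 2)), 10 * np.ones((2, 2))]
prior = depth_transfer(np.array([0.4]), feats, depths)
```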


International Conference on Computer Graphics and Interactive Techniques | 2014

Interactive intrinsic video editing

Nicolas Bonneel; Kalyan Sunkavalli; James Tompkin; Deqing Sun; Sylvain Paris; Hanspeter Pfister

Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naïvely applying algorithms designed for single images to video produces results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitude too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid ℓ2-ℓp formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results that can be interactively refined by users, at rates that are two orders of magnitude faster than existing tools, to produce high-quality decompositions for challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications including recoloring, retexturing, illumination editing, and lighting-aware compositing.
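The gradient-separation idea can be shown on a 1D toy: small gradients are attributed to smooth illumination and large ones to sparse reflectance edges, with a hard threshold standing in for the paper's ℓ2/ℓp look-up tables, and each layer rebuilt by integrating its own gradients:

```python
import numpy as np

def intrinsic_1d(signal, thresh=0.2):
    """1D toy of gradient separation: gradients above `thresh` are
    attributed to sparse reflectance edges, the rest to smooth
    illumination; each layer is rebuilt by cumulative summation so that
    reflectance + illumination reproduces the input exactly."""
    g = np.diff(signal)
    g_refl = np.where(np.abs(g) > thresh, g, 0.0)   # sparse large edges
    g_illum = g - g_refl                            # smooth remainder
    refl = np.concatenate([[0.0], np.cumsum(g_refl)])
    illum = np.concatenate([[signal[0]], signal[0] + np.cumsum(g_illum)])
    return refl, illum

# Smooth ramp (illumination) plus one large step (reflectance edge).
signal = np.linspace(0.0, 0.5, 6) + np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
refl, illum = intrinsic_1d(signal)
```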


European Conference on Computer Vision | 2010

Visibility subspaces: uncalibrated photometric stereo with shadows

Kalyan Sunkavalli; Todd E. Zickler; Hanspeter Pfister

Photometric stereo relies on inverting the image formation process, and doing this accurately requires reasoning about the visibility of light sources with respect to each image point. While simple heuristics for shadow detection suffice in some cases, they are susceptible to error. This paper presents an alternative approach for handling visibility in photometric stereo, one that is suitable for uncalibrated settings where the light directions are not known. A surface imaged under a finite set of light sources can be divided into regions having uniform visibility, and when the surface is Lambertian, these regions generally map to distinct three-dimensional illumination subspaces. We show that by identifying these subspaces, we can locate the regions and their visibilities, and in the process identify shadows. The result is an automatic method for uncalibrated Lambertian photometric stereo in the presence of shadows, both cast and attached.
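The key observation, that pixels sharing a visibility pattern have intensity vectors (over the n lights) confined to a three-dimensional subspace, can be tested numerically: fit a 3D basis to a region's intensity vectors via SVD and measure a new pixel's residual; a large residual suggests a different visibility pattern (e.g. shadowing). The synthetic Lambertian data below is illustrative:

```python
import numpy as np

def subspace_residual(I_region, I_pixel):
    """Distance of one pixel's intensity vector (over n lights) from the
    3D illumination subspace spanned by a region's pixels; a large
    residual suggests different light visibility (e.g. a cast shadow)."""
    U, _, _ = np.linalg.svd(I_region.T, full_matrices=False)
    B = U[:, :3]                          # 3D basis over the light axis
    proj = B @ (B.T @ I_pixel)
    return np.linalg.norm(I_pixel - proj)

rng = np.random.default_rng(1)
L = rng.random((5, 3))                    # 5 light directions
N = rng.random((6, 3))                    # 6 surface normals in one region
I_region = N @ L.T                        # (pixels x lights), fully visible
v = L @ np.array([0.3, 0.5, 0.7])         # new pixel, same visibility
v_shadow = v.copy()
v_shadow[0] = 0.0                         # light 0 shadowed for this pixel
```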

Collaboration


Dive into Kalyan Sunkavalli's collaborations.

Top Co-Authors


Nicolas Bonneel

Centre national de la recherche scientifique
