Sunil Hadap
Adobe Systems
Publications
Featured research published by Sunil Hadap.
International Conference on Computer Vision | 2013
Michael W. Tao; Sunil Hadap; Jitendra Malik; Ravi Ramamoorthi
Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras; moreover, both cues could not easily be obtained together. In this paper, we present a novel, simple, and principled algorithm that computes dense depth estimates by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high-quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction.
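The two cues can be illustrated in a few lines. Below is a minimal 2D sketch of the idea, assuming an x-u EPI stored as a NumPy array (hypothetical function names; the published algorithm operates on the full 4D EPI with additional confidence measures):

```python
import numpy as np

def depth_cues_from_epi(epi, shears):
    # epi    : 2D x-u epipolar image, shape (U, X); u (angular) is vertical,
    #          x (spatial) is horizontal, matching the paper's convention.
    # shears : candidate slopes, one per depth hypothesis.
    U, X = epi.shape
    u = np.arange(U) - U // 2
    xs = np.arange(X)
    defocus, corresp = [], []
    for s in shears:
        # Shear the EPI so points at this depth hypothesis line up vertically.
        sheared = np.stack([np.interp(xs + s * ui, xs, row)
                            for ui, row in zip(u, epi)])
        refocused = sheared.mean(axis=0)            # vertical (angular) integration
        # Defocus cue: horizontal (spatial) contrast of the refocused row.
        defocus.append(np.abs(np.gradient(refocused)))
        # Correspondence cue: vertical (angular) variance (low = good match).
        corresp.append(sheared.var(axis=0))
    return np.asarray(defocus), np.asarray(corresp)
```

Per pixel, the best depth maximizes the defocus response while minimizing the correspondence variance; combining the two responses into one depth map is where the paper's main contribution lies.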
Computer Graphics Forum | 2001
Sunil Hadap; Nadia Magnenat-Thalmann
In this paper we address the difficult problem of hair dynamics, particularly hair-hair and hair-air interactions. To model these interactions, we propose to consider the hair volume as a continuum and treat the interaction dynamics as fluid dynamics. This proves to be a robust and viable approach to an otherwise very complex phenomenon. However, we retain the individual character of hair, which is vital to visually realistic rendering of hair animation. For that, we develop an elaborate model for the stiffness and inertial dynamics of an individual hair strand. Being a reduced-coordinate formulation, the stiffness dynamics is numerically stable and fast. We then unify the continuum interaction dynamics and the individual strands' stiffness dynamics.
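A natural particle discretization for such a continuum is smoothed particle hydrodynamics; the sketch below shows an SPH-style density estimate over points sampled along the strands (an illustrative poly6 kernel, not necessarily the paper's exact formulation):

```python
import numpy as np

def sph_density(points, h, mass=1.0):
    # points : (N, 3) particle positions sampled along the hair strands.
    # h      : smoothing radius of the kernel.
    # O(N^2) brute force for clarity; real implementations use spatial hashing.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    w = np.where(d2 < h * h, (h * h - d2) ** 3, 0.0)   # poly6 kernel support
    k = 315.0 / (64.0 * np.pi * h ** 9)                # poly6 normalization
    return mass * k * w.sum(axis=1)
```

Pressure forces derived from this density push overly dense hair volumes apart, which is how a continuum model captures hair-hair collisions without testing strand pairs individually.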
IEEE Visualization | 1999
Sunil Hadap; Endre Bangerter; Pascal Volino; Nadia Magnenat-Thalmann
This paper describes a method to simulate realistic wrinkles on clothes without a fine mesh or large computational overhead. Cloth exhibits very little in-plane deformation, as most of the deformation comes from buckling; this can be viewed as an area-conservation property of cloth. The method's area-conservation formulation modulates a user-defined wrinkle pattern based on the deformation of each individual triangle. The methodology permits small in-plane deformation stiffnesses and a coarse mesh for the numerical simulation, which makes cloth simulation fast and robust. Moreover, the ability to design wrinkles (even on generalized deformable models) makes this method versatile for synthetic image generation. Although inspired by the cloth-wrinkling problem, the method is geometric in nature and can be extended to other wrinkling phenomena.
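One plausible reading of the per-triangle modulation is sketched below: the wrinkle amplitude grows with the area a triangle loses relative to its rest state (hypothetical names; the paper's formulation may also account for the wrinkle direction):

```python
import numpy as np

def wrinkle_modulation(rest_verts, deformed_verts, tris):
    # rest_verts, deformed_verts : (V, 3) vertex positions; tris : (T, 3) indices.
    # Area conservation: in-plane shrinkage is absorbed by out-of-plane
    # buckling, so triangles that lose area get a stronger wrinkle pattern.
    def areas(v):
        a, b, c = v[tris[:, 0]], v[tris[:, 1]], v[tris[:, 2]]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

    ratio = areas(deformed_verts) / areas(rest_verts)
    return np.clip(1.0 - ratio, 0.0, 1.0)   # per-triangle amplitude scale
```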
Archive | 2000
Sunil Hadap; Nadia Magnenat-Thalmann
In this paper, a new hair styling method is presented. The method draws on remarkable similarities between static hair shape and snapshots of fluid flow around an obstacle. Accordingly, the hair shape is modeled as streamlines of a fluid flow. The model offers the ability to control overall hair shape around the head. At the same time, it makes it possible to model rich details such as waves and curls. Moreover, the continuum property of fluid flow gives a sound basis for modeling complex hair-hair interaction. Based on the model, we develop a fast, intuitive, and easy-to-use hair styler. The designer can create intricate hairstyles quickly and easily, without worrying about hair-body and hair-hair interactions. The techniques are also relevant to interactive fluid flow modeling for computer graphics.
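The core operation, growing a strand as a streamline, reduces to integrating a point through the flow field; a minimal sketch, assuming a user-supplied velocity function (e.g. ideal flow around a sphere standing in for the head):

```python
import numpy as np

def trace_strand(root, velocity, step=0.01, n_steps=400):
    # root     : strand root position on the scalp.
    # velocity : callable p -> flow velocity at p, e.g. an analytic
    #            ideal flow around a sphere representing the head.
    pts = [np.asarray(root, dtype=float)]
    for _ in range(n_steps):
        v = velocity(pts[-1])
        speed = np.linalg.norm(v)
        if speed < 1e-8:
            break                               # stagnation point: stop
        pts.append(pts[-1] + step * v / speed)  # forward Euler along the flow
    return np.array(pts)
```

Because streamlines of a divergence-free flow never cross, strands traced this way avoid the head and each other by construction, which is the appeal of the fluid analogy.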
ACM Transactions on Graphics | 2014
Kevin Karsch; Kalyan Sunkavalli; Sunil Hadap; Nathan A. Carr; Hailin Jin; Rafael Fonte; Michael Sittig; David A. Forsyth
We present a user-friendly image editing system that supports drag-and-drop object insertion (the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo, and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results and achieves the same level of realism as techniques that require significant user interaction.
Computer Vision and Pattern Recognition | 2013
Hyeongwoo Kim; Hailin Jin; Sunil Hadap; In So Kweon
We present a novel method to separate specular reflection from a single image. Separating an image into diffuse and specular components is an ill-posed problem due to the lack of observations. Existing methods rely on a specular-free image to detect and estimate specularity; these, however, may confuse diffuse pixels that have the same hue but a different saturation value with specular pixels. Our method is based on the novel observation that, for most natural images, the dark channel provides an approximate specular-free image. We also propose a maximum a posteriori formulation which robustly recovers the specular reflection and chromaticity despite the hue-saturation ambiguity. We demonstrate the effectiveness of the proposed algorithm on real and synthetic examples. Experimental results show that our method significantly outperforms state-of-the-art methods in separating specular reflection.
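The key observation admits a very short sketch, assuming a linear RGB image under roughly white illumination (the full method refines this crude estimate with the MAP formulation described above):

```python
import numpy as np

def pseudo_specular_free(img):
    # img : float RGB image in [0, 1], shape (H, W, 3).
    # A white specular term adds equally to all channels, while most natural
    # diffuse colors have a near-zero minimum channel, so the per-pixel dark
    # channel approximates the specular layer.
    dark = img.min(axis=2)                    # per-pixel dark channel
    specular_free = img - dark[..., None]     # approximate specular-free image
    return specular_free, dark
```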
Computers & Graphics | 2010
Jorge Lopez-Moreno; Sunil Hadap; Erik Reinhard; Diego Gutierrez
Compositing an image of an object into another image is a common task in both image processing and augmented reality. To ensure a seamless composition, it is often necessary to infer the lighting conditions of the image and adjust the illumination of the inserted object accordingly. Here, we present a novel algorithm for multiple light detection that leverages the limitations of the human visual system (HVS) described in the literature and measured by our own psychophysical study. Finally, we show an application of our method to both image compositing and synthetic object insertion.
Computer Vision and Pattern Recognition | 2017
Zhixin Shu; Ersin Yumer; Sunil Hadap; Kalyan Sunkavalli; Eli Shechtman; Dimitris Samaras
Traditional face editing methods often require a number of sophisticated, task-specific algorithms to be applied one after the other, a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e., normals), albedo, and lighting, together with an alpha matte. We show that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangled latent representation allows semantically meaningful edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.
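Physically-based formation modules for faces commonly render the image as albedo times a shading term computed from normals and second-order spherical-harmonics lighting; a minimal NumPy sketch of such a module, assuming Lambertian reflectance (illustrative, not the paper's exact network layer):

```python
import numpy as np

def sh_basis(normals):
    # Second-order spherical-harmonics basis at unit normals, shape (H, W, 3).
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=-1)                                          # (H, W, 9)

def render_face(albedo, normals, lighting):
    # albedo : (H, W, 3); normals : (H, W, 3) unit vectors;
    # lighting : (9, 3) SH coefficients per color channel.
    shading = sh_basis(normals) @ lighting               # (H, W, 3)
    return albedo * shading                              # rendered image
```

Because every step is differentiable, a reconstruction loss on the rendered image can supervise the normals, albedo, and lighting branches jointly, which is what makes training on unlabeled in-the-wild photos feasible.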
International Conference on Computer Graphics and Interactive Techniques | 2015
Menglei Chai; Linjie Luo; Kalyan Sunkavalli; Nathan A. Carr; Sunil Hadap; Kun Zhou
We propose a novel system to reconstruct a high-quality hair depth map from a single portrait photo with minimal user input. We achieve this by combining depth cues such as occlusions, silhouettes, and shading with a novel 3D helical structural prior for hair reconstruction. We fit a parametric morphable face model to the input photo and construct a base shape in the face, hair, and body regions using occlusion and silhouette constraints. We then estimate the normals in the hair region via a Shape-from-Shading-based optimization that uses the lighting inferred from the face model and enforces an adaptive albedo prior modeling the typical color and occlusion variations of hair. We introduce a 3D helical hair prior that captures the geometric structure of hair and show that it can be robustly and automatically recovered from the input photo. Our system combines the base shape, the normals estimated by Shape from Shading, and the 3D helical prior to reconstruct high-quality 3D hair models. Our single-image reconstruction closely matches the results of a state-of-the-art multi-view stereo method applied to a multi-view dataset. Our technique can reconstruct a wide variety of hairstyles, ranging from short to long and from straight to messy, and we demonstrate the use of our 3D hair models for high-quality portrait relighting, novel view synthesis, and 3D-printed portrait reliefs.
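The helical prior is easiest to picture through its parametric form; below is a small illustrative generator for such curves (hypothetical names; the paper fits these structures to the image robustly and automatically rather than generating them):

```python
import numpy as np

def helix_points(t, center, axis, radius, pitch, phase=0.0):
    # Parametric 3D helix: the geometric form behind a helical hair prior.
    # axis must be a unit vector; (a, b, axis) form an orthonormal frame.
    axis = np.asarray(axis, dtype=float)
    a = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(a) < 1e-6:                  # axis parallel to x: use y
        a = np.cross(axis, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(axis, a)
    t = np.asarray(t, dtype=float)[:, None]
    return (np.asarray(center, dtype=float)
            + radius * (np.cos(t + phase) * a + np.sin(t + phase) * b)
            + pitch * t * axis)
```

Straight strands, waves, and curls all arise from the same form by varying radius and pitch, which is what makes a single helical family a usable prior across hairstyles.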
ACM Transactions on Graphics | 2017
Zhixin Shu; Eli Shechtman; Dimitris Samaras; Sunil Hadap
Closed eyes and look-aways can ruin precious moments captured in photographs. In this article, we present a new framework for automatically editing eyes in photographs. We leverage a user's personal photo collection to find a "good" set of reference eyes and transfer them onto a target image. Our example-based approach is robust and effective for realistic image editing. A fully automatic pipeline for realistic eye editing is challenging due to the unconstrained conditions under which the face appears in a typical photo collection. We use crowd-sourced human evaluations to understand which aspects of the target-reference image pair produce the most realistic results. We then train a model that automatically selects the top-ranked reference candidate(s) by narrowing the gap in pose, local contrast, lighting conditions, and even expression. Finally, we develop a comprehensive pipeline of three-dimensional face estimation, image warping, relighting, image harmonization, automatic segmentation, and image compositing to achieve highly believable results. We evaluate the performance of our method via quantitative and crowd-sourced experiments.