Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Satya P. Mallick is active.

Publication


Featured research published by Satya P. Mallick.


Computer Vision and Pattern Recognition | 2005

Beyond Lambert: reconstructing specular surfaces using color

Satya P. Mallick; Todd E. Zickler; David J. Kriegman; Peter N. Belhumeur

We present a photometric stereo method for non-diffuse materials that does not require an explicit reflectance model or reference object. By computing a data-dependent rotation of RGB color space, we show that the specular reflection effects can be separated from the much simpler, diffuse (approximately Lambertian) reflection effects for surfaces that can be modeled with dichromatic reflectance. Images in this transformed color space are used to obtain photometric reconstructions that are independent of the specular reflectance. In contrast to other methods for highlight removal based on dichromatic color separation (e.g., color histogram analysis and/or polarization), we do not explicitly recover the specular and diffuse components of an image. Instead, we simply find a transformation of color space that yields more direct access to shape information. The method is purely local and is able to handle surfaces with arbitrary texture.
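
The data-dependent rotation described above can be sketched in a few lines. The construction below is illustrative only (the function name, the helper-vector choice, and the assumption of a known illuminant color are mine, not the paper's): the RGB frame is rotated so that one axis aligns with the source color, and the remaining two channels, which carry no specular contribution under the dichromatic model, are kept for Lambertian-style photometric reconstruction.

import numpy as np

def specular_invariant_rotation(s):
    """Rotation of RGB space whose first axis aligns with the illuminant
    color s; the other two channels are specular-free under the
    dichromatic model (illustrative sketch)."""
    s = np.asarray(s, dtype=float)
    s = s / np.linalg.norm(s)
    # Build an orthonormal basis {s, u, v} by Gram-Schmidt on a helper vector.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, s)) > 0.9:           # avoid a near-parallel helper
        helper = np.array([0.0, 1.0, 0.0])
    u = helper - np.dot(helper, s) * s
    u /= np.linalg.norm(u)
    v = np.cross(s, u)
    return np.vstack([s, u, v])                # 3x3 rotation matrix

# Usage sketch: 'image' is HxWx3 RGB, 'illum' a hypothetical source color.
illum = np.array([0.9, 0.8, 0.7])
R = specular_invariant_rotation(illum)
image = np.random.rand(4, 4, 3)                # stand-in for a real image
transformed = image @ R.T                      # per-pixel rotation
specular_free = transformed[..., 1:]           # two channels independent of specularities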


Computer Vision and Pattern Recognition | 2007

Resolving the Generalized Bas-Relief Ambiguity by Entropy Minimization

Neil Gordon Alldrin; Satya P. Mallick; David J. Kriegman

It is well known in the photometric stereo literature that uncalibrated photometric stereo, where light source strength and direction are unknown, can recover the surface geometry of a Lambertian object up to a 3-parameter linear transform known as the generalized bas-relief (GBR) ambiguity. Many techniques have been proposed for resolving the GBR ambiguity, typically by exploiting prior knowledge of the light sources, the object geometry, or non-Lambertian effects such as specularities. A less celebrated consequence of the GBR transformation is that the albedo at each surface point is transformed along with the geometry. Thus, it should be possible to resolve the GBR ambiguity by exploiting priors on the albedo distribution. To the best of our knowledge, the only time the albedo distribution has been used to resolve the GBR is in the case of uniform albedo. We propose a new prior on the albedo distribution: that the entropy of the distribution should be low. This prior is justified by the fact that many objects in the real world are composed of a small finite set of albedo values.
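
A minimal sketch of the entropy prior, assuming pseudonormals (albedo-scaled normals) from uncalibrated photometric stereo and one common convention in which the pseudonormal field transforms by the inverse transpose of the GBR matrix; the function names, histogram settings, and use of SciPy's Nelder-Mead optimizer are my choices, not the authors':

import numpy as np
from scipy.optimize import minimize

def gbr_matrix(mu, nu, lam):
    """Generalized bas-relief matrix (one common parameterization)."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [mu,  nu,  lam]])

def albedo_entropy(params, B):
    """Entropy of the albedo histogram after a GBR transform.
    B is an Nx3 array of pseudonormals (albedo-scaled normals)."""
    mu, nu, lam = params
    if abs(lam) < 1e-6:                        # keep the transform invertible
        return np.inf
    G = gbr_matrix(mu, nu, lam)
    Bt = B @ np.linalg.inv(G)                  # b' = G^{-T} b, applied row-wise
    albedo = np.linalg.norm(Bt, axis=1)
    hist, _ = np.histogram(albedo, bins=64, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# Usage sketch on synthetic pseudonormals standing in for a real decomposition.
B = np.random.randn(1000, 3)
res = minimize(albedo_entropy, x0=[0.0, 0.0, 1.0], args=(B,), method='Nelder-Mead')
mu, nu, lam = res.x                            # estimated GBR parameters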


International Journal of Computer Vision | 2008

Color Subspaces as Photometric Invariants

Todd E. Zickler; Satya P. Mallick; David J. Kriegman; Peter N. Belhumeur

Complex reflectance phenomena such as specular reflections confound many vision problems since they produce image ‘features’ that do not correspond directly to intrinsic surface properties such as shape and spectral reflectance. A common approach to mitigate these effects is to explore functions of an image that are invariant to these photometric events. In this paper we describe a class of such invariants that result from exploiting color information in images of dichromatic surfaces. These invariants are derived from illuminant-dependent ‘subspaces’ of RGB color space, and they enable the application of Lambertian-based vision techniques to a broad class of specular, non-Lambertian scenes. Using implementations of recent algorithms taken from the literature, we demonstrate the practical utility of these invariants for a wide variety of applications, including stereo, shape from shading, photometric stereo, material-based segmentation, and motion estimation.
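
A small check of the invariance property, under the assumption that a highlight adds a term along a known, unit source color: projecting each pixel's color onto the subspace orthogonal to the source color leaves the result unchanged by the specular term, which is what lets Lambertian-based techniques operate on the projected values. The names below are illustrative.

import numpy as np

def project_off_source(c, s):
    """Project an RGB value c onto the 2D subspace orthogonal to the
    (unit) source color s; the result ignores any specular term along s."""
    s = s / np.linalg.norm(s)
    return c - np.dot(c, s) * s

s = np.array([0.8, 0.9, 1.0]); s = s / np.linalg.norm(s)   # assumed source color
diffuse = np.array([0.3, 0.5, 0.2])
with_highlight = diffuse + 0.7 * s                          # add a specular term along s
# Both colors map to the same invariant value.
assert np.allclose(project_off_source(diffuse, s),
                   project_off_source(with_highlight, s))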


European Conference on Computer Vision | 2006

Specularity removal in images and videos: a PDE approach

Satya P. Mallick; Todd E. Zickler; Peter N. Belhumeur; David J. Kriegman

We present a unified framework for separating specular and diffuse reflection components in images and videos of textured scenes. This can be used for specularity removal and for independently processing, filtering, and recombining the two components. Beginning with a partial separation provided by an illumination-dependent color space, the challenge is to complete the separation using spatio-temporal information. This is accomplished by evolving a partial differential equation (PDE) that iteratively erodes the specular component at each pixel. A family of PDEs appropriate for differing image sources (still images vs. videos), differing prior information (e.g., highly vs. lightly textured scenes), or differing prior computations (e.g., optical flow) is introduced. In contrast to many other methods, explicit segmentation and/or manual intervention are not required. We present results on high-quality images and video acquired in the laboratory in addition to images taken from the Internet. Results on the latter demonstrate robustness to low dynamic range, JPEG artifacts, and lack of knowledge of illuminant color. Empirical comparison to physical removal of specularities using polarization is provided. Finally, an application termed dichromatic editing is presented in which the diffuse and the specular components are processed independently to produce a variety of visual effects.
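
The following is a deliberately simplified, single-image sketch of the erosion idea rather than the paper's family of PDEs: the estimated specular layer shrinks under a discretized erosion equation, slowed where a specular-free guide channel shows strong texture. The function name, the edge-stopping weight, and the synthetic inputs are assumptions for illustration.

import numpy as np

def erode_specular(spec, guide, iters=200, dt=0.2, k=0.1):
    """Simplified erosion of an estimated specular layer (illustrative only).
    spec  : HxW initial specular estimate
    guide : HxW specular-free channel used to slow erosion across texture edges
    Discretizes ds/dt = -g(x) * |grad s|, with g small at strong guide edges."""
    s = spec.astype(float).copy()
    gy, gx = np.gradient(guide.astype(float))
    g = 1.0 / (1.0 + (np.hypot(gx, gy) / k) ** 2)   # edge-stopping weight
    for _ in range(iters):
        sy, sx = np.gradient(s)
        s = np.maximum(s - dt * g * np.hypot(sx, sy), 0.0)
    return s

# Usage sketch with synthetic data standing in for a real decomposition.
spec0 = np.random.rand(64, 64)
guide = np.random.rand(64, 64)
eroded = erode_specular(spec0, guide)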


European Conference on Computer Vision | 2004

On Refractive Optical Flow

Sameer Agarwal; Satya P. Mallick; David J. Kriegman; Serge J. Belongie

This paper presents a novel generalization of the optical flow equation to the case of refraction, and it describes a method for recovering the refractive structure of an object from a video sequence acquired as the background behind the refracting object moves. By structure here we mean a representation of how the object warps and attenuates (or amplifies) the light passing through it. We distinguish between the cases when the background motion is known and unknown. We show that when the motion is unknown, the refractive structure can only be estimated up to a six-parameter family of solutions without additional sources of information. Methods for solving for the refractive structure are described in both cases. The performance of the algorithm is demonstrated on real data, and results of applying the estimated refractive structure to the task of environment matting and compositing are presented.
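
One illustrative way to write the formation model described above (the notation is mine, not necessarily the paper's): with B the background pattern, w(x) the per-pixel refractive warp, and a(x) the attenuation (or amplification),

\[ I(\mathbf{x}, t) = a(\mathbf{x})\, B\big(\mathbf{x} + \mathbf{w}(\mathbf{x}),\, t\big). \]

If the background is a fixed pattern translating with known velocity v, so that B(y, t) = B_0(y - v t), differentiating in time gives

\[ \frac{\partial I}{\partial t}(\mathbf{x}, t) = -\,a(\mathbf{x})\,\nabla B_0\big(\mathbf{x} + \mathbf{w}(\mathbf{x}) - \mathbf{v}t\big)\cdot\mathbf{v}, \]

a generalized brightness-change constraint that, given the observed images and the background motion, constrains the unknown a and w at each pixel.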


Computer Vision and Pattern Recognition | 2007

Isotropy, Reciprocity and the Generalized Bas-Relief Ambiguity

Ping Tan; Satya P. Mallick; Long Quan; David J. Kriegman; Todd E. Zickler

A set of images of a Lambertian surface under varying lighting directions defines its shape up to a three-parameter generalized bas-relief (GBR) ambiguity. In this paper, we examine this ambiguity in the context of surfaces having an additive non-Lambertian reflectance component, and we show that the GBR ambiguity is resolved by any non-Lambertian reflectance function that is isotropic and spatially invariant. The key observation is that each point on a curved surface under directional illumination is a member of a family of points that are in isotropic or reciprocal configurations. We show that the GBR can be resolved in closed form by identifying members of these families in two or more images. Based on this idea, we present an algorithm for recovering full Euclidean geometry from a set of uncalibrated photometric stereo images, and we evaluate it empirically on a number of examples.


Computer Vision and Pattern Recognition | 2006

Structure and View Estimation for Tomographic Reconstruction: A Bayesian Approach

Satya P. Mallick; Sameer Agarwal; David J. Kriegman; Serge J. Belongie; Bridget Carragher; Clinton S. Potter

This paper addresses the problem of reconstructing the density of a scene from multiple projection images produced by modalities such as X-ray and electron microscopy, where an image value is related to the integral of the scene density along a 3D line segment between a radiation source and a point on the image plane. While computed tomography (CT) addresses this problem when the absolute orientation of the image plane and radiation source directions are known, this paper addresses the problem when the orientations are unknown; it is akin to the structure-from-motion (SFM) problem when the extrinsic camera parameters are unknown. We study the problem within the context of reconstructing the density of protein macromolecules in Cryogenic Electron Microscopy (cryo-EM), where images are very noisy and existing techniques use several thousand images. In a non-degenerate configuration, the viewing planes corresponding to two projections intersect in a line in 3D. Using the geometry of the imaging setup, it is possible to determine the projections of this 3D line on the two image planes. In turn, the problem can be formulated as a type of orthographic structure from motion from line correspondences, where the line correspondences between two views are unreliable due to image noise. We formulate the task as the problem of denoising a correspondence matrix and present a Bayesian solution to it. Subsequently, the absolute orientation of each projection is determined, followed by density reconstruction. We show results on cryo-EM images of proteins and compare our results to those of Electron Micrograph Analysis (EMAN), a widely used reconstruction tool in cryo-EM.
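
A sketch of the common-line geometry mentioned above, under the assumption of orthographic projection along the third axis of each orientation matrix; the convention and function names are mine. Given two hypothetical orientations, the 3D line in which the two viewing planes intersect is projected into each image and its in-plane angle is returned.

import numpy as np
from scipy.spatial.transform import Rotation

def common_line_angles(R1, R2):
    """In-plane angles (radians) of the common line between the viewing
    planes of two orthographic projections with orientations R1, R2.
    Convention (assumed): row i of R is the world direction of image
    axis i, and the projection direction is the third row."""
    n1, n2 = R1[2], R2[2]                     # viewing-plane normals
    c = np.cross(n1, n2)                      # direction of the common 3D line
    c = c / np.linalg.norm(c)                 # degenerate if the views are parallel
    d1 = R1[:2] @ c                           # common line expressed in image 1
    d2 = R2[:2] @ c                           # ... and in image 2
    return np.arctan2(d1[1], d1[0]), np.arctan2(d2[1], d2[0])

# Usage sketch with two synthetic orientations.
R1 = Rotation.from_euler('zyx', [10, 40, 0], degrees=True).as_matrix()
R2 = Rotation.from_euler('zyx', [80, -30, 5], degrees=True).as_matrix()
theta1, theta2 = common_line_angles(R1, R2)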


Intelligent Vehicles Symposium | 2003

Real-time driver affect analysis and tele-viewing system

Joel C. McCall; Satya P. Mallick; Mohan M. Trivedi

This paper deals with the development of novel sensory systems and interfaces to enhance the safety of a driver who may be using a cell phone. It describes a system for real-time affect analysis and tele-viewing in order to bring the context of the driver to remote users. The system is divided into two modules. The affect recognition module recognizes six emotional states of the driver. The face-modeling module generates a 3D model of the driver's face using two images and synthesizes the emotions on it. Depending on bandwidth, either raw video, a warped 3D model, or an iconic representation is sent to the remote user.


International Conference on Multimedia and Expo | 2003

Parametric face modeling and affect synthesis

Satya P. Mallick; Mohan M. Trivedi

In this paper, we present a fast algorithm for automatically generating a 3D model of a human face and synthesizing affects on it. This work is a step towards low-bandwidth virtual tele-presence using avatars. The idea is to generate a 3D model of a person's face and animate different emotions on the face at a remote observer's end, thus eliminating the need to send raw video. This research is part of a larger project, which deals with the development of novel sensory systems and interfaces to enhance the safety of a driver. The proposed algorithm uses a stereo pair of images of a driver's face and creates a 3D mesh model by deforming and texture mapping a predefined 3D mesh.


International Conference on Computer Graphics and Interactive Techniques | 2006

Dichromatic separation: specularity removal and editing

Satya P. Mallick; Todd E. Zickler; Peter N. Belhumeur; David J. Kriegman

The reflectance of a wide variety of materials (plastics, plant leaves, glazed ceramics, human skin, fruits and vegetables, paper, leather, etc.) can be described as a linear combination of specular and diffuse components, and for many applications we can benefit from separating an image in this way. Figures 2 and 3, for example, depict photo-editing and e-cosmetic applications in which visual effects are simulated by independently processing separated diffuse and specular image layers. Similarly, specular/diffuse separation plays an important role for image-based modeling applications in which diffuse (specular-free) texture maps are sought. Separation of the two reflectance components in a single image is an ill-posed problem. In the past its solution has required the manual identification of highlight regions, the use of special acquisition systems (e.g., polarizing filters), or restrictive assumptions about the scene (e.g., untextured surfaces). Recently, we have introduced a method for specular/diffuse separation that overcomes many of these limitations [Mallick et al. 2006], and in this sketch, we build on this work, showing how it can be used for dichromatic editing — processing and recombining the two reflectance components for various visual effects. We present results on high-quality images and videos acquired in the laboratory in addition to images taken from the Internet. Results on the latter demonstrate robustness to low dynamic range, JPEG artifacts, and lack of knowledge of illuminant color. Similar to most existing techniques for specular/diffuse separation, our approach is based on exploiting color differences between specular and diffuse reflections as described by Shafer’s dichromatic model. According to this model, the color of the specular component at each surface point is the same as that of the illuminant (S), while the color of the diffuse component depends on the reflectance of the surface and can change from point to point.
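
The dichromatic model invoked in the last sentences can be written compactly (the notation below is illustrative): the observed color at a surface point x is a sum of a diffuse term with a spatially varying body color and a specular term sharing the illuminant color S,

\[ \mathbf{I}(\mathbf{x}) = m_d(\mathbf{x})\,\mathbf{D}(\mathbf{x}) + m_s(\mathbf{x})\,\mathbf{S}, \]

where m_d and m_s are scalar shading factors that depend on geometry, D(x) is the diffuse color of the surface at x, and S is the single illuminant color. Separation amounts to estimating the two terms at every pixel.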

Collaboration


Dive into Satya P. Mallick's collaborations.

Top Co-Authors

Denis Fellmann, Scripps Research Institute
Anchi Cheng, Scripps Research Institute
Joel Quispe, Scripps Research Institute