Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ravi Ramamoorthi is active.

Publication


Featured research published by Ravi Ramamoorthi.


International Conference on Computer Graphics and Interactive Techniques | 2001

An efficient representation for irradiance environment maps

Ravi Ramamoorthi; Pat Hanrahan

We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the Cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.
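
The quadratic-polynomial evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the constants c1..c5 are the explicit quadratic-form values given in the paper, while the lighting coefficients fed in are placeholders.

```python
import numpy as np

# Quadratic-form constants from the paper's explicit formulae.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance_matrix(L):
    """Build the 4x4 matrix M so that E(n) = n~^T M n~ with n~ = (x, y, z, 1).

    L holds the 9 spherical-harmonic lighting coefficients in the order
    (L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22).
    """
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    return np.array([
        [C1 * L22,   C1 * L2m2, C1 * L21,  C2 * L11],
        [C1 * L2m2, -C1 * L22,  C1 * L2m1, C2 * L1m1],
        [C1 * L21,   C1 * L2m1, C3 * L20,  C2 * L10],
        [C2 * L11,   C2 * L1m1, C2 * L10,  C4 * L00 - C5 * L20],
    ])

def irradiance(M, n):
    """Evaluate the irradiance quadratic at unit surface normal n."""
    n4 = np.append(n / np.linalg.norm(n), 1.0)
    return float(n4 @ M @ n4)
```

For a constant (ambient) environment, only L00 is nonzero and the irradiance is the same for every normal, which is a quick sanity check on the form of M.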


International Conference on Computer Graphics and Interactive Techniques | 2001

A signal-processing framework for inverse rendering

Ravi Ramamoorthi; Pat Hanrahan

Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2001

On the relationship between radiance and irradiance: determining the illumination from images of a convex Lambertian object

Ravi Ramamoorthi; Pat Hanrahan

We present a theoretical analysis of the relationship between incoming radiance and irradiance. Specifically, we address the question of whether it is possible to compute the incident radiance from knowledge of the irradiance at all surface orientations. This is a fundamental question in computer vision and inverse radiative transfer. We show that the irradiance can be viewed as a simple convolution of the incident illumination, i.e., radiance and a clamped cosine transfer function. Estimating the radiance can then be seen as a deconvolution operation. We derive a simple closed-form formula for the irradiance in terms of spherical harmonic coefficients of the incident illumination and demonstrate that the odd-order modes of the lighting with order greater than 1 are completely annihilated. Therefore these components cannot be estimated from the irradiance, contradicting a theorem that is due to Preisendorfer. A practical realization of the radiance-from-irradiance problem is the estimation of the lighting from images of a homogeneous convex curved Lambertian surface of known geometry under distant illumination, since a Lambertian object reflects light equally in all directions proportional to the irradiance. We briefly discuss practical and physical considerations and describe a simple experimental test to verify our theoretical results.
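
The annihilation of odd-order modes above order 1 can be seen directly in the spherical-harmonic coefficients of the clamped-cosine kernel. Below is a small sketch using the standard closed form for those coefficients; it is an illustration of the stated result, not code from the paper.

```python
import math

def clamped_cosine_coeff(l):
    """Spherical-harmonic coefficient A_l of the clamped-cosine
    (Lambertian) transfer function, per the standard closed form."""
    if l == 1:
        return 2.0 * math.pi / 3.0
    if l % 2 == 1:
        return 0.0  # odd modes with order > 1 are completely annihilated
    h = l // 2
    return (2.0 * math.pi * (-1.0) ** (h - 1) / ((l + 2) * (l - 1))
            * math.factorial(l) / (2 ** l * math.factorial(h) ** 2))
```

The even-order coefficients also fall off rapidly (A_0 = pi, A_2 = pi/4, A_4 = -pi/24), which is why the irradiance-from-radiance inverse problem is ill-conditioned beyond the lowest modes.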


International Conference on Computer Graphics and Interactive Techniques | 2005

Efficiently combining positions and normals for precise 3D geometry

Diego Nehab; Szymon Rusinkiewicz; James Davis; Ravi Ramamoorthi

Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine outputs from data sources such as stereo triangulation and photometric stereo, which have different error-vs.-frequency characteristics. We demonstrate the ability of our technique to both recover high-frequency details and avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object

Ravi Ramamoorthi

We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations.


International Conference on Computer Graphics and Interactive Techniques | 2004

Triple product wavelet integrals for all-frequency relighting

Ren Ng; Ravi Ramamoorthi; Pat Hanrahan

This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of pre-computed light transport. We use factored forms, separately pre-computing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating non-linear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.
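
In the simplest of the bases mentioned above, point samples, the triple product integral is easy to state: the tripling coefficients are diagonal, so the integral reduces to a weighted elementwise product, O(N) in the number of samples. A minimal sketch of that baseline (not the paper's Haar-wavelet algorithm):

```python
import numpy as np

def triple_product(lighting, visibility, brdf, domain_measure=1.0):
    """Triple product integral  ∫ L(ω) V(ω) ρ(ω) dω  in the point-sample
    basis: the tripling coefficients are diagonal, so the integral is just
    the mean of the elementwise product, scaled by the domain measure."""
    L, V, R = map(np.asarray, (lighting, visibility, brdf))
    return float((L * V * R).sum() * domain_measure / L.size)
```

The wavelet algorithms in the paper keep this per-vertex cost low even when all three factors are sparsely approximated, which the dense point-sample form above cannot do.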


International Conference on Computer Vision | 2013

Depth from Combining Defocus and Correspondence Using Light-Field Cameras

Michael W. Tao; Sunil Hadap; Jitendra Malik; Ravi Ramamoorthi

Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras; moreover, both cues could not easily be obtained together. In this paper, we present a novel, simple, and principled algorithm that computes dense depth estimates by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction.
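
The two cue responses on a 2D EPI can be sketched in a few lines. This toy version omits the depth-dependent shearing step (in the full algorithm the EPI is sheared to each candidate depth before these statistics are computed):

```python
import numpy as np

def epi_cues(epi):
    """Given a 2D EPI with angular rows (u) and spatial columns (x), return:
    - defocus response: spatial variance of the angular mean (high spatial
      contrast after angular integration indicates the sheared EPI is in focus);
    - correspondence response: mean angular variance per column (low when
      all views agree on the scene point)."""
    angular_mean = epi.mean(axis=0)          # integrate over u (angular)
    defocus = float(angular_mean.var())      # spatial contrast after integration
    correspondence = float(epi.var(axis=0).mean())  # disagreement across views
    return defocus, correspondence
```

At the correct shear, the defocus response is maximized while the correspondence response is minimized, and combining the two gives a more reliable per-pixel depth estimate than either alone.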


International Conference on Computer Graphics and Interactive Techniques | 2002

Frequency space environment map rendering

Ravi Ramamoorthi; Pat Hanrahan

We present a new method for real-time rendering of objects with complex isotropic BRDFs under distant natural illumination, as specified by an environment map. Our approach is based on spherical frequency space analysis and includes three main contributions. Firstly, we are able to theoretically analyze required sampling rates and resolutions, which have traditionally been determined in an ad-hoc manner. We also introduce a new compact representation, which we call a spherical harmonic reflection map (SHRM), for efficient representation and rendering. Finally, we show how to rapidly prefilter the environment map to compute the SHRM---our frequency domain prefiltering algorithm is generally orders of magnitude faster than previous angular (spatial) domain approaches.


International Conference on Computer Graphics and Interactive Techniques | 2004

Efficient BRDF importance sampling using a factored representation

Jason Lawrence; Szymon Rusinkiewicz; Ravi Ramamoorthi

High-quality Monte Carlo image synthesis requires the ability to importance sample realistic BRDF models. However, analytic sampling algorithms exist only for the Phong model and its derivatives such as Lafortune and Blinn-Phong. This paper demonstrates an importance sampling technique for a wide range of BRDFs, including complex analytic models such as Cook-Torrance and measured materials, which are being increasingly used for realistic image synthesis. Our approach is based on a compact factored representation of the BRDF that is optimized for sampling. We show that our algorithm consistently offers better efficiency than alternatives that involve fitting and sampling a Lafortune or Blinn-Phong lobe, and is more compact than sampling strategies based on tabulating the full BRDF. We are able to efficiently create images involving multiple measured and analytic BRDFs, under both complex direct lighting and global illumination.
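
The core operation behind sampling a factored, tabulated representation is inverting the CDF of a 1D tabulated density. A minimal sketch of that building block (illustrative only; the paper applies this per factor of the BRDF, with the remaining details omitted here):

```python
import numpy as np

def tabulated_sampler(pdf_values, bin_edges):
    """Build an inverse-CDF sampler for a 1D piecewise-constant density
    tabulated over bins, the basic operation applied per factor when
    importance sampling a factored BRDF representation."""
    widths = np.diff(bin_edges)
    probs = pdf_values * widths
    probs = probs / probs.sum()               # normalize to a discrete PMF
    cdf = np.concatenate([[0.0], np.cumsum(probs)])

    def sample(u):
        """Map a uniform variate u in [0, 1) to a sample from the density."""
        i = int(np.searchsorted(cdf, u, side="right")) - 1
        i = min(max(i, 0), len(widths) - 1)
        frac = (u - cdf[i]) / max(cdf[i + 1] - cdf[i], 1e-12)
        return float(bin_edges[i] + frac * widths[i])

    return sample
```

Because the table is small relative to the full 4D BRDF, this kind of per-factor sampler stays compact while matching the measured lobe shape far better than fitting a single analytic (e.g. Blinn-Phong) lobe.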


International Conference on Computer Graphics and Interactive Techniques | 2000

Efficient image-based methods for rendering soft shadows

Maneesh Agrawala; Ravi Ramamoorthi; Alan Heirich; Laurent Moll

We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. We also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. Our first approach—layered attenuation maps—achieves interactive rendering rates, but limits sampling flexibility, while our second method—coherence-based raytracing of depth images—is not interactive, but removes the limitations on sampling and yields high quality images at a fraction of the cost of conventional raytracers. Combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering.

Collaboration


Dive into Ravi Ramamoorthi's collaborations.

Top Co-Authors

Ling-Qi Yan

University of California

Ting-Chun Wang

University of California

Jiamin Bai

University of California
