Sumanta N. Pattanaik
University of Central Florida
Publication
Featured research published by Sumanta N. Pattanaik.
International Conference on Computer Graphics and Interactive Techniques | 1996
James A. Ferwerda; Sumanta N. Pattanaik; Peter Shirley; Donald P. Greenberg
In this paper we develop a computational model of visual adaptation for realistic image synthesis based on psychophysical experiments. The model captures the changes in threshold visibility, color appearance, visual acuity, and sensitivity over time that are caused by the visual system’s adaptation mechanisms. We use the model to display the results of global illumination simulations illuminated at intensities ranging from daylight down to starlight. The resulting images better capture the visual characteristics of scenes viewed over a wide range of illumination levels. Because the model is based on psychophysical data, it can be used to predict the visibility and appearance of scene features. This allows the model to be used as the basis of perceptually-based error metrics for limiting the precision of global illumination computations.
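As a rough illustration of how a threshold-based adaptation model can drive display mapping, the sketch below uses the simpler contrast-based scale factor of Ward (1994), not the rod-and-cone model developed in this paper; the luminance values in the example are assumed to be in cd/m^2.

```python
# Minimal sketch: threshold-matching luminance scale factor (Ward 1994),
# shown only to illustrate adaptation-driven display mapping; it is NOT the
# rod-and-cone adaptation model developed in this paper.

def ward_scale_factor(world_adaptation_lum, display_adaptation_lum):
    """Scale factor m such that L_display = m * L_world preserves threshold
    visibility, with adaptation luminances given in cd/m^2."""
    num = 1.219 + display_adaptation_lum ** 0.4
    den = 1.219 + world_adaptation_lum ** 0.4
    return (num / den) ** 2.5

# Example: map a starlit scene (~0.001 cd/m^2 adaptation) to a display whose
# adaptation level is about half of a 100 cd/m^2 maximum.
m = ward_scale_factor(world_adaptation_lum=1e-3, display_adaptation_lum=50.0)
# display_lum = m * world_lum
```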
International Conference on Computer Graphics and Interactive Techniques | 1998
Sumanta N. Pattanaik; James A. Ferwerda; Mark D. Fairchild; Donald P. Greenberg
In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRTs and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed, and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms.
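For readers unfamiliar with the tone reproduction problem, the following minimal sketch shows only the basic range-compression step, using the later global photographic operator of Reinhard et al. (2002) rather than the multiscale adaptation and spatial-vision model of this paper; the key value of 0.18 is a conventional default.

```python
import numpy as np

# Minimal global tone-mapping sketch (photographic operator, Reinhard et al. 2002).
# This is NOT the multiscale model of the paper; it only illustrates the basic
# range compression that any tone reproduction operator must perform.

def tone_map_global(luminance, key=0.18, eps=1e-6):
    """Map positive scene luminance (arbitrary units) to display range [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # geometric mean luminance
    scaled = key * luminance / log_avg                  # scale scene to the chosen "key"
    return scaled / (1.0 + scaled)                      # compress to [0, 1)

# Usage: ld = tone_map_global(hdr_luminance_array)
```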
International Conference on Computer Graphics and Interactive Techniques | 1999
Mahesh Ramasubramanian; Sumanta N. Pattanaik; Donald P. Greenberg
We introduce a new concept for accelerating realistic image synthesis algorithms. At the core of this procedure is a novel physical error metric that correctly predicts the perceptual threshold for detecting artifacts in scene features. Built into this metric is a computational model of the human visual system’s loss of sensitivity at high background illumination levels, high spatial frequencies, and high contrast levels (visual masking). An important feature of our model is that it handles the luminance-dependent processing and spatially-dependent processing independently. This allows us to precompute the expensive spatially-dependent component, making our model extremely efficient. We illustrate the utility of our procedure with global illumination algorithms used for realistic image synthesis. The expense of global illumination computations is many orders of magnitude higher than that of direct illumination computations and can greatly benefit from our perceptually based technique. Results show our method preserves visual quality while achieving significant computational gains in areas of images with high-frequency texture patterns, geometric details, and lighting variations.
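The structure described above, a precomputed spatially-dependent elevation map combined with a luminance-dependent threshold, can be sketched as follows; the tvi() helper and its constants are placeholders, not the paper's calibrated psychophysical model.

```python
import numpy as np

# Illustrative sketch of a perceptually based stopping criterion in the spirit
# of the metric described above: a spatially dependent threshold-elevation map
# is precomputed once (e.g. from a cheap direct-illumination image), and only
# the luminance-dependent part is re-evaluated as the solution refines.

def tvi(adaptation_luminance):
    # Placeholder threshold-vs-intensity curve: roughly Weber-like behavior
    # (threshold ~ 1% of luminance) with an absolute floor at low light.
    return np.maximum(0.01 * adaptation_luminance, 1e-3)

def error_below_threshold(estimated_error, luminance, elevation_map):
    """elevation_map: precomputed spatial masking elevation (>= 1) per pixel."""
    tolerance = tvi(luminance) * elevation_map
    return np.all(estimated_error <= tolerance)
```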
International Conference on Computer Graphics and Interactive Techniques | 2004
Paul E. Debevec; Erik Reinhard; Greg Ward; Sumanta N. Pattanaik
Current display devices can display only a limited range of contrast and colors, which is one of the main reasons that most image acquisition, processing, and display techniques use no more than eight bits per color channel. This course outlines recent advances in high-dynamic-range imaging, from capture to display, that remove this restriction, thereby enabling images to represent the color gamut and dynamic range of the original scene rather than the limited subspace imposed by current monitor technology. This hands-on course teaches how high-dynamic-range images can be captured, the file formats available to store them, and the algorithms required to prepare them for display on low-dynamic-range display devices. The trade-offs at each stage, from capture to display, are assessed, allowing attendees to make informed choices about data-capture techniques, file formats, and tone-reproduction operators. The course also covers recent advances in image-based lighting, in which HDR images can be used to illuminate CG objects and realistically integrate them into real-world scenes. Through practical examples taken from photography and the film industry, it shows the vast improvements in image fidelity afforded by high-dynamic-range imaging.
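A minimal sketch of the capture step covered by the course, assuming an already-linearized camera response and an illustrative hat-shaped weighting function:

```python
import numpy as np

# Minimal sketch of assembling an HDR radiance map from bracketed exposures,
# assuming an already-linearized camera response (real pipelines also recover
# the response curve, e.g. Debevec & Malik 1997).

def merge_exposures(images, exposure_times):
    """images: list of float arrays scaled to [0, 1]; exposure_times: seconds."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-range pixels
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)      # weighted-average radiance map
```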
International Conference on Computer Graphics and Interactive Techniques | 2000
Sumanta N. Pattanaik; Jack Tumblin; Hector Yee; Donald P. Greenberg
Human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. This paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user’s visual responses to those the user would experience in a real-world scene. Large, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. Asymmetric mechanisms govern these time-dependent adjustments, making adaptation to increased light much more rapid than adjustment to darkness. We derive a new tone reproduction operator that simulates these mechanisms. The operator accepts a stream of scene intensity frames and creates a stream of color display images. All operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. Kept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. We demonstrate its performance on both synthetically generated and acquired “real-world” scenes with large dynamic variations of illumination and contrast.
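The asymmetric, time-dependent behaviour described above can be sketched with a simple exponential filter on the adaptation level; the time constants below are placeholders, and the static tone-mapping step that would use the filtered value is omitted.

```python
import numpy as np

# Illustrative sketch of the time-dependent part only: the adaptation level
# tracks scene luminance through an exponential filter with a shorter time
# constant for light adaptation than for dark adaptation. The time constants
# are placeholders, not the calibrated values used in the paper.

def update_adaptation(adapt_lum, frame_lum, dt, tau_light=0.1, tau_dark=2.0):
    tau = tau_light if frame_lum > adapt_lum else tau_dark
    alpha = 1.0 - np.exp(-dt / tau)
    return adapt_lum + alpha * (frame_lum - adapt_lum)

# Per frame: adapt = update_adaptation(adapt, mean_scene_luminance, dt)
# then tone-map the frame using 'adapt' as the adaptation luminance.
```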
International Conference on Computer Graphics and Interactive Techniques | 1997
James A. Ferwerda; Peter Shirley; Sumanta N. Pattanaik; Donald P. Greenberg
In this paper we develop a computational model of visual masking based on psychophysical data. The model predicts how the presence of one visual pattern affects the detectability of another. The model allows us to choose texture patterns for computer graphics images that hide the effects of faceting, banding, aliasing, noise, and other visual artifacts produced by sources of error in graphics algorithms. We demonstrate the utility of the model by choosing a texture pattern to mask faceting artifacts caused by polygonal tessellation of a flat-shaded curved surface. The model predicts how changes in the contrast, spatial frequency, and orientation of the texture pattern, or changes in the tessellation of the surface, will alter the masking effect. The model is general and has uses in geometric modeling, realistic image synthesis, scientific visualization, image compression, and image-based rendering.
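A toy threshold-elevation curve of the kind used in masking models is sketched below; the exponent is a typical illustrative value, not the paper's fitted psychophysical data.

```python
# Illustrative sketch of a masking-style threshold elevation curve: below its
# own detection threshold a masker has no effect, above it the detection
# threshold for a test pattern rises roughly as a power of masker contrast.
# The exponent is an illustrative textbook-style value only.

def threshold_elevation(masker_contrast, masker_threshold, exponent=0.7):
    ratio = masker_contrast / masker_threshold
    return max(1.0, ratio ** exponent)

# A texture at 10x its own threshold elevates the test threshold by roughly
# 10**0.7 ~= 5x in this toy model.
print(threshold_elevation(0.10, 0.01))
```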
International Conference on Computer Graphics and Interactive Techniques | 1997
Donald P. Greenberg; Kenneth E. Torrance; Peter Shirley; James Arvo; Eric P. Lafortune; James A. Ferwerda; Bruce Walter; Ben Trumbore; Sumanta N. Pattanaik; Sing-Choong Foo
Our goal is to develop physically based lighting models and perceptually based rendering procedures for computer graphics that will produce synthetic images that are visually and measurably indistinguishable from real-world images. Fidelity of the physical simulation is of primary concern. Our research framework is subdivided into three subsections: the local light reflection model, the energy transport simulation, and the visual display algorithms. The first two subsections are physically based, and the last is perceptually based. We emphasize the comparisons between simulations and actual measurements, the difficulties encountered, and the need to utilize the vast amount of psychophysical research already conducted. Future research directions are enumerated. We hope that the results of this research will help establish a more fundamental, scientific approach for future rendering algorithms. This presentation describes a chronology of past research in global illumination and how parts of our new system are currently being developed.
IEEE Computer Graphics and Applications | 2005
Ruifeng Xu; Sumanta N. Pattanaik; Charles E. Hughes
The raw size of a high-dynamic-range (HDR) image brings about problems in storage and transmission. Many bytes are wasted on data redundancy and perceptually unimportant information. To address this problem, researchers have proposed preliminary encodings such as RGBE/XYZE, OpenEXR, and LogLuv. HDR images can have a dynamic range of more than four orders of magnitude, while conventional 8-bit images retain only two orders of magnitude of dynamic range. This distinction between an HDR image and a conventional image leads to difficulties in using most existing image compressors. JPEG 2000 supports up to 16-bit integer data, so it can already provide image compression for most HDR images. In this article, we propose a JPEG 2000-based lossy image compression scheme for HDR images of all dynamic ranges. We show how to fit HDR encoding into a JPEG 2000 encoder to meet the HDR encoding requirement. To achieve the goal of minimum error in the logarithm domain, we map the logarithm of each pixel value into integer values and then send the results to a JPEG 2000 encoder. Our approach is basically a wavelet-based HDR still-image encoding method.
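The log-domain mapping described above can be sketched as follows; the per-image min/max normalization is an assumption made for this illustration, not necessarily the exact mapping used in the article.

```python
import numpy as np

# Sketch of the pre-encoding step described above: take the logarithm of each
# (positive) HDR pixel value and quantize it uniformly into the 16-bit integer
# range accepted by a JPEG 2000 encoder.

def hdr_to_uint16_log(hdr, eps=1e-6):
    log_img = np.log(np.maximum(hdr, eps))
    lo, hi = float(log_img.min()), float(log_img.max())
    scale = max(hi - lo, eps)
    q = np.round((log_img - lo) / scale * 65535.0).astype(np.uint16)
    return q, (lo, scale)            # keep (lo, scale) to invert after decoding

def uint16_log_to_hdr(q, lo_scale):
    lo, scale = lo_scale
    return np.exp(q.astype(np.float64) / 65535.0 * scale + lo)
```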
IEEE Transactions on Visualization and Computer Graphics | 2005
Jaroslav Krivánek; Pascal Gautron; Sumanta N. Pattanaik; Kadi Bouatouch
In this paper, we present a ray-tracing-based method for accelerated global illumination computation in scenes with low-frequency glossy BRDFs. The method is based on sparse sampling, caching, and interpolating radiance on glossy surfaces. In particular, we extend the irradiance caching scheme proposed by Ward et al. (1988) to cache and interpolate directional incoming radiance instead of irradiance. The incoming radiance at a point is represented by a vector of coefficients with respect to a hemispherical or spherical basis. The surfaces suitable for interpolation are selected automatically according to the roughness of their BRDF. We also propose a novel method for computing translational radiance gradients at a point.
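A minimal sketch of the caching idea, storing incoming radiance as a short vector of basis coefficients, is given below using only the first two real spherical-harmonics bands; the hemispherical basis variant and the gradient computation from the paper are omitted.

```python
import numpy as np

# Minimal sketch of the caching idea: project sampled incoming radiance onto a
# low-order spherical-harmonics basis so that a cache record stores just a
# short coefficient vector per surface point.

def sh_basis(d):
    """Real SH basis values for bands l = 0, 1 at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([0.282095,            # Y_0^0
                     0.488603 * y,        # Y_1^-1
                     0.488603 * z,        # Y_1^0
                     0.488603 * x])       # Y_1^1

def project_incoming_radiance(sample_dirs, sample_radiance):
    """Monte Carlo projection, assuming directions drawn uniformly on the
    hemisphere above the surface (pdf = 1 / (2*pi))."""
    coeffs = np.zeros(4)
    for d, L in zip(sample_dirs, sample_radiance):
        coeffs += L * sh_basis(d)
    return coeffs * (2.0 * np.pi / len(sample_dirs))

def eval_radiance(coeffs, d):
    return float(coeffs @ sh_basis(d))    # reconstruct radiance in direction d
```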
Eurographics Symposium on Rendering Techniques | 2004
Pascal Gautron; Jaroslav Krivánek; Sumanta N. Pattanaik; Kadi Bouatouch
This paper presents a new set of hemispherical basis functions dedicated to hemispherical data representation. These functions are derived from associated Legendre polynomials. We demonstrate the usefulness of this basis for representing surface reflectance functions, for rendering with environment maps, and for efficient global illumination computation using radiance caching. We show that our basis is more appropriate for hemispherical functions than spherical harmonics. This basis can be efficiently combined with spherical harmonics in applications involving both hemispherical and spherical data.
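The sketch below shows one way hemisphere-supported functions can be built from associated Legendre polynomials, by remapping cos θ from the hemisphere onto the full Legendre domain; the normalization constants are omitted, and this illustrative shift is not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.special import lpmv

# Sketch of one ingredient of a hemispherical basis built from associated
# Legendre polynomials: remap cos(theta) in [0, 1] (upper hemisphere) onto the
# full Legendre domain [-1, 1] before evaluating P_l^m. Normalization constants
# are omitted; treat this as an illustration, not the published basis.

def shifted_legendre(l, m, cos_theta):
    """Associated Legendre P_l^m evaluated on the hemisphere via the shift
    u = 2*cos(theta) - 1, so the hemisphere boundary (theta = pi/2) maps to u = -1."""
    u = 2.0 * np.asarray(cos_theta) - 1.0
    return lpmv(m, l, u)

def hemi_basis_unnormalized(l, m, theta, phi):
    """Unnormalized hemispherical basis term (cosine azimuthal part, m >= 0)."""
    return shifted_legendre(l, abs(m), np.cos(theta)) * np.cos(abs(m) * phi)
```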