
Publication


Featured research published by Srinivasa G. Narasimhan.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Contrast restoration of weather degraded images

Srinivasa G. Narasimhan; Shree K. Nayar

Images of outdoor scenes captured in bad weather suffer from poor contrast. Under bad weather conditions, the light reaching a camera is severely scattered by the atmosphere. The resulting decay in contrast varies across the scene and is exponential in the depths of scene points. Therefore, traditional space invariant image processing techniques are not sufficient to remove weather effects from images. We present a physics-based model that describes the appearances of scenes in uniform bad weather conditions. Changes in intensities of scene points under different weather conditions provide simple constraints to detect depth discontinuities in the scene and also to compute scene structure. Then, a fast algorithm to restore scene contrast is presented. In contrast to previous techniques, our weather removal algorithm does not require any a priori scene structure, distributions of scene reflectances, or detailed knowledge about the particular weather condition. All the methods described in this paper are effective under a wide range of weather conditions including haze, mist, fog, and conditions arising due to other aerosols. Further, our methods can be applied to gray-scale, RGB color, multispectral and even IR images. We also extend our techniques to restore contrast of scenes with moving objects, captured using a video camera.
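
To make the restoration step concrete, here is a minimal sketch in the spirit of the paper, not its full pipeline (which also recovers structure from image pairs). It inverts the standard attenuation-plus-airlight model and assumes the scaled depth beta*d and the horizon (sky) brightness E_inf are already known; the variable names are illustrative.

    # Invert E = J*exp(-beta*d) + E_inf*(1 - exp(-beta*d)) for the scene radiance J.
    import numpy as np

    def restore_contrast(E, beta_d, E_inf, t_min=0.05):
        t = np.exp(-beta_d)              # direct transmission at each pixel
        t = np.maximum(t, t_min)         # avoid amplifying noise at large depths
        J = (E - E_inf * (1.0 - t)) / t  # restored scene radiance
        return np.clip(J, 0.0, 1.0)

    # toy check: synthesize a hazy image with depth increasing left to right
    h, w = 4, 6
    beta_d = np.tile(np.linspace(0.1, 2.0, w), (h, 1))
    J_true = np.random.rand(h, w)
    E = J_true * np.exp(-beta_d) + 0.8 * (1 - np.exp(-beta_d))
    print(np.allclose(restore_contrast(E, beta_d, 0.8, t_min=0.0), J_true))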


International Journal of Computer Vision | 2002

Vision and the Atmosphere

Srinivasa G. Narasimhan; Shree K. Nayar

Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from “bad” weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics, and identify effects caused by bad weather that can be turned to our advantage. Since the atmosphere modulates the information carried from a scene point to the observer, it can be viewed as a mechanism of visual information coding. We exploit two fundamental scattering models and develop methods for recovering pertinent scene properties, such as three-dimensional structure, from one or two images taken under poor weather conditions. Next, we model the chromatic effects of atmospheric scattering and verify the model for fog and haze. Based on this chromatic model, we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints, we develop algorithms for computing fog or haze color, segmenting the scene by depth, extracting three-dimensional structure, and recovering “clear day” scene colors from two or more images taken under different but unknown weather conditions.
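
As one concrete instance of the geometric constraints mentioned above, here is a hedged sketch of estimating the airlight (fog or haze) color from two images taken under different weather. Under the dichromatic atmospheric model, the two observed colors of a scene point span a plane containing the airlight direction, so the airlight color is the direction most orthogonal to all the per-point plane normals. The synthetic data below illustrates the constraint; this is not the paper's complete algorithm.

    import numpy as np

    def estimate_airlight_color(E1, E2):
        """E1, E2: (N,3) RGB colors of N scene points under two weather conditions."""
        normals = np.cross(E1, E2)                            # dichromatic-plane normal per point
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        _, _, Vt = np.linalg.svd(normals)                     # airlight is orthogonal to all normals,
        A_hat = Vt[-1]                                        # i.e. the smallest right singular vector
        return A_hat * np.sign(A_hat.sum())                   # pick the physically positive sign

    rng = np.random.default_rng(1)
    A = np.array([0.55, 0.65, 0.85]); A /= np.linalg.norm(A)  # true airlight direction
    D = rng.random((100, 3))                                  # per-point direct-transmission colors
    print(estimate_airlight_color(0.8 * D + 0.2 * A, 0.3 * D + 0.7 * A))  # ~ A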


Computer Vision and Pattern Recognition | 2001

Instant dehazing of images using polarization

Yoav Y. Schechner; Srinivasa G. Narasimhan; Shree K. Nayar

We present an approach to easily remove the effects of haze from images. It is based on the fact that airlight scattered by atmospheric particles is usually partially polarized. Polarization filtering alone cannot remove haze effects, except in restricted situations. Our method, however, works under a wide range of atmospheric and viewing conditions. We analyze the image formation process, taking into account the polarization effects of atmospheric scattering. We then invert the process to enable the removal of haze from images. The method can be used with as few as two images taken through a polarizer at different orientations. It works instantly, without relying on changes in weather conditions. We present experimental results of complete dehazing under conditions that are far from ideal for polarization filtering, obtaining a large improvement in scene contrast and correction of color. As a by-product, the method also yields a range (depth) map of the scene and information about the properties of the atmospheric particles.
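
The inversion admits a compact closed form; the sketch below follows it under the paper's assumptions. I_max and I_min are frames taken at the polarizer orientations of strongest and weakest airlight, and the degree of airlight polarization p and the airlight at the horizon A_inf are assumed to have been estimated beforehand (e.g. from sky pixels).

    import numpy as np

    def dehaze_polarization(I_max, I_min, p, A_inf, t_min=0.05):
        """I_max, I_min: float arrays of the two polarizer images."""
        I_total = I_max + I_min                 # total intensity (as with no polarizer)
        A = (I_max - I_min) / p                 # per-pixel airlight estimate
        t = np.maximum(1.0 - A / A_inf, t_min)  # transmission, since A = A_inf*(1 - t)
        L = (I_total - A) / t                   # dehazed scene radiance
        return L, -np.log(t)                    # radiance, plus the by-product range map beta*d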


International Journal of Computer Vision | 2010

Analysis of Rain and Snow in Frequency Space

Peter Barnum; Srinivasa G. Narasimhan; Takeo Kanade

Dynamic weather such as rain and snow causes complex spatio-temporal intensity fluctuations in videos. Such fluctuations can adversely impact vision systems that rely on small image features for tracking, object detection and recognition. While these effects appear chaotic in space and time, we show that dynamic weather has a predictable global effect in frequency space. For this, we first develop a model of the shape and appearance of a single rain or snow streak in image space. Detecting individual streaks is difficult even with an accurate appearance model, so we combine the streak model with the statistical characteristics of rain and snow to create a model of the overall effect of dynamic weather in frequency space. This model is then fit to a video and used to detect rain or snow streaks, first in frequency space; the detection result is then transferred to image space. Once detected, the amount of rain or snow can be reduced or increased. We demonstrate that our frequency analysis allows for greater accuracy in the removal of dynamic weather and in the performance of feature extraction than previous pixel-based or patch-based methods. We also show that, unlike previous techniques, our approach is effective for videos with both scene and camera motion.
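
A loose illustration of the mechanism (not the paper's fitted streak model): because dynamic weather concentrates energy in predictable regions of a video's spatio-temporal spectrum, it can be attenuated there and the result transformed back to image space. The mask below is a hypothetical stand-in for the model-derived weather spectrum.

    import numpy as np

    def suppress_dynamic_weather(video, keep_mask):
        """video: (T,H,W) float array; keep_mask: (T,H,W) in [0,1], 1 = keep."""
        spectrum = np.fft.fftn(video)                # 3D spatio-temporal spectrum
        cleaned = np.fft.ifftn(spectrum * keep_mask) # attenuate weather frequencies
        return np.real(cleaned)                      # back to image space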


European Conference on Computer Vision | 2010

Detecting ground shadows in outdoor consumer photographs

Jean-François Lalonde; Alexei A. Efros; Srinivasa G. Narasimhan

Detecting shadows in images can significantly improve the performance of several vision tasks such as object detection and tracking. Recent approaches have mainly used illumination invariants, which can fail severely when image quality is poor, as is the case for most consumer-grade photographs, such as those on Google or Flickr. We present a practical algorithm to automatically detect shadows cast by objects onto the ground from a single consumer photograph. Our key hypothesis is that the range of materials constituting the ground in outdoor scenes is relatively limited, most commonly asphalt, brick, stone, mud, grass, or concrete. As a result, the appearance of shadows on the ground varies less than that of general shadows and can therefore be learned from a labelled set of images. Our detector consists of a three-tier process: (a) training a decision tree classifier on a set of shadow-sensitive features computed around each image edge, (b) a CRF-based optimization to group detected shadow edges into coherent shadow contours, and (c) incorporating any existing classifier that is specifically trained to detect ground regions in images. Our results demonstrate good detection accuracy (85%) on several challenging images. Since most objects of interest to vision applications (such as pedestrians, vehicles, and signs) are attached to the ground, we believe that our detector can find wide applicability.
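
A minimal sketch of the first tier only, assuming scikit-learn is available. The per-edge features below are random stand-ins for the paper's shadow-sensitive features, and the CRF grouping and ground-classifier tiers are omitted.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((500, 8))            # 8 features per edge (hypothetical stand-ins)
    y_train = rng.integers(0, 2, 500)         # 1 = shadow edge, 0 = not (from labelled images)

    clf = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)
    edge_features = rng.random((20, 8))       # features for the edges of a new photograph
    shadow_edges = clf.predict(edge_features) # tier (a); tiers (b) and (c) would refine this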


International Conference on Computer Vision | 2005

Structured light in scattering media

Srinivasa G. Narasimhan; Shree K. Nayar; Bo Sun; Sanjeev J. Koppal

Virtually all structured light methods assume that the scene and the sources are immersed in pure air and that light is neither scattered nor absorbed. Recently, however, structured lighting has found growing application in underwater and aerial imaging, where scattering effects cannot be ignored. In this paper, we present a comprehensive analysis of two representative methods - light stripe range scanning and photometric stereo - in the presence of scattering. For both methods, we derive physical models for the appearances of a surface immersed in a scattering medium. Based on these models, we present results on (a) the condition for object detectability in light striping and (b) the number of sources required for photometric stereo. In both cases, we demonstrate that while traditional methods fail when scattering is significant, our methods accurately recover the scene (depths, normals, albedos) as well as the properties of the medium. These results are in turn used to restore the appearances of scenes as if they were captured in clear air. Although we have focused on light striping and photometric stereo, our approach can also be extended to other methods such as grid coding, gated and active polarization imaging.
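
A hedged numerical sketch of the kind of image-formation model analyzed for light striping: a pixel viewing the illuminated stripe records the surface return, attenuated along both the source and camera paths, plus light backscattered by the medium along the camera ray. The fixed source distance and isotropic phase function are simplifications, not the paper's full geometry.

    import numpy as np

    def stripe_pixel(L_surf, d_src, d_cam, sigma, n=200):
        """Surface radiance L_surf seen through a medium with extinction sigma."""
        surface = L_surf * np.exp(-sigma * (d_src + d_cam))  # attenuated surface term
        x = np.linspace(0.0, d_cam, n)                       # samples along the camera ray
        dx = x[1] - x[0]
        # isotropic single scattering of the attenuated stripe light, summed along the ray
        backscatter = np.sum(sigma * np.exp(-sigma * (d_src + x))) * dx / (4 * np.pi)
        return surface + backscatter

    # the stripe on the surface is detectable roughly while the first term dominates:
    print(stripe_pixel(1.0, d_src=1.0, d_cam=1.0, sigma=0.5))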


International Conference on Computer Graphics and Interactive Techniques | 2005

A practical analytic single scattering model for real time rendering

Bo Sun; Ravi Ramamoorthi; Srinivasa G. Narasimhan; Shree K. Nayar

We consider real-time rendering of scenes in participating media, capturing the effects of light scattering in fog, mist and haze. While a number of sophisticated approaches based on Monte Carlo and finite element simulation have been developed, those methods do not work at interactive rates. The most common real-time methods are essentially simple variants of the OpenGL fog model. While easy to use and specify, that model excludes many important qualitative effects like glows around light sources, the impact of volumetric scattering on the appearance of surfaces such as the diffusing of glossy highlights, and the appearance under complex lighting such as environment maps. In this paper, we present an alternative physically based approach that captures these effects while maintaining real-time performance and the ease-of-use of the OpenGL fog model. Our method is based on an explicit analytic integration of the single scattering light transport equations for an isotropic point light source in a homogeneous participating medium. We can implement the model in modern programmable graphics hardware using a few small numerical lookup tables stored as texture maps. Our model can also be easily adapted to generate the appearances of materials with arbitrary BRDFs, environment map lighting, and precomputed radiance transfer methods, in the presence of participating media. Hence, our techniques can be widely used in real-time rendering.
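
For orientation, here is a brute-force numerical version of the quantity the paper evaluates analytically: the single-scattered airlight reaching the camera along a view ray from an isotropic point source of intensity I0 in a homogeneous medium with extinction coefficient sigma. An isotropic phase function is assumed for simplicity; the paper's contribution is precisely to replace this integration with a closed form plus small lookup tables.

    import numpy as np

    def airlight_numeric(I0, sigma, d_light, view_len, n=2000):
        """d_light(x): distance from point x on the view ray to the light source."""
        x = np.linspace(1e-4, view_len, n)
        dx = x[1] - x[0]
        d = d_light(x)
        # source radiance attenuated to x, scattered toward the camera, attenuated back
        f = I0 * np.exp(-sigma * d) / d**2 * (sigma / (4 * np.pi)) * np.exp(-sigma * x)
        return np.sum(f) * dx

    # light source 1 unit above the camera, looking horizontally for 20 units:
    print(airlight_numeric(10.0, 0.3, lambda x: np.sqrt(x**2 + 1.0), view_len=20.0))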


Computer Vision and Pattern Recognition | 2001

Removing weather effects from monochrome images

Srinivasa G. Narasimhan; Shree K. Nayar

Images of outdoor scenes captured in bad weather suffer from poor contrast. Under bad weather conditions, the light reaching a camera is severely scattered by the atmosphere. The resulting decay in contrast varies across the scene and is exponential in the depths of scene points. Therefore, traditional space invariant image processing techniques are not sufficient to remove weather effects from images. In this paper, we present a fast physics-based method to compute scene structure and hence restore contrast of the scene from two or more images taken in bad weather. In contrast to previous techniques, our method does not require any a priori weather-specific or scene information, and is effective under a wide range of weather conditions including haze, mist, fog and other aerosols. Further, our method can be applied to gray-scale, RGB color, multi-spectral and even IR images. We also extend the technique to restore contrast of scenes with moving objects, captured using a video camera.
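
A minimal sketch of the structure computation under the overcast-sky model used in this line of work: given the horizon brightnesses E_inf1 and E_inf2 of the two images (e.g. sampled from sky pixels), the scaled depth (beta2 - beta1)*d of every pixel follows from the two normalized images, with no knowledge of scene reflectance. Variable names are illustrative.

    import numpy as np

    def scaled_depth(E1, E2, E_inf1, E_inf2, eps=1e-6):
        r1 = E1 / E_inf1 - 1.0   # equals (rho - 1) * exp(-beta1 * d)
        r2 = E2 / E_inf2 - 1.0   # equals (rho - 1) * exp(-beta2 * d)
        # log-ratio cancels the unknown reflectance rho, leaving (beta2 - beta1)*d
        return np.log(np.abs(r1) + eps) - np.log(np.abs(r2) + eps)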


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Enhancing resolution along multiple imaging dimensions using assorted pixels

Srinivasa G. Narasimhan; Shree K. Nayar

Multisampled imaging is a general framework for using pixels on an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green, and blue spectral filters found in most solid-state color cameras is one example of multisampled imaging. We briefly describe how multisampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms. Typically, this results in a substantial reduction of resolution (and, hence, image quality). One can extract significantly greater resolution in each dimension by noting that the light fields associated with real scenes have enormous redundancies within them, causing different dimensions to be highly correlated. Hence, multisampled images can be better interpolated using local structural models that are learned offline from a diverse set of training images. The specific type of structural models we use are based on polynomial functions of measured image intensities. They are very effective as well as computationally efficient. We demonstrate the benefits of structural interpolation using three specific applications. These are 1) traditional color imaging with a mosaic of color filters, 2) high dynamic range monochrome imaging using a mosaic of exposure filters, and 3) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.
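
A minimal sketch of structural interpolation in the spirit of the paper: learn a polynomial map (degree one here, i.e. affine) from a pixel's mosaic neighborhood to a missing value by least squares over training data, then apply it in place of fixed interpolation. The 5x5 neighborhood, the single target value, and the synthetic training pairs are illustrative choices, not the paper's exact configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((10000, 25))                          # flattened 5x5 mosaic neighborhoods
    w_true = rng.random(25); w_true /= w_true.sum()      # hidden structure to be learned
    y = X @ w_true + 0.01 * rng.standard_normal(10000)   # "true" missing values

    X1 = np.hstack([X, np.ones((len(X), 1))])            # add a bias column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)        # learned structural model

    patch = rng.random(25)                               # a new mosaic neighborhood
    estimate = np.append(patch, 1.0) @ coef              # structural interpolation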


International Conference on Computer Graphics and Interactive Techniques | 2006

Acquiring scattering properties of participating media by dilution

Srinivasa G. Narasimhan; Mohit Gupta; Craig Donner; Ravi Ramamoorthi; Shree K. Nayar; Henrik Wann Jensen

The visual world around us displays a rich set of volumetric effects due to participating media. The appearance of these media is governed by several physical properties such as particle densities, shapes and sizes, which must be input (directly or indirectly) to a rendering algorithm to generate realistic images. While there has been significant progress in developing rendering techniques (for instance, volumetric Monte Carlo methods and analytic approximations), there are very few methods that measure or estimate these properties for media that are of relevance to computer graphics. In this paper, we present a simple device and technique for robustly estimating the properties of a broad class of participating media that can be either (a) diluted in water such as juices, beverages, paints and cleaning supplies, or (b) dissolved in water such as powders and sugar/salt crystals, or (c) suspended in water such as impurities. The key idea is to dilute the concentrations of the media so that single scattering effects dominate and multiple scattering becomes negligible, leading to a simple and robust estimation algorithm. Furthermore, unlike previous approaches that require complicated or separate measurement setups for different types or properties of media, our method and setup can be used to measure media with a complete range of absorption and scattering properties from a single HDR photograph. Once the parameters of the diluted medium are estimated, a volumetric Monte Carlo technique may be used to create renderings of any medium concentration and with multiple scattering. We have measured the scattering parameters of forty commonly found materials, which can be used immediately by the computer graphics community. We can also create realistic images of combinations or mixtures of the original measured materials, thus giving the user wide flexibility in making realistic images of participating media.
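
A hedged sketch of the dilution idea for the simplest property, the extinction coefficient: at low concentrations attenuation follows Beer-Lambert, -ln(T) = c * sigma_unit * L, so a linear fit across dilution levels gives the per-unit-concentration coefficient sigma_unit, which then scales to any concentration. The paper additionally separates absorption from scattering and estimates the phase function; that is omitted here, and the numbers below are synthetic.

    import numpy as np

    def extinction_from_dilutions(concentrations, transmissions, path_len):
        """Least-squares slope (through the origin) of -ln(T) against c * path_len."""
        y = -np.log(np.asarray(transmissions))
        x = np.asarray(concentrations) * path_len
        return np.sum(x * y) / np.sum(x * x)

    c = np.array([0.01, 0.02, 0.04, 0.08])      # dilution concentrations
    T = np.exp(-c * 3.0 * 0.1)                  # synthetic readings: sigma_unit=3, L=0.1
    print(extinction_from_dilutions(c, T, 0.1)) # ~ 3.0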

Collaboration


Dive into Srinivasa G. Narasimhan's collaborations.

Top Co-Authors

Takeo Kanade

Carnegie Mellon University

Yuandong Tian

Carnegie Mellon University

Supreeth Achar

Carnegie Mellon University

Minh Vo

The Catholic University of America

Peter Barnum

Carnegie Mellon University
