Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Shree K. Nayar is active.

Publication


Featured research published by Shree K. Nayar.


International Journal of Computer Vision | 1995

Visual learning and recognition of 3-D objects from appearance

Hiroshi Murase; Shree K. Nayar

The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with fewer than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
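The eigenspace pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration under simplifying assumptions: the function names are mine, and matching is done by nearest neighbor against discrete projected samples, whereas the paper interpolates continuous appearance manifolds through them.

```python
import numpy as np

def build_eigenspace(images, k):
    """Compress a set of vectorized images into a k-dimensional eigenspace.

    images: (n_images, n_pixels) array. Returns (mean, basis), where basis
    holds the top-k principal directions as rows.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered image set; rows of vt span pixel space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, basis):
    """Project a vectorized image onto the eigenspace."""
    return basis @ (image - mean)

def recognize(image, mean, basis, manifolds):
    """Label the input by the manifold whose nearest sample point lies
    closest to the projection (a discrete stand-in for the paper's
    continuous manifolds, whose parametrization also yields pose)."""
    p = project(image, mean, basis)
    best_label, best_d = None, np.inf
    for label, points in manifolds.items():
        d = np.linalg.norm(points - p, axis=1).min()
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```

With synthetic "objects" (a base appearance plus small per-image variation), projecting a new view of object A lands near A's manifold and recognition succeeds.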


ACM Transactions on Graphics | 1999

Reflectance and texture of real-world surfaces

Kristin J. Dana; Bram van Ginneken; Shree K. Nayar; Jan J. Koenderink

In this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on the geometry of imaging conditions. We discuss a new texture representation called the BTF (bidirectional texture function) which captures the variation in texture with illumination and viewing direction. We present a BTF database with image textures from over 60 different samples, each observed with over 200 different combinations of viewing and illumination directions. We describe the methods involved in collecting the database as well as the importance and uniqueness of this database for computer graphics. A related quantity to the BTF is the familiar BRDF (bidirectional reflectance distribution function). The measurement methods involved in the BTF database are conducive to simultaneous measurement of the BRDF. Accordingly, we also present a BRDF database with reflectance measurements for over 60 different samples, each observed with over 200 different combinations of viewing and illumination directions. Both of these unique databases are publicly available and have important implications for computer graphics.


International Conference on Computer Vision | 2009

Attribute and simile classifiers for face verification

Neeraj Kumar; Alexander C. Berg; Peter N. Belhumeur; Shree K. Nayar

We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1994

Shape from focus

Shree K. Nayar; Yasuo Nakagawa

The shape from focus method presented here uses different focus levels to obtain a sequence of object images. The sum-modified-Laplacian (SML) operator is developed to provide local measures of the quality of image focus. The operator is applied to the image sequence to determine a set of focus measures at each image point. A depth estimation algorithm interpolates a small number of focus measure values to obtain accurate depth estimates. A fully automated shape from focus system has been implemented using an optical microscope and tested on a variety of industrial samples. Experimental results are presented that demonstrate the accuracy and robustness of the proposed method. These results suggest shape from focus to be an effective approach for a variety of challenging visual inspection tasks.
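A compact numpy sketch of the SML focus measure and depth-from-focus step follows. It is illustrative only: it picks the argmax focus level per pixel, while the paper additionally interpolates a Gaussian through neighboring focus measures for sub-level depth accuracy, and boundary handling here is a simple circular shift.

```python
import numpy as np

def sum_modified_laplacian(img, window=1):
    """Modified Laplacian |I_xx| + |I_yy|, summed over a small window.

    Taking absolute values before summing avoids the cancellation of
    opposite-sign second derivatives that weakens the plain Laplacian
    as a focus measure.
    """
    ixx = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    iyy = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ml = ixx + iyy
    # Sum over a (2*window+1)^2 neighborhood around each pixel.
    sml = np.zeros_like(ml)
    for dx in range(-window, window + 1):
        for dy in range(-window, window + 1):
            sml += np.roll(np.roll(ml, dx, axis=0), dy, axis=1)
    return sml

def depth_from_focus(stack):
    """stack: (n_levels, H, W) images at increasing focus settings.
    Returns the per-pixel index of the sharpest level (argmax of SML)."""
    measures = np.stack([sum_modified_laplacian(f) for f in stack])
    return measures.argmax(axis=0)
```

On a synthetic stack where only the middle slice contains high-frequency detail, the recovered depth index is the middle level everywhere.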


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Contrast restoration of weather degraded images

Srinivasa G. Narasimhan; Shree K. Nayar

Images of outdoor scenes captured in bad weather suffer from poor contrast. Under bad weather conditions, the light reaching a camera is severely scattered by the atmosphere. The resulting decay in contrast varies across the scene and is exponential in the depths of scene points. Therefore, traditional space invariant image processing techniques are not sufficient to remove weather effects from images. We present a physics-based model that describes the appearances of scenes in uniform bad weather conditions. Changes in intensities of scene points under different weather conditions provide simple constraints to detect depth discontinuities in the scene and also to compute scene structure. Then, a fast algorithm to restore scene contrast is presented. In contrast to previous techniques, our weather removal algorithm does not require any a priori scene structure, distributions of scene reflectances, or detailed knowledge about the particular weather condition. All the methods described in this paper are effective under a wide range of weather conditions including haze, mist, fog, and conditions arising due to other aerosols. Further, our methods can be applied to gray scale, RGB color, multispectral and even IR images. We also extend our techniques to restore contrast of scenes with moving objects, captured using a video camera.


Computer Vision and Pattern Recognition | 1999

Radiometric self calibration

Tomoo Mitsunaga; Shree K. Nayar

A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.
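The fusion step at the end of the pipeline can be sketched as follows. Note the hedge: this assumes the response function has already been recovered (i.e., the images are linear in irradiance) and the exposure ratios are known; the paper's actual contribution is estimating both, with the response modeled as a polynomial. The hat-shaped weighting is a common HDR-fusion choice, not the paper's exact scheme.

```python
import numpy as np

def fuse_exposures(images, ratios, saturation=0.98):
    """Fuse linearized images taken at known relative exposures into one
    radiance map. Each image is divided by its exposure ratio and the
    per-pixel estimates are averaged, down-weighting pixels near the
    extremes of the sensor's range (a hat-shaped weight)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, r in zip(images, ratios):
        w = 1.0 - np.abs(2.0 * img - 1.0)      # peaks at mid-exposure
        w = np.where(img > saturation, 0.0, w)  # reject saturated pixels
        num += w * img / r
        den += w
    return num / np.maximum(den, 1e-8)
```

A pixel saturated in the long exposure is recovered from the short one, extending the measurable radiance range beyond a single image's.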


Computer Vision and Pattern Recognition | 1997

Catadioptric omnidirectional camera

Shree K. Nayar

Conventional video cameras have limited fields of view that make them restrictive in a variety of vision applications. There are several ways to enhance the field of view of an imaging system. However, the entire imaging system must have a single effective viewpoint to enable the generation of pure perspective images from a sensed image. A new camera with a hemispherical field of view is presented. Two such cameras can be placed back-to-back, without violating the single viewpoint constraint, to arrive at a truly omnidirectional sensor. Results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification. The paper concludes with a discussion on the spatial resolution of the proposed camera.


International Journal of Computer Vision | 1999

A Theory of Single-Viewpoint Catadioptric Image Formation

Simon Baker; Shree K. Nayar

Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the cameras used to construct it. Moreover, we include detailed analysis of the defocus blur caused by the use of a curved mirror in a catadioptric sensor.


International Journal of Computer Vision | 2002

Vision and the Atmosphere

Srinivasa G. Narasimhan; Shree K. Nayar

Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from "bad" weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics, and identify effects caused by bad weather that can be turned to our advantage. Since the atmosphere modulates the information carried from a scene point to the observer, it can be viewed as a mechanism of visual information coding. We exploit two fundamental scattering models and develop methods for recovering pertinent scene properties, such as three-dimensional structure, from one or two images taken under poor weather conditions. Next, we model the chromatic effects of atmospheric scattering and verify the model for fog and haze. Based on this chromatic model we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints we develop algorithms for computing fog or haze color, depth segmentation, extracting three-dimensional structure, and recovering "clear day" scene colors, from two or more images taken under different but unknown weather conditions.


Computer Vision and Pattern Recognition | 2000

High dynamic range imaging: spatially varying pixel exposures

Shree K. Nayar; Tomoo Mitsunaga

While real scenes produce a wide range of brightness variations, vision systems use low dynamic range image detectors that typically provide 8 bits of brightness data at each pixel. The resulting low quality images greatly limit what vision can accomplish today. This paper proposes a very simple method for significantly enhancing the dynamic range of virtually any imaging system. The basic principle is to simultaneously sample the spatial and exposure dimensions of image irradiance. One of several ways to achieve this is by placing an optical mask adjacent to a conventional image detector array. The mask has a pattern with spatially varying transmittance, thereby giving adjacent pixels on the detector different exposures to the scene. The captured image is mapped to a high dynamic range image using an efficient image reconstruction algorithm. The end result is an imaging system that can measure a very wide range of scene radiance and produce a substantially larger number of brightness levels, with a slight reduction in spatial resolution. We conclude with several examples of high dynamic range images computed using spatially varying pixel exposures.
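The spatially varying exposure (SVE) idea can be simulated in a few lines. The mask pattern and the reconstruction below are illustrative assumptions: a repeating two-level exposure mask, with saturated pixels normalized out and filled from valid neighbors in a 3x3 window, which is a crude stand-in for the paper's reconstruction algorithm.

```python
import numpy as np

def sve_capture(radiance, mask, full_well=1.0):
    """Simulate capture through a repeating exposure mask: each pixel's
    effective exposure is scaled by the mask, then clipped at saturation."""
    return np.minimum(radiance * mask, full_well)

def sve_reconstruct(captured, mask, full_well=1.0):
    """Recover a high-dynamic-range estimate: normalize each pixel by its
    mask exposure, discard saturated pixels, and fill them from the mean
    of valid neighbors -- trading a little spatial resolution for a much
    wider measurable radiance range, as in the paper."""
    valid = captured < full_well
    est = np.where(valid, captured / mask, 0.0)
    out = est.copy()
    for i, j in zip(*np.where(~valid)):
        nb = est[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        vb = valid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        if vb.any():
            out[i, j] = nb[vb].mean()
    return out
```

For a scene twice as bright as the sensor's full well, pixels under the high-transmittance mask cells saturate, but their low-transmittance neighbors still measure the radiance, and the reconstruction recovers it everywhere.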

Collaboration


Dive into Shree K. Nayar's collaborations.
