
Publication


Featured research published by Shishir Shah.


Pattern Recognition | 1996

Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation

Shishir Shah; Jake K. Aggarwal

This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and by assuming pin-hole camera behavior near the image center, i.e., negligible distortion at the distortion center. Our method employs a non-linear transformation between points in the world coordinate system and their corresponding locations on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by applying the procedure to real images.
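As a rough illustration of the kind of radial distortion model such a calibration estimates, the sketch below fits polynomial distortion coefficients to synthetic radius measurements with nonlinear least squares. It is an assumption-level example with made-up parameter names and data, not the paper's Lagrangian minimization procedure.

```python
# Minimal sketch: fit a polynomial radial-distortion model r_d = f(r_u)
# to synthetic data by nonlinear least squares. Illustrative only;
# not the paper's calibration procedure.
import numpy as np
from scipy.optimize import least_squares

def distort(r_u, k):
    # Odd-order polynomial radial distortion model (an assumption).
    return k[0] * r_u + k[1] * r_u**3 + k[2] * r_u**5

rng = np.random.default_rng(0)
k_true = np.array([1.0, -0.35, 0.04])                # "ground truth"
r_undistorted = np.linspace(0.0, 1.2, 50)            # normalized radii
r_observed = distort(r_undistorted, k_true) + rng.normal(0, 1e-3, 50)

# Estimate the coefficients from the noisy observations.
res = least_squares(lambda k: distort(r_undistorted, k) - r_observed,
                    x0=np.array([1.0, 0.0, 0.0]))
print("estimated coefficients:", res.x)
```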


Machine Vision and Applications | 1997

Mobile robot navigation and scene modeling using stereo fish-eye lens system

Shishir Shah; Jake K. Aggarwal

We present an autonomous mobile robot navigation system using stereo fish-eye lenses for navigation in an indoor structured environment and for generating a model of the imaged scene. The system estimates the three-dimensional (3D) position of significant features in the scene and, by estimating its position relative to the features, navigates through narrow passages and makes turns at corridor ends. Fish-eye lenses are used to provide a large field of view, which images objects close to the robot and helps in making smooth transitions in the direction of motion. Calibration is performed for the lens-camera setup and the distortion is corrected to obtain accurate quantitative measurements. A vision-based algorithm that uses the vanishing points of segments extracted from a scene in a few 3D orientations provides an accurate estimate of the robot orientation. This is used, in addition to 3D recovery via stereo correspondence, to maintain the robot motion on a purely translational path, as well as to remove from each acquired image the effects of any drift from this path. Horizontal segments are used as a qualitative estimate of change in the motion direction, and correspondence of vertical segments provides precise 3D information about objects close to the robot. Treating detected linear edges in the scene as boundaries of planar surfaces, the 3D model of the scene is generated. The robot system is implemented and tested in a structured environment at our research center. Results from robot navigation in real environments are presented and discussed.
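The orientation estimate described above relies on vanishing points of extracted line segments. The following minimal sketch computes a vanishing point as the least-squares intersection of image line segments in homogeneous coordinates; the segment coordinates and function name are invented for illustration, and this is not the paper's exact algorithm.

```python
# Minimal sketch: estimate a vanishing point as the least-squares
# intersection of line segments, using homogeneous coordinates.
import numpy as np

def vanishing_point(segments):
    """segments: list of ((x1, y1), (x2, y2)) image line segments."""
    lines = []
    for (p, q) in segments:
        # Line through two points = cross product of homogeneous points.
        l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        lines.append(l / np.linalg.norm(l[:2]))      # normalize
    A = np.array(lines)
    # The vanishing point v minimizes ||A v||: take the right singular
    # vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]

# Example: two corridor edges converging toward image point (400, 200).
segs = [((0, 400), (200, 300)), ((0, 0), (200, 100))]
print("vanishing point:", vanishing_point(segs))
```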


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Partial face recognition using radial basis function networks

Kiminori Sato; Shishir Shah; Jake K. Aggarwal

The paper describes a face recognition system that uses partial face images (for example, eye, nose, and ear images) as input data. The recognition technique is based on radial basis function (RBF) networks, which proved far superior to a standard backpropagation (BP) learning algorithm for this face recognition task. In experiments on partial face image data from a database of over 100 persons, we achieved a recognition rate of 100% for registered persons and a rejection rate of 100% for unknown samples.
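As a generic illustration of the RBF-network technique (not the authors' implementation), the sketch below builds a classifier with Gaussian hidden units centered on the training samples and output weights solved by linear least squares, using random stand-in "partial face" feature vectors.

```python
# Minimal RBF-network classifier sketch: Gaussian hidden units centered
# on training samples, output weights solved by linear least squares.
import numpy as np

def rbf_features(X, centers, sigma):
    # Pairwise squared distances -> Gaussian activations.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y_onehot, sigma=1.0):
    centers = X.copy()                        # one unit per training sample
    Phi = rbf_features(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, y_onehot, rcond=None)
    return centers, W

def predict_rbf(X, centers, W, sigma=1.0):
    return rbf_features(X, centers, sigma) @ W

# Toy example: random feature vectors standing in for 3 persons.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 10)
X = rng.normal(size=(30, 8)) + labels[:, None]
y = np.eye(3)[labels]
centers, W = train_rbf(X, y, sigma=2.0)
pred = predict_rbf(X, centers, W, sigma=2.0).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```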


International Conference on Image Processing | 1994

Depth estimation using stereo fish-eye lenses

Shishir Shah; Jake K. Aggarwal

This paper presents the estimation of depth in an indoor, structured environment based on a stereo setup consisting of two fish-eye lenses, with parallel optical axes, mounted on a robot platform. The use of fish-eye lenses provides a large field of view, allowing better depth estimates for features very close to the lens. To extract significant information from the fish-eye lens images, we first correct for the distortion before using a special line detector, based on vanishing points, to extract significant features. We use a relaxation procedure to achieve correspondence between features in the left and right images. A process of prediction and recursive verification of hypotheses is used to find a one-to-one correspondence. Experimental results obtained on several stereo images are presented, and an accuracy analysis is performed. Further, the algorithm is tested using a pair of wide-angle lenses, and the accuracy and difference in the spatial information obtained are compared.
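For a parallel-axis stereo rig, once distortion has been corrected, depth follows from disparity via the standard relation Z = fB/d. The sketch below applies that relation; the focal-length and baseline values are illustrative, not taken from the paper.

```python
# Minimal sketch of depth from disparity for a parallel-axis stereo rig
# (after distortion correction): Z = f * B / d. Values are illustrative.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.12 m baseline.
print(depth_from_disparity([35.0, 14.0, 7.0], focal_px=700.0, baseline_m=0.12))
```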


International Conference on Image Processing | 1997

Multisensor integration for scene classification: an experiment in human form detection

Shishir Shah; Jake K. Aggarwal; Jayakrishnan Eledath; Joydeep Ghosh

This paper presents a system for classification of scenes using a multisensor integration framework. Indoor scenes are imaged using a visual and an infrared sensor, and the images are processed in three stages to classify sensed objects into two classes: human and background. Finally, information from the individual classifiers is integrated to obtain improved classification performance. Details of feature extraction and of classification using neural networks combined within a multi-Bayesian framework are presented. Segmentation of the imaged scene is performed using existing techniques such as texture analysis and histogram modeling. Classification results on real-world data are presented. The system represents a first step in the development of improved, robust classifiers based on the concepts of neural networks and multisensor integration.
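A minimal sketch of the fusion step, under a conditional-independence (naive Bayes) assumption between the visual and infrared classifiers; the class priors and per-sensor posteriors are hypothetical, and this is only indicative of the general multi-Bayesian idea, not the paper's exact framework.

```python
# Minimal sketch: combine two sensors' class posteriors under a
# conditional-independence (naive Bayes) assumption.
import numpy as np

def fuse_posteriors(p_visual, p_infrared, prior):
    # p(c | x_v, x_ir) is proportional to p(c | x_v) p(c | x_ir) / p(c).
    fused = np.asarray(p_visual) * np.asarray(p_infrared) / np.asarray(prior)
    return fused / fused.sum()

# Classes: [human, background]; hypothetical per-sensor posteriors.
prior = [0.3, 0.7]
p_vis = [0.60, 0.40]     # visual classifier
p_ir  = [0.85, 0.15]     # infrared classifier
print(fuse_posteriors(p_vis, p_ir, prior))   # fused posterior
```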


Asian Conference on Computer Vision | 1995

Modeling Structured Environments Using Robot Vision

Shishir Shah; Jake K. Aggarwal

In this paper, we review various methods for robust autonomous mobile robot navigation and scene modeling in structured environments. The techniques vary with the availability of a priori knowledge of the environment and the number of sensors with which the robot is equipped. The methods may be broadly classified into three categories: model-based approaches, landmark-based methods, and methods using information provided by the robot trajectory and its integration with sensor information for navigation and estimation of three-dimensional (3D) position in the environment. We also describe ROBO-TEX, an autonomous mobile robot constructed in our laboratory. A successful implementation for constructing computer-aided design (CAD) models of a structured scene, as imaged by a single wide-angle lens CCD camera while navigating, is considered and evaluated. Finally, we discuss another navigation system that uses a stereo pair of fish-eye lenses, and discuss the merits of such a system over some of the other implementations for navigation and scene modeling.


International Conference on Image Analysis and Processing | 1997

Object Recognition and Performance Bounds

Jake K. Aggarwal; Shishir Shah

Object recognition is the classification of objects into one of many a priori known object classes. In addition, it may involve estimating the pose of the object and/or the track of the object in a sequence of images. Bayesian statistical pattern recognition, neural networks, and rule-based systems have all been used to address the object recognition problem. In statistical pattern recognition it is assumed that the a priori probability density functions are known or can be estimated from the given samples. For neural networks, the samples may be used to train a network and estimate the coefficients of the network function. In the case of a rule-based system, rules may be given by an expert or estimated from the samples. However, the Bayesian framework provides a methodology for estimating error bounds on the performance of the recognition system. The paper discusses the Bayesian paradigm and contrasts its ability to provide performance bounds with that of neural networks and rule-based systems. Future directions for object recognition and performance bounds are also discussed.
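One classical example of the performance bounds the Bayesian framework affords is the Bhattacharyya bound on the two-class Bayes error for Gaussian class-conditional densities. The sketch below computes it for illustrative parameters; the values are not taken from the paper.

```python
# Minimal sketch: Bhattacharyya bound on the two-class Bayes error for
# Gaussian class-conditional densities. Parameters are illustrative.
import numpy as np

def bhattacharyya_bound(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    # Bhattacharyya distance between the two Gaussians.
    b = (diff @ np.linalg.solve(cov, diff)) / 8.0 + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    # Bayes error <= sqrt(p1 * p2) * exp(-B).
    return np.sqrt(p1 * p2) * np.exp(-b)

print(bhattacharyya_bound([0, 0], np.eye(2), [2, 2], np.eye(2)))
```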


Workshop on Applications of Computer Vision | 1998

Robust automatic target detection/recognition system for second generation FLIR imagery

Huaibin Zhao; Shishir Shah; Jae Hun Choi; Dinesh Nair; Jake K. Aggarwal

Automatic target detection and recognition (ATD/R) is of crucial interest to the defense community. We present a robust ATD/R system developed at the Computer and Vision Research Center (CVRC) at UT-Austin for recognition in second-generation forward looking infrared (FLIR) images. An experiment conducted on 1930 FLIR images shows that this ATR system can achieve recognition with a high degree of accuracy and a low false alarm rate. This demo first presents a brief overview of the whole methodology, then shows the detailed procedures and intermediate outputs step by step by running the ATR system on typical low-contrast FLIR images. Results and examples are presented at the end of the demonstration.


Asian Conference on Computer Vision | 1998

Bayesian Paradigm for Recognition of Objects - Innovative Applications

Jake K. Aggarwal; Shishir Shah

This paper describes three innovative uses of the Bayesian paradigm for recognition of objects. A brief overview of the recognition problem and the use of the statistical approach are provided, along with the various stages for solving a problem. In addition, the paper presents formulations and results obtained by using Bayesian approaches in recent applications: human motion tracking, texture segmentation, and target recognition.


International Conference on Semantic Computing | 1995

Autonomous Mobile Robot Navigation Using Fish-Eye Lenses

Shishir Shah; Jake K. Aggarwal

Collaboration


Dive into Shishir Shah's collaborations.

Top Co-Authors

Jake K. Aggarwal (University of Texas at Austin)
Dinesh Nair (University of Texas at Austin)
Jae Hun Choi (University of Texas at Austin)