Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sridhar Lakshmanan is active.

Publication


Featured research published by Sridhar Lakshmanan.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Object matching using deformable templates

Anil K. Jain; Yu Zhong; Sridhar Lakshmanan

We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object's shape is described by a prototype template, which consists of the representative contour/edges, and a set of probabilistic deformation transformations on the template. A Bayesian scheme, which is based on this prior knowledge and the edge information in the input image, is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex backgrounds. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.
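
A minimal numpy/scipy sketch of the idea summarized above: a prototype contour is deformed by a similarity transform plus small probabilistic perturbations, and candidate deformations are scored by a posterior combining a Gaussian deformation prior with an edge-potential likelihood. The edge potential, prior scales, and random search below are illustrative assumptions, not the paper's implementation (which uses a coarse-to-fine scheme).

import numpy as np
from scipy.ndimage import distance_transform_edt, sobel

def edge_potential(image, sigma=4.0):
    """Smooth attraction field: large near strong edges, decaying with distance."""
    image = np.asarray(image, dtype=float)
    mag = np.hypot(sobel(image, axis=1), sobel(image, axis=0))
    dist = distance_transform_edt(mag <= np.percentile(mag, 90))
    return np.exp(-dist / sigma)                      # values in (0, 1]

def deform(template, tx, ty, theta, scale):
    """Similarity transform of an (N, 2) prototype contour centered at the origin."""
    c, s = np.cos(theta), np.sin(theta)
    R = scale * np.array([[c, -s], [s, c]])
    return template @ R.T + np.array([tx, ty])

def log_posterior(params, template, potential):
    tx, ty, theta, scale = params
    pts = np.round(deform(template, tx, ty, theta, 1.0 + scale)).astype(int)
    h, w = potential.shape
    inside = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    if inside.mean() < 0.5:                           # template mostly off-image
        return -np.inf
    log_lik = np.log(potential[pts[inside, 1], pts[inside, 0]] + 1e-6).mean()
    log_prior = -0.5 * (theta / 0.3) ** 2 - 0.5 * (scale / 0.2) ** 2
    return log_lik + log_prior

def match(template, image, n_samples=5000, seed=0):
    """Random search over deformations (the paper uses a coarse-to-fine scheme)."""
    rng = np.random.default_rng(seed)
    potential = edge_potential(image)
    h, w = np.asarray(image).shape
    best, best_score = None, -np.inf
    for _ in range(n_samples):
        params = (rng.uniform(0, w), rng.uniform(0, h),
                  rng.normal(0, 0.3), rng.normal(0, 0.2))
        score = log_posterior(params, template, potential)
        if score > best_score:
            best, best_score = params, score
    return best, best_score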


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Simultaneous parameter estimation and segmentation of Gibbs random fields using simulated annealing

Sridhar Lakshmanan; Haluk Derin

An adaptive segmentation algorithm is developed which simultaneously estimates the parameters of the underlying Gibbs random field (GRF) and segments the noisy image corrupted by additive independent Gaussian noise. The algorithm, which aims at obtaining the maximum a posteriori (MAP) segmentation, is a simulated annealing algorithm that is interrupted at regular intervals for estimating the GRF parameters. Maximum-likelihood (ML) estimates of the parameters based on the current segmentation are used to obtain the next segmentation. It is proven that the parameter estimates and the segmentations converge in distribution to the ML estimate of the parameters and the MAP segmentation with those parameter estimates, respectively. Due to computational difficulties, however, only an approximate version of the algorithm is implemented. The approximate algorithm is applied on several two- and four-region images with different noise levels and with first-order and second-order neighborhoods.
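
A toy numpy sketch of the interleaved scheme the abstract describes: Gibbs-sampling sweeps with a decreasing temperature move toward the MAP segmentation under a Potts-like GRF prior and a Gaussian noise model, and the sweeps are interrupted at fixed intervals to re-estimate the model parameters from the current labeling. For tractability this sketch replaces exact ML estimation of the smoothing parameter with a maximum pseudo-likelihood grid search; the neighborhood, annealing schedule, and parameter grid are illustrative assumptions.

import numpy as np

def neighbor_counts(labels, k):
    """For every pixel, how many 4-neighbors carry label k (first-order, toroidal)."""
    same = np.zeros(labels.shape, dtype=float)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        same += (np.roll(labels, shift, axis=axis) == k)
    return same

def gibbs_sweep(labels, image, means, sigma, beta, T, rng):
    """One annealed Gibbs-sampling sweep over all pixels."""
    K, (h, w) = len(means), labels.shape
    for i in range(h):
        for j in range(w):
            energies = np.empty(K)
            for k in range(K):
                agree = sum(labels[(i + di) % h, (j + dj) % w] == k
                            for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])
                energies[k] = ((image[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                               - beta * agree)
            p = np.exp(-(energies - energies.min()) / T)
            labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels

def estimate_params(labels, image, K, betas=np.linspace(0.1, 2.0, 20)):
    """Closed-form noise parameters plus a pseudo-likelihood grid search for beta."""
    means = np.array([image[labels == k].mean() for k in range(K)])
    sigma = np.sqrt(np.mean((image - means[labels]) ** 2))
    agree = np.stack([neighbor_counts(labels, k) for k in range(K)])
    observed = np.take_along_axis(agree, labels[None], axis=0)[0]
    pl = [(b * observed - np.log(np.exp(b * agree).sum(axis=0))).sum() for b in betas]
    return means, sigma, betas[int(np.argmax(pl))]

def segment(image, K=2, sweeps=60, estimate_every=10, seed=0):
    rng = np.random.default_rng(seed)
    image = np.asarray(image, dtype=float)
    labels = rng.integers(0, K, size=image.shape)
    means, sigma, beta = estimate_params(labels, image, K)
    for t in range(sweeps):
        T = 3.0 * 0.95 ** t                          # annealing schedule
        labels = gibbs_sweep(labels, image, means, sigma, beta, T, rng)
        if (t + 1) % estimate_every == 0:            # interrupt to re-estimate
            means, sigma, beta = estimate_params(labels, image, K)
    return labels, (means, sigma, beta)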


Pattern Recognition | 1997

Object detection using Gabor filters

Anil K. Jain; Nalini K. Ratha; Sridhar Lakshmanan

This paper pertains to the detection of objects located in complex backgrounds. A feature-based segmentation approach to the object detection problem is pursued, where the features are computed over multiple spatial orientations and frequencies. The method proceeds as follows: a given image is passed through a bank of even-symmetric Gabor filters. A selection of these filtered images is made and each (selected) filtered image is subjected to a nonlinear (sigmoid-like) transformation. Then, a measure of texture energy is computed in a window around each transformed image pixel. The texture energies (“Gabor features”) and their spatial locations are input to a squared-error clustering algorithm. This clustering algorithm yields a segmentation of the original image: it assigns to each pixel in the image a cluster label that identifies the amount of mean local energy the pixel possesses across different spatial orientations and frequencies. The method is applied to a number of visual and infrared images, each one of which contains one or more objects. The region corresponding to the object is usually segmented correctly, and a unique signature of “Gabor features” is typically associated with the segment containing the object(s) of interest. Experimental results are provided to illustrate the usefulness of this object detection method in a number of problem domains. These problems arise in IVHS, military reconnaissance, fingerprint analysis, and image database query.
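
A condensed numpy/scipy sketch of the pipeline described above: even-symmetric Gabor filtering at several orientations and frequencies, a saturating (tanh) nonlinearity, local energy in a window around each pixel, and squared-error (k-means) clustering of the features together with pixel coordinates. The filter parameters, window size, and the simple k-means routine are illustrative choices, not the paper's.

import numpy as np
from scipy.ndimage import convolve, uniform_filter

def even_gabor(freq, theta, sigma=2.5, size=15):
    """Even-symmetric (cosine) Gabor kernel at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(image, freqs=(0.1, 0.2, 0.3), n_orient=4, alpha=0.25, win=9):
    """Filter bank -> tanh nonlinearity -> local energy in a win x win window."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            resp = convolve(image, even_gabor(f, k * np.pi / n_orient), mode='reflect')
            feats.append(uniform_filter(np.abs(np.tanh(alpha * resp)), size=win))
    return np.stack(feats, axis=-1)                   # (H, W, n_features)

def kmeans(X, k, iters=30, seed=0):
    """Plain squared-error clustering; empty clusters keep their previous center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c) else centers[c]
                            for c in range(k)])
    return labels

def segment(image, k=3, spatial_weight=0.5):
    image = np.asarray(image, dtype=float)
    H, W = image.shape
    yy, xx = np.mgrid[0:H, 0:W]
    coords = spatial_weight * np.stack([yy / H, xx / W], axis=-1)
    X = np.concatenate([gabor_features(image), coords], axis=-1).reshape(H * W, -1)
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)           # normalize features
    return kmeans(X, k).reshape(H, W)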


International Conference on Robotics and Automation | 1999

LANA: a lane extraction algorithm that uses frequency domain features

Chris Kreucher; Sridhar Lakshmanan

This paper introduces a new algorithm, called lane-finding in another domain (LANA), for detecting lane markers in images acquired from a forward-looking vehicle-mounted camera. The method is based on a novel set of frequency domain features that capture relevant information concerning the strength and orientation of spatial edges. The frequency domain features are combined with a deformable template prior, in order to detect the lane markers of interest. Experimental results that illustrate the performance of this algorithm on images with varying lighting and environmental conditions, shadowing, lane occlusion(s), solid and dashed lines, etc. are presented. LANA detects lane markers well under a very large and varied collection of roadway images. A comparison is drawn between this frequency feature-based LANA algorithm and the spatial feature-based LOIS lane detection algorithm. This comparison is made from experimental, computational and methodological standpoints.
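
A rough numpy/scipy sketch of the kind of scheme the abstract outlines: block-wise frequency-domain (DCT) features that respond to strong spatial edges are combined with a prior over a deformable lane-marker template, and the best-scoring lane hypothesis is kept. The block size, feature definition, lane parameterization, and grid search are illustrative stand-ins, not the published LANA formulation.

import numpy as np
from scipy.fft import dctn

def edge_energy_map(image, block=8):
    """Per-block energy of the non-DC DCT coefficients (a crude edge-strength feature)."""
    image = np.asarray(image, dtype=float)
    H, W = image.shape
    h, w = H // block, W // block
    energy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            coeffs = dctn(image[i*block:(i+1)*block, j*block:(j+1)*block], norm='ortho')
            coeffs[0, 0] = 0.0                        # drop the DC term
            energy[i, j] = np.abs(coeffs).sum()
    return energy / (energy.max() + 1e-9)

def lane_columns(rows, offset, slope, curvature, width):
    """Left/right marker columns of a simple curved-lane template (block units)."""
    left = offset + slope * rows + curvature * rows ** 2
    return left, left + width

def lana_like_fit(image, block=8):
    """Grid search for the lane template maximizing feature evidence plus a shape prior."""
    energy = edge_energy_map(image, block)
    h, w = energy.shape
    rows = np.arange(h)
    best, best_score = None, -np.inf
    for offset in np.linspace(0, w - 1, 24):
        for slope in np.linspace(-1.0, 1.0, 17):
            for curvature in np.linspace(-0.02, 0.02, 9):
                for width in np.linspace(w * 0.2, w * 0.6, 6):
                    left, right = lane_columns(rows, offset, slope, curvature, width)
                    cols = np.clip(np.round(np.concatenate([left, right])), 0, w - 1).astype(int)
                    likelihood = energy[np.tile(rows, 2), cols].sum()
                    prior = -5.0 * curvature ** 2 - 0.5 * abs(slope)   # prefer gentle lanes
                    score = likelihood + prior
                    if score > best_score:
                        best, best_score = (offset, slope, curvature, width), score
    return best, best_score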


Intelligent Vehicles Symposium | 2003

A motion and shape-based pedestrian detection algorithm

Hadi Elzein; Sridhar Lakshmanan; Paul Watta

In this paper we investigate a vision-based pedestrian detection algorithm which can be used in the design of intelligent vehicle systems. The input to the algorithm is video data obtained from a camera mounted on the vehicle. In the proposed method, a wavelet transform is computed on the video frames, and multistage template matching is used to determine whether or not a pedestrian is present in the frame. Motion detection and localization is used to reduce the computational requirements. Experimental results are presented for several different video sequences. The results show that this method is able to reliably detect pedestrians in cluttered scenes.
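
A simplified numpy sketch of the pipeline described above: frame differencing localizes motion to restrict the search, a (here single-level Haar) wavelet transform is applied to each candidate window, and template matching on the wavelet coefficients decides whether a pedestrian is present. The window size, thresholds, and single-stage matching are illustrative simplifications of the paper's multistage scheme.

import numpy as np

def haar2d(patch):
    """One-level 2-D Haar transform (LL, LH, HL, HH) of an even-sized patch, flattened."""
    a = (patch[0::2] + patch[1::2]) / 2.0             # row averages
    d = (patch[0::2] - patch[1::2]) / 2.0             # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return np.concatenate([c.ravel() for c in (LL, LH, HL, HH)])

def normalized_corr(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def detect_pedestrians(prev_frame, frame, template, win=(64, 32),
                       motion_thresh=20.0, match_thresh=0.6, stride=8):
    """Return windows whose wavelet signature correlates with a pedestrian template."""
    motion = np.abs(frame.astype(float) - prev_frame.astype(float))
    tmpl_feat = haar2d(template.astype(float))        # template must match the window size
    H, W = frame.shape
    wh, ww = win
    hits = []
    for y in range(0, H - wh + 1, stride):
        for x in range(0, W - ww + 1, stride):
            if motion[y:y + wh, x:x + ww].mean() < motion_thresh:
                continue                              # skip static regions
            feat = haar2d(frame[y:y + wh, x:x + ww].astype(float))
            if normalized_corr(feat, tmpl_feat) > match_thresh:
                hits.append((y, x, wh, ww))
    return hits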


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

A deformable template approach to detecting straight edges in radar images

Sridhar Lakshmanan; David Grimmer

This paper addresses the problem of locating two straight and parallel road edges in images that are acquired from a stationary millimeter-wave radar platform positioned near ground-level. A fast, robust, and completely data-driven Bayesian solution to this problem is developed, and it has applications in automotive vision enhancement. The method employed in this paper makes use of a deformable template model of the expected road edges, a two-parameter log-normal model of the ground-level millimeter-wave (GLEM) radar imaging process, a maximum a posteriori (MAP) formulation of the straight edge detection problem, and a Monte Carlo algorithm to maximize the posterior density. Experimental results are presented by applying the method on GLEM radar images of actual roads. The performance of the method is assessed against ground truth for a variety of road scenes.
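
A condensed numpy sketch of the ingredients named above: a deformable template of two parallel straight edges, a log-normal model of the radar returns inside versus outside the hypothesized road region, and a Monte Carlo (Metropolis random-walk) search for the MAP template. The prior, parameter ranges, and proposal scales are illustrative assumptions, and non-negative radar intensities are assumed.

import numpy as np

def road_mask(shape, theta, offset, width):
    """Pixels between the two parallel edges x = offset + y*tan(theta) and x + width."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    left = offset + yy * np.tan(theta)
    return (xx >= left) & (xx <= left + width)

def lognormal_loglik(values):
    """Log-likelihood of the returns under a log-normal fitted to this region."""
    logs = np.log(values + 1e-6)
    mu, sigma = logs.mean(), logs.std() + 1e-6
    return float(np.sum(-np.log(values * sigma * np.sqrt(2 * np.pi) + 1e-6)
                        - (logs - mu) ** 2 / (2 * sigma ** 2)))

def log_posterior(image, theta, offset, width):
    mask = road_mask(image.shape, theta, offset, width)
    if mask.sum() < 50 or (~mask).sum() < 50:
        return -np.inf
    loglik = lognormal_loglik(image[mask]) + lognormal_loglik(image[~mask])
    log_prior = -0.5 * (theta / 0.4) ** 2 - 0.5 * ((width - 60.0) / 30.0) ** 2
    return loglik + log_prior

def map_edges(image, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    image = np.asarray(image, dtype=float)
    H, W = image.shape
    state = np.array([0.0, W / 3.0, W / 3.0])         # (theta, left-edge offset, width)
    lp = log_posterior(image, *state)
    best, best_lp = state.copy(), lp
    for _ in range(iters):
        prop = state + rng.normal(0.0, [0.03, 2.0, 2.0])
        lp_prop = log_posterior(image, *prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
            state, lp = prop, lp_prop
            if lp > best_lp:
                best, best_lp = state.copy(), lp
    return best, best_lp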


Image and Vision Computing | 2000

CLARK: a heterogeneous sensor fusion method for finding lanes and obstacles

Michel Beauvais; Sridhar Lakshmanan

This paper describes Combined Likelihood Adding Radar Knowledge (CLARK), a new method for detecting lanes and obstacles by fusing information from two forward-looking, vehicle-mounted sensors: vision and radar. CLARK has three stages: (1) obstacle detection using a novel template matching approach; (2) lane detection using a modified version of the Likelihood Of Image Shape (LOIS) algorithm; (3) simultaneous estimation of both obstacle and lane positions by locally maximizing a combined likelihood function. Experimental results illustrating the efficacy of these components are presented. CLARK detects the position of lanes and obstacles accurately, even under significantly noisy conditions.
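
A schematic Python sketch of CLARK's third stage as summarized above: the lane and obstacle estimates from the first two stages are refined jointly by locally maximizing a combined likelihood that adds a lane term, an obstacle term, and a coupling term tying the obstacle to the detected lane. The likelihood callables and the coordinate-ascent refinement are illustrative stand-ins for the paper's specific vision and radar models.

import numpy as np

def combined_loglik(lane, obstacle, lane_loglik, obstacle_loglik, coupling_weight=1.0):
    """Joint score: lane evidence + obstacle evidence + lane/obstacle consistency."""
    left, right = lane
    overshoot = max(left - obstacle, obstacle - right, 0.0)   # distance outside the lane
    return lane_loglik(lane) + obstacle_loglik(obstacle) - coupling_weight * overshoot ** 2

def refine_jointly(lane0, obstacle0, lane_loglik, obstacle_loglik,
                   lane_step=2.0, obs_step=2.0, iters=50):
    """Greedy coordinate ascent around the stage-1/stage-2 estimates."""
    lane, obstacle = np.asarray(lane0, dtype=float), float(obstacle0)

    def score(l, o):
        return combined_loglik(l, o, lane_loglik, obstacle_loglik)

    for _ in range(iters):
        improved = False
        for idx in (0, 1):                            # perturb each lane edge
            for delta in (lane_step, -lane_step):
                cand = lane.copy()
                cand[idx] += delta
                if score(cand, obstacle) > score(lane, obstacle):
                    lane, improved = cand, True
        for delta in (obs_step, -obs_step):           # perturb the obstacle position
            if score(lane, obstacle + delta) > score(lane, obstacle):
                obstacle, improved = obstacle + delta, True
        if not improved:
            break
    return lane, obstacle

# Example use with placeholder likelihoods (purely illustrative):
# lane, obstacle = refine_jointly((100.0, 220.0), 90.0,
#                                 lane_loglik=lambda l: -abs((l[1] - l[0]) - 120.0) / 10.0,
#                                 obstacle_loglik=lambda x: -((x - 150.0) / 25.0) ** 2)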


IEEE Transactions on Information Theory | 1993

Valid parameter space of 2-D Gaussian Markov random fields

Sridhar Lakshmanan; Haluk Derin

The valid parameter spaces of infinite- and finite-lattice (2-D noncausal) Gaussian Markov random fields (GMRFs) are investigated. For the infinite-lattice fields, the valid parameter space is shown to admit an explicit description; a procedure that yields the valid parameter space is presented. This procedure is applied to the second-order (neighborhood) 2-D GMRFs to obtain an explicit description of their valid parameter spaces. For the finite-lattice fields, it is shown that the valid parameter space does not admit a simple description; the conditions that ensure the positivity of the power spectrum are necessary, sufficient, and irreducible. The set of conditions for the infinite-lattice fields, however, serves as a good set of sufficient conditions for the finite-lattice case. The results readily extend to the class of d-D real and complex GMRFs.
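
A small numeric sketch (numpy) of the positivity condition at issue: an infinite-lattice GMRF with interaction coefficients theta_r is valid when 1 - 2 * sum_r theta_r * cos(omega · r) stays strictly positive over all frequencies. The grid evaluation below approximates that continuous condition for a second-order (8-neighbor) model; the paper derives the exact boundary analytically.

import numpy as np

# One representative from each symmetric neighbor pair of the second-order model
SECOND_ORDER_OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def spectrum_factor(theta, n_grid=256, offsets=SECOND_ORDER_OFFSETS):
    """Evaluate 1 - 2 * sum_r theta_r cos(omega . r) on an n_grid x n_grid frequency grid."""
    w = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    w1, w2 = np.meshgrid(w, w, indexing='ij')
    acc = np.ones_like(w1)
    for t, (r1, r2) in zip(theta, offsets):
        acc -= 2.0 * t * np.cos(r1 * w1 + r2 * w2)    # factor 2: each pair counted once
    return acc

def is_valid_infinite_lattice(theta, n_grid=256):
    """Approximate validity check: spectral factor positive everywhere on the grid."""
    return bool(spectrum_factor(theta, n_grid).min() > 0)

# Small couplings are valid; large ones push the spectral factor negative.
print(is_valid_infinite_lattice([0.1, 0.1, 0.05, 0.05]))   # True
print(is_valid_infinite_lattice([0.3, 0.3, 0.2, 0.2]))     # False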


International Conference on Intelligent Transportation Systems | 1999

A system for small target detection, tracking, and classification

Randall DeFauw; Sridhar Lakshmanan; K.V. Prasad

A system is developed for small target detection, tracking, and classification. The specific application of interest is an automatic headlight dimming system (AutoDim) for automotive night-time driving use. The burden of controlling the state of the vehicle's headlights (high or low beam) is taken away from the driver and shifted on to the AutoDim system. AutoDim's decision as to whether or not to change the state of the headlights is made from visual information obtained from a low-cost, low-resolution CMOS video camera. The targets of potential interest (i.e. light sources) are detected based on their brightness, geometric, and spatial attributes. However, this also detects segments that do not correspond to vehicular tail/headlights. Typical sources of error include street lights, reflected light, sky light, etc. To eliminate such errors, and to discriminate vehicular tail/headlight sources from other light sources, a non-parametric classifier is employed using a feature set that includes each light source's spectral distribution and temporal track record, in addition to the attributes used for detection.
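
An illustrative numpy/scipy sketch of the detect-then-classify structure described above: bright blobs are found by thresholding and connected components, per-blob features (brightness geometry, position, and a crude color ratio standing in for spectral distribution) are computed, and a non-parametric k-nearest-neighbor classifier separates vehicle lights from other sources. The feature set, thresholds, and classifier details are assumptions, not the AutoDim system's.

import numpy as np
from scipy.ndimage import label, find_objects

def detect_light_sources(gray, rgb, brightness_thresh=200):
    """Threshold + connected components, returning a feature vector per bright blob."""
    blobs, n = label(gray > brightness_thresh)
    candidates = []
    for i, sl in enumerate(find_objects(blobs), start=1):
        mask = blobs[sl] == i
        h, w = mask.shape
        area = int(mask.sum())
        red_ratio = float(rgb[sl][..., 0][mask].mean() / (gray[sl][mask].mean() + 1e-6))
        cy = (sl[0].start + sl[0].stop) / (2.0 * gray.shape[0])   # normalized image height
        candidates.append(dict(slice=sl, features=[area, w / max(h, 1), cy, red_ratio]))
    return candidates

def knn_classify(feature_vec, train_feats, train_labels, k=5):
    """Non-parametric classification: majority vote among the k nearest training examples."""
    d = np.linalg.norm(np.asarray(train_feats) - np.asarray(feature_vec), axis=1)
    votes = np.asarray(train_labels)[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())           # e.g. 1 = vehicle light, 0 = other

# In use, a temporal feature (frames tracked so far) would be appended to each
# candidate's vector before classification, since the abstract's feature set
# includes the source's track record.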


IEEE Transactions on Vehicular Technology | 2007

Nonparametric Approaches for Estimating Driver Pose

Paul Watta; Sridhar Lakshmanan; Yulin Hou

To better understand driver behavior, the Federal Highway Administration and the National Highway Traffic Safety Administration have collected several thousand hours of driver video. There is now an immediate need for devising automated procedures for analyzing this video. In this paper, we look at the problem of estimating driver pose given a video of the driver as he or she drives the vehicle. A complete system is proposed to perform feature extraction and classification of each frame. The system uses a Fisherface representation of video frames and a nearest-neighbor and neural-network classification scheme. Experimental results show that the system can achieve high accuracy and reliable performance.
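
A compact numpy/scipy sketch of the Fisherface plus nearest-neighbor scheme mentioned above: flattened frames are projected with PCA followed by Fisher's linear discriminant, and a new frame's pose class is taken from its nearest neighbor in the reduced space. The dimensions, the small ridge term, and the preprocessing are illustrative, and the paper's neural-network classifier is omitted.

import numpy as np
from scipy.linalg import eigh

def fit_fisherfaces(X, y):
    """X: (n_samples, n_pixels) flattened frames; y: integer pose labels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    n, c = len(X), len(classes)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA down to n - c dimensions so the within-class scatter is nonsingular
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[:n - c].T
    P = Xc @ W_pca
    # Fisher LDA in the PCA subspace
    gm = P.mean(axis=0)
    Sw = np.zeros((P.shape[1], P.shape[1]))
    Sb = np.zeros_like(Sw)
    for k in classes:
        Pk = P[y == k]
        mk = Pk.mean(axis=0)
        Sw += (Pk - mk).T @ (Pk - mk)
        Sb += len(Pk) * np.outer(mk - gm, mk - gm)
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(len(Sw)))   # generalized eigenproblem
    W_lda = vecs[:, np.argsort(vals)[::-1][:c - 1]]
    W = W_pca @ W_lda
    return mean, W, (X - mean) @ W, y

def predict_pose(frame_vec, model):
    """Nearest-neighbor classification in the Fisherface subspace."""
    mean, W, train_proj, train_labels = model
    z = (np.asarray(frame_vec, dtype=float) - mean) @ W
    return train_labels[np.argmin(np.linalg.norm(train_proj - z, axis=1))]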

Collaboration


Dive into Sridhar Lakshmanan's collaboration.

Top Co-Authors

Paul Watta

University of Michigan

Bing Ma

University of Michigan

Haluk Derin

University of Massachusetts Amherst

Yu Zhong

University of Michigan
