Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Neelima Shrikhande is active.

Publication


Featured research published by Neelima Shrikhande.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Surface orientation from a projected grid

Neelima Shrikhande; George C. Stockman

Two simple methods are given for obtaining the surface shape using a projected grid. After the camera is calibrated to the 3-D workspace, the only input data needed for the computation of surface normals are the grid intersect points in a single 2-D image. The first method performs nonlinear computations based on the distortion of the lengths of the grid edges and does not require a full calibration matrix. The second method requires that a full parallel projection model of the imaging be available, which enables it to compute 3-D normals using simple linear computations. The linear method performed better overall in the experiments, but both methods produced normals within 4-8 degrees of known 3-D directions. These methods appear to be superior to methods based on shape-from-shading because the results are comparable, yet the equipment setup is simpler and the processing is not very sensitive to object reflectance.
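
The geometric core shared by the two methods can be illustrated with a short sketch (a generic construction assuming the 3-D directions of two grid edges have already been recovered, not the authors' exact algorithm): two grid edges lying on the surface act as tangent vectors at the intersection point, and the surface normal is their normalized cross product.

    import numpy as np

    def normal_from_grid_edges(edge_u, edge_v):
        """Estimate a surface normal from two recovered 3-D grid-edge directions.

        edge_u, edge_v: 3-D vectors along the two grid directions, assumed to be
        (approximately) tangent to the surface at the grid intersection point.
        """
        n = np.cross(edge_u, edge_v)           # normal is perpendicular to both tangents
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            raise ValueError("grid edges are (nearly) parallel; normal is undefined")
        return n / norm

    # Example: a plane tilted about the x-axis by 30 degrees.
    theta = np.deg2rad(30.0)
    edge_u = np.array([1.0, 0.0, 0.0])
    edge_v = np.array([0.0, np.cos(theta), np.sin(theta)])
    print(normal_from_grid_edges(edge_u, edge_v))   # approximately [0, -0.5, 0.866]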


Pattern Recognition Letters | 1994

Approximate orthogonal distance regression method for fitting quadric surfaces to range data

Xingping Cao; Neelima Shrikhande; Gongzhu Hu

Fitting surfaces to 3-D data is one of the basic methods of surface description for 3-D vision. Most surface-fitting techniques proposed in the literature are least-squares based and rarely produce satisfactory results if the noise level is very high or if the data points are sampled from a small area. A new approach is presented in this paper that minimizes the mean squared approximate orthogonal distances, with linearization using Newton's iteration method. This approach usually yields a good fit, and the algorithm is reliable and efficient for real applications. Results are reported for synthetic data and several examples of real range data. Experimental results demonstrate that the approximate orthogonal distance performs better than least-squares-based methods.
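
The quantity at the heart of the method can be sketched as follows: for an implicit quadric F(x) = 0, the orthogonal distance of a point is commonly approximated by |F(x)| / ||grad F(x)||, and the fit minimizes the sum of the squared approximations. The sketch below is a minimal illustration of that objective using a general-purpose least-squares solver rather than the Newton iteration described in the paper; the sphere data and the initial guess are made up for the example.

    import numpy as np
    from scipy.optimize import least_squares

    def quadric_and_gradient_norm(p, x, y, z):
        """F = a x^2 + b y^2 + c z^2 + d xy + e yz + f xz + g x + h y + i z + j and ||grad F||."""
        a, b, c, d, e, f, g, h, i, j = p
        F = a*x*x + b*y*y + c*z*z + d*x*y + e*y*z + f*x*z + g*x + h*y + i*z + j
        Fx = 2*a*x + d*y + f*z + g
        Fy = 2*b*y + d*x + e*z + h
        Fz = 2*c*z + e*y + f*x + i
        return F, np.sqrt(Fx*Fx + Fy*Fy + Fz*Fz)

    def approx_orthogonal_residuals(p, x, y, z):
        """Approximate orthogonal distance of each point to the quadric: F / ||grad F||."""
        p = p / np.linalg.norm(p)                    # a quadric is defined only up to scale
        F, grad_norm = quadric_and_gradient_norm(p, x, y, z)
        return F / (grad_norm + 1e-12)

    # Synthetic test: noisy samples from the unit sphere x^2 + y^2 + z^2 - 1 = 0.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(500, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pts += 0.01 * rng.normal(size=pts.shape)
    x, y, z = pts.T

    p0 = np.array([1.0, 1.0, 1.0, 0, 0, 0, 0, 0, 0, -0.5])   # rough initial guess
    fit = least_squares(approx_orthogonal_residuals, p0, args=(x, y, z))
    print(np.round(fit.x / fit.x[0], 2))                       # close to [1, 1, 1, 0, ..., 0, -1]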


Resonance | 1997

Internet and the World Wide Web

Neelima Shrikhande

This article presents a brief introduction to the various features available on the Internet. Different applications available on the Net are divided into types of services, and a brief description of each is provided. There is, of course, much more available on the Net. Much of the information needed to understand and further explore the Net is available online. This article provides a starting point for further surfing.


Technical Symposium on Computer Science Education | 1984

A survey of compiler courses

Neelima Shrikhande

This paper reports the results of a survey done by the author in Winter 1984. Several schools were surveyed regarding their compiler courses. Results about textbooks, source languages, programming languages, and prerequisites, among other things, are described. A summary of the results is given. A brief description of our plans for this course is included.


Intelligent Robots and Computer Vision XII: Algorithms and Techniques | 1993

Extraction of edge-based and region-based features for object recognition

Benjamin Coutts; Srinivas Ravi; Gongzhu Hu; Neelima Shrikhande

One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from the scenes are often noisy and imperfect. In this paper, algorithms are described for improving low-level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following the directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface-fitting routines are applied to the range image to obtain planar surface patches. A region-growing algorithm is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. The surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple-object scenes have been tested using the merging criteria. The results appeared quite encouraging.
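
The iterative merging step can be sketched as a greedy loop (a schematic illustration only, with the fit-error function left as a hypothetical placeholder; the paper bases it on approximate orthogonal distance regression to quadric surfaces): adjacent regions are merged whenever a single quadric still fits their union within a tolerance, and the process repeats until no merge succeeds.

    def merge_regions(regions, adjacency, fit_error, threshold):
        """Greedy region growing: merge adjacent regions while one quadric still fits their union.

        regions:   dict region_id -> list of 3-D points
        adjacency: set of frozenset({id_a, id_b}) pairs of adjacent regions
        fit_error: callable(points) -> mean fit error of the best quadric (hypothetical placeholder)
        threshold: maximum acceptable mean fit error for a merged region
        """
        merged = True
        while merged:
            merged = False
            for pair in list(adjacency):
                a, b = tuple(pair)
                if a not in regions or b not in regions:
                    continue
                union = regions[a] + regions[b]
                if fit_error(union) <= threshold:
                    regions[a] = union                       # keep region a, absorb region b
                    del regions[b]
                    # redirect b's adjacencies to a and drop the merged pair
                    adjacency = {frozenset(a if r == b else r for r in p)
                                 for p in adjacency if p != pair}
                    adjacency = {p for p in adjacency if len(p) == 2}
                    merged = True
                    break                                    # rescan after every merge
        return regions

    # Toy run with a permissive error function: all points end up in a single region.
    regions = {1: [(0, 0, 0)], 2: [(1, 0, 0)], 3: [(0, 1, 0)]}
    adjacency = {frozenset({1, 2}), frozenset({2, 3})}
    print(merge_regions(regions, adjacency, lambda pts: 0.0, threshold=0.1))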


Proceedings of SPIE | 2014

Discrete and continuous curvature computation for real data

Dirk Colbry; Neelima Shrikhande

This paper describes two methods for estimating the minimum and maximum curvatures for a 3D surface and compares the computational efficiency of these approaches on 3D sensor data. The classical method of Least Square Fitting (LSF) finds an approximation of a cubic polynomial fit for the local surface around the point of interest P and uses the coefficients to compute curvatures. The Discrete Differential Geometry (DDG) algorithm approximates a triangulation of the surface around P and calculates the angle deficit at P as an estimate of the curvatures. The accuracy and speed of both algorithms are compared by applying them to synthetic and real data sets with sampling neighborhoods of varying sizes. Our results indicate that the LSF and DDG methods produce comparable results for curvature estimations but the DDG method performs two orders of magnitude faster, on average. However, the DDG algorithm is more susceptible to noise because it does not smooth the data as well as the LSF method. In applications where it is not necessary for the curvatures to be precise (such as estimating anchor point locations for face recognition) the DDG method yields similar results to the LSF method while performing much more efficiently.
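
The angle-deficit idea behind the DDG estimate can be sketched directly. At a vertex P of a triangulated neighborhood, the angle deficit (2*pi minus the sum of the triangle angles at P), divided by one third of the total area of the incident triangles, estimates the Gaussian curvature, which is the product of the minimum and maximum curvatures. This is a minimal sketch of the standard discrete formula, not the paper's implementation.

    import numpy as np

    def angle_deficit_gaussian_curvature(p, neighbors):
        """Estimate Gaussian curvature at vertex p from an ordered ring of neighbors.

        p:         (3,) vertex position
        neighbors: (n, 3) neighboring vertices, ordered around p so that consecutive
                   neighbors (cyclically) form the triangles incident to p
        Returns K ~ (2*pi - sum of angles at p) / (A / 3), with A the total incident area.
        """
        p = np.asarray(p, dtype=float)
        nb = np.asarray(neighbors, dtype=float)
        angle_sum = 0.0
        area_sum = 0.0
        n = len(nb)
        for i in range(n):
            u = nb[i] - p
            v = nb[(i + 1) % n] - p
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angle_sum += np.arccos(np.clip(cos_a, -1.0, 1.0))   # angle of triangle at p
            area_sum += 0.5 * np.linalg.norm(np.cross(u, v))    # triangle area
        return (2.0 * np.pi - angle_sum) / (area_sum / 3.0)

    # Sanity check on a unit sphere (K should be about 1): a small ring around the pole.
    r = 0.1
    ring = [(r * np.cos(t), r * np.sin(t), np.sqrt(1 - r * r))
            for t in np.linspace(0, 2 * np.pi, 7)[:-1]]
    print(angle_deficit_gaussian_curvature([0.0, 0.0, 1.0], ring))   # close to 1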


Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision | 2007

Hand gesture recognition by analysis of codons

Poornima Ramachandra; Neelima Shrikhande

The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system would replace the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this work, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between the maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (the letters J and Z are ignored as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and are assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.
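
The raw material for a Codon-list is the sequence of curvature events (maxima, minima, and zero crossings) met while traversing the boundary. The sketch below extracts such an event sequence from sampled signed curvature values; it is a simplified illustration only and does not implement the full Codon classification or the WFIT matching.

    import numpy as np

    def curvature_events(kappa):
        """List the curvature events encountered while traversing a closed contour.

        kappa: 1-D array of signed curvature samples along the boundary.
        Returns a list of (index, event) with event in {'max', 'min', 'zero'};
        Codon-lists are built from this kind of event sequence.
        """
        kappa = np.asarray(kappa, dtype=float)
        n = len(kappa)
        events = []
        for i in range(n):
            prev, cur, nxt = kappa[i - 1], kappa[i], kappa[(i + 1) % n]
            if cur > prev and cur > nxt:
                events.append((i, 'max'))            # local curvature maximum
            elif cur < prev and cur < nxt:
                events.append((i, 'min'))            # local curvature minimum
            if cur == 0.0 or cur * nxt < 0:
                events.append((i, 'zero'))           # zero / sign change of curvature
        return events

    # Toy example: curvature along a closed contour with two concave dips.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    kappa = 1.0 + 1.5 * np.sin(2 * t)                # dips below zero twice per cycle
    print(curvature_events(kappa)[:6])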


Intelligent Robots and Computer Vision XXIII: Algorithms, Techniques, and Active Vision | 2005

Image retrieval based on local grey-level invariants

Eva Bordeaux; Neelima Shrikhande

During the past decades, the enormous growth of image archives has significantly increased the demand for research efforts aimed at efficiently finding specific images within large databases. This paper investigates matching of images of buildings, architectural designs, blueprints, and sketches. Their geometrical constraints lead to the proposed approach: the use of local grey-level invariants based on internal contours of the object. The problem involves three key phases: object recognition in image data, matching two images, and searching the database of images. The emphasis of this paper is on object recognition based on internal contours of image data. In her master's thesis, M.M. Kulkarni described a technique for image retrieval by contour analysis applied to the external contours of an object in image data. This is used to define the category of a building (tower, dome, flat, etc.). Integration of these results with local grey-level invariant analysis creates a more robust image retrieval system. Thus, the best match result is the intersection of the results of contour analysis and grey-level invariant analysis. Experiments conducted on the database of architectural buildings have shown robustness with respect to image rotation, translation, small viewpoint variations, partial visibility, and extraneous features. The recognition rate is above 99% for a variety of tested images taken under different conditions.
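
The final combination step is simple enough to show directly: the best matches are the images returned by both stages. A minimal sketch, with made-up image identifiers:

    def combined_best_matches(contour_matches, invariant_matches):
        """Keep only images returned by both the external-contour analysis and the
        local grey-level invariant analysis, preserving the invariant-analysis ranking.

        contour_matches, invariant_matches: ranked lists of image identifiers.
        """
        contour_set = set(contour_matches)
        return [img for img in invariant_matches if img in contour_set]

    # Hypothetical example: identifiers are database file names.
    print(combined_best_matches(
        ["tower_03.png", "dome_11.png", "tower_07.png"],
        ["tower_07.png", "flat_02.png", "tower_03.png"]))
    # -> ['tower_07.png', 'tower_03.png']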


Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision | 2001

Matching shape descriptions of objects

Neelima Shrikhande; Madhuri Kulkarni

A model of an object is an image consisting of features of the object. The input is a gray-scale image from which features are computed. In his doctoral thesis, J. L. Chen used a model-based approach for object recognition. His method is based on Rosin's work on the extraction of parts. Both model and scene features are contour-based properties. Properties of each part, such as area, compactness, convexity, etc., are computed and used to match the scene image to the model. This paper extends the algorithm in several directions. The contours are improved using two passes over the initial input image. The notion of an internal part, or base, of an object is introduced and used to normalize the part areas. Insignificant parts are merged with neighboring parts to provide a better segmentation of the scene. Interpretation trees are used to match the scene to the model. The algorithm is tested on simple hand-drawn images and also on images of buildings obtained from architectural databases.
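
The interpretation-tree matching mentioned above can be sketched as a depth-first search that assigns each scene part to an unused, compatible model part or leaves it unmatched. This is a standard formulation shown for illustration, not necessarily the exact variant used in the paper; the compatibility test and the part descriptors below are hypothetical.

    def interpretation_tree_match(scene_parts, model_parts, compatible):
        """Depth-first interpretation tree search.

        scene_parts, model_parts: lists of part descriptors (e.g. dicts with
            normalized area, compactness, convexity).
        compatible: callable(scene_part, model_part) -> bool, a hypothetical
            test comparing part properties.
        Returns the first interpretation found as a list of model indices,
        with None marking a scene part left unmatched (e.g. clutter).
        """
        def search(i, used, assignment):
            if i == len(scene_parts):
                return assignment
            for j, m in enumerate(model_parts):
                if j not in used and compatible(scene_parts[i], m):
                    result = search(i + 1, used | {j}, assignment + [j])
                    if result is not None:
                        return result
            # wildcard branch: leave this scene part unmatched
            return search(i + 1, used, assignment + [None])
        return search(0, frozenset(), [])

    # Toy usage with hypothetical descriptors: parts match if areas agree within 20%.
    scene = [{"area": 0.52}, {"area": 0.31}]
    model = [{"area": 0.30}, {"area": 0.50}]
    close = lambda s, m: abs(s["area"] - m["area"]) <= 0.2 * m["area"]
    print(interpretation_tree_match(scene, model, close))   # -> [1, 0]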


Resonance | 1999

Can computers see?

Neelima Shrikhande

This article presents a brief introduction to the problem of computer vision. Computers, using cameras and other sensors as input devices, record digital images in their memory. Computer programs must then interpret these images to come up with a description of the scene. The computer vision problem can be stated simply as follows: given a two-dimensional image, infer the objects that produced it, including their shapes, locations, colors, and sizes. The concepts of low-level and mid-level image processing and high-level image understanding are presented. Various application areas, including satellite imaging, assembly-line manufacturing, handwriting recognition, and face recognition, are discussed.

Collaboration


Dive into Neelima Shrikhande's collaborations.

Top Co-Authors

Gongzhu Hu, Central Michigan University
Sripriya Ramaswamy, Central Michigan University
Xingping Cao, Central Michigan University
Benjamin Coutts, Central Michigan University
Bryan D. Mielke, Central Michigan University
Chris Stanek, Central Michigan University
Dirk Colbry, Michigan State University
Eva Bordeaux, Central Michigan University
Jim Getzinger, Central Michigan University