Srikant Chari
University of Memphis
Publications
Featured research published by Srikant Chari.
Sensors | 2008
David J. Russomanno; Srikant Chari; Carl E. Halford
This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprising only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than 99% using a subset of the sixteen detectors on a representative dataset of 512 signatures. The prototype also includes a Web service interface so that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional, high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment across vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios.
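As a hedged illustration of this two-class setup (not the authors' code), the sketch below classifies sixteen-element detector signatures as human versus non-human; the data layout, the per-detector activation-fraction features, and the Gaussian Naive Bayes classifier are assumptions for the example.

```python
# Minimal sketch (not the authors' code): classifying 16-detector silhouette
# signatures as human vs. non-human. Assumes each signature has been resampled
# to a fixed number of time samples, giving a (time_samples x 16) binary array.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def signature_features(sig):
    """sig: 2-D binary array, rows = time samples, cols = 16 detectors.
    Returns the fraction of time each detector fired as a 16-element feature vector."""
    return sig.mean(axis=0)

# Placeholder data: 512 random signatures with labels (0 = non-human, 1 = human)
rng = np.random.default_rng(0)
signatures = rng.integers(0, 2, size=(512, 40, 16))
labels = rng.integers(0, 2, size=512)

X = np.array([signature_features(s) for s in signatures])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```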
IEEE Sensors Journal | 2010
David J. Russomanno; Srikant Chari; Eddie L. Jacobs; Carl E. Halford
A proof-of-concept, active near-IR sensor coupled with a classification algorithm that uses the Mahalanobis distance has been shown to be a feasible approach for discriminating among humans, animals, and vehicles for intelligent electronic fence applications. Analysis shows that only a sparse vertical array of detectors is required to sense minimal features from moving objects for reliable discrimination.
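The sketch below illustrates a minimum-Mahalanobis-distance classifier of the kind described; the three class labels match the abstract, but the feature dimensionality, the per-class covariance handling, and the synthetic data are assumptions.

```python
# Hedged sketch of a minimum-Mahalanobis-distance classifier; class names are
# taken from the abstract, everything else is illustrative.
import numpy as np

class MahalanobisClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        # One covariance matrix per class; small ridge term keeps it invertible.
        self.inv_covs_ = {
            c: np.linalg.inv(np.cov(X[y == c], rowvar=False) + 1e-6 * np.eye(X.shape[1]))
            for c in self.classes_
        }
        return self

    def predict(self, X):
        def dist(x, c):
            d = x - self.means_[c]
            return float(d @ self.inv_covs_[c] @ d)   # squared Mahalanobis distance
        return np.array([min(self.classes_, key=lambda c: dist(x, c)) for x in X])

# Example with synthetic 4-D features for the three classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, size=(50, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat(["human", "animal", "vehicle"], 50)
clf = MahalanobisClassifier().fit(X, y)
print(clf.predict(X[:5]))
```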
Proceedings of SPIE | 2009
Srikant Chari; Carl E. Halford; Eddie L. Jacobs; Forrest Smith; Jeremy B. Brown; David J. Russomanno
This paper presents initial object profile classification results using range- and elevation-independent features from a simulated infrared profiling sensor. The passive infrared profiling sensor was simulated using an LWIR camera. A field data collection effort to yield profiles of humans and animals is reported. Range- and elevation-independent features based on the height and width of the objects were extracted from the profiles. The profile features were then used to train and test four classification algorithms to classify objects as humans or animals. The performance of Naïve Bayesian (NB), Naïve Bayesian with Linear Discriminant Analysis (LDA+NB), K-Nearest Neighbors (K-NN), and Support Vector Machine (SVM) classifiers is compared based on classification accuracy. Results indicate that for our dataset, SVM and LDA+NB are capable of providing classification rates as high as 98.5%. For perimeter security applications where misclassification of humans as animals (true negatives) needs to be avoided, SVM and NB provide true negative rates of 0% while maintaining overall classification rates of over 95%.
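A possible way to reproduce this kind of comparison with scikit-learn is sketched below; the synthetic height/width features and the classifier settings are illustrative assumptions, not the paper's configuration.

```python
# Illustrative comparison (not the paper's code) of NB, LDA+NB, K-NN, and SVM
# on height/width-style profile features. The feature matrix here is synthetic;
# in the paper it comes from the simulated profiling sensor.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X_human = rng.normal([1.7, 0.5], 0.1, size=(100, 2))    # taller than wide
X_animal = rng.normal([0.9, 1.2], 0.1, size=(100, 2))   # wider than tall
X = np.vstack([X_human, X_animal])
y = np.array([1] * 100 + [0] * 100)                     # 1 = human, 0 = animal

models = {
    "NB": GaussianNB(),
    "LDA+NB": make_pipeline(LinearDiscriminantAnalysis(n_components=1), GaussianNB()),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```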
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Srikant Chari; Carl E. Halford; Eddie L. Jacobs
Human target identification performance based on target silhouettes is measured and compared to that of complete targets. The target silhouette identification performance of automated region-based and contour-based shape identification algorithms is also compared. The region-based algorithms of interest are the Zernike Moment Descriptor (ZMD), Geometric Moment Descriptor (GMD), and Grid Descriptor (GD), while the contour-based algorithms considered are the Fourier Descriptor (FD), Multiscale Fourier Descriptor (MFD), and Curvature Scale Space Descriptor (CS). The results from the human perception experiments indicate that at high levels of degradation, human identification of targets based on silhouettes is better than that of complete targets. The shape recognition algorithm comparison shows that GD performs best, very closely followed by ZMD. In general, region-based shape algorithms perform better than contour-based shape algorithms.
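As an example of the contour-based family mentioned above, the sketch below computes simple Fourier descriptors from an ordered silhouette boundary; the normalization choices and the toy circle/ellipse contours are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of a basic Fourier descriptor: treat the ordered boundary points
# of a closed silhouette as a complex signal, take its FFT, and keep the
# low-frequency magnitudes after simple invariance normalizations.
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=8):
    """contour_xy: (N, 2) array of ordered boundary points of a closed shape."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    F = np.fft.fft(z)
    # Keep the n_coeffs lowest positive and negative frequencies; skipping the DC
    # term gives translation invariance, taking magnitudes gives rotation and
    # start-point invariance, and dividing by the largest magnitude gives scale
    # invariance.
    low = np.concatenate([F[1:n_coeffs + 1], F[-n_coeffs:]])
    mags = np.abs(low)
    return mags / mags.max()

# Example: a circle and an elongated ellipse produce clearly different descriptors.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[3 * np.cos(t), np.sin(t)]
print(np.round(fourier_descriptors(circle), 3))
print(np.round(fourier_descriptors(ellipse), 3))
```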
Proceedings of SPIE, the International Society for Optical Engineering | 2005
Srikant Chari; Jonathan D. Fanning; S. M. Salem; Aaron L. Robinson; Carl E. Halford
This study determines the effectiveness of a number of image fusion algorithms through the use of the following image metrics: mutual information, fusion quality index, weighted fusion quality index, edge-dependent fusion quality index, and the Mannos-Sakrison filter. The results obtained from this study provide objective comparisons between the algorithms. It is postulated that multi-spectral sensors enhance the probability of target discrimination through the additional information available from the multiple bands. The results indicate that more information is present in the fused image than in either single-band image. The image quality metrics quantify the benefits of fusing MWIR and LWIR imagery.
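One of the metrics above, mutual information, can be computed from a joint histogram as sketched below; the histogram bin count and the placeholder MWIR/LWIR/fused arrays are assumptions for the example.

```python
# Hedged sketch of mutual information between a fused image and one source band,
# computed from a joint intensity histogram. The arrays stand in for registered
# MWIR, LWIR, and fused frames.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in bits) between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                      # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Example: a fused image shares information with each of its source bands.
rng = np.random.default_rng(3)
mwir = rng.random((128, 128))
lwir = 0.5 * mwir + 0.5 * rng.random((128, 128))      # correlated stand-in for LWIR
fused = 0.5 * (mwir + lwir)                           # trivial averaging "fusion"
print("MI(fused, MWIR):", mutual_information(fused, mwir))
print("MI(fused, LWIR):", mutual_information(fused, lwir))
```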
Electro-Optical and Infrared Systems: Technology and Applications VI | 2009
Eddie L. Jacobs; Srikant Chari; Carl E. Halford; Harry McClellan
It has been shown that useful classifications can be made with a sensor that detects the shape of moving objects. This type of sensor has been referred to as a profiling sensor. In this research, two configurations of pyroelectric detectors are considered for use in a profiling sensor: a linear array and a circular array. The linear array produces crude images representing the shape of objects moving through the field of view. The circular array produces a temporal motion vector. A simulation of the output of each detector configuration is created and used to generate simulated profiles. The simulation is performed by convolving the pyroelectric detector response with images derived from calibrated thermal infrared video sequences. Profiles derived from these simulations are then used to train and test classification algorithms. The classification algorithms examined in this study include a naive Bayesian (NB) classifier and linear discriminant analysis (LDA). Each classification algorithm assumes a three-class problem in which profiles are classified as human, animal, or vehicle. Simulation results indicate that these systems can reliably classify the outputs of such sensors, which can be used in applications involving border or perimeter security.
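A minimal sketch of the simulation step is given below, assuming each detector's time series is taken from a thermal video sequence and convolved with an approximate pyroelectric impulse response; the derivative-of-exponential response and the placeholder video are illustrative, not the calibrated model used in the paper.

```python
# Hedged sketch: simulate a vertical linear array of pyroelectric detectors by
# sampling one image column of a thermal video over time and convolving each
# detector's time series with a crude pyroelectric impulse response.
import numpy as np
from scipy.signal import fftconvolve

def pyro_impulse_response(fs, tau=0.1, length_s=1.0):
    """Crude response model: discrete derivative of an exponential decay with
    time constant tau (seconds), sampled at fs Hz. Illustrative only."""
    t = np.arange(0, length_s, 1.0 / fs)
    return np.diff(np.exp(-t / tau), prepend=0.0)

def simulate_linear_array(video, rows, col, fs):
    """video: (frames, H, W) thermal sequence; rows: pixel rows sampled by the
    vertical detector elements; col: the image column the array stares at."""
    h = pyro_impulse_response(fs)
    # One filtered temporal signal per detector element.
    return np.stack([fftconvolve(video[:, r, col], h, mode="same") for r in rows])

# Placeholder video: 300 frames of 64x64 "thermal" imagery at 30 Hz
rng = np.random.default_rng(4)
video = rng.random((300, 64, 64))
profiles = simulate_linear_array(video, rows=np.arange(0, 64, 8), col=32, fs=30)
print(profiles.shape)   # (number of detectors, number of frames)
```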
Optical Engineering | 2014
Srikant Chari; Eddie L. Jacobs; Divya Choudhary
This paper presents a proof-of-concept sensor system based on a linear array of pyroelectric detectors for the recognition of moving objects. The utility of this prototype sensor is demonstrated by its use in trail monitoring and perimeter protection applications for classifying humans versus animals, with object motion transverse to the field of view of the sensor array. Data acquisition with the system was performed over varied terrain and with a wide variety of animals and humans. With the objective of eventually porting the algorithms onto a low-resource computational platform, simple signal processing, feature extraction, and classification techniques are used. The object recognition algorithm uses a combination of geometric and texture features to provide limited insensitivity to range and speed. Analysis of system performance shows its effectiveness in discriminating humans from animals with high classification accuracy.
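The sketch below shows the general flavor of such low-cost geometric and texture features extracted from a binary profile; the specific features (aspect ratio, fill ratio, and two roughness measures) are illustrative stand-ins rather than the paper's feature set.

```python
# Hedged sketch of simple geometric and texture-like features from a binary
# profile image (rows = detector elements, cols = time samples).
import numpy as np

def profile_features(profile):
    rows, cols = np.nonzero(profile)
    if rows.size == 0:
        return np.zeros(4)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    aspect = height / width                                   # geometric: height-to-width ratio
    fill = profile.sum() / (height * width)                   # geometric: how solid the silhouette is
    col_profile = profile.sum(axis=0)                         # occupancy along the time axis
    roughness = np.std(np.diff(col_profile))                  # texture-like: outline roughness
    transitions = np.mean(np.abs(np.diff(profile, axis=1)))   # texture-like: edge density
    return np.array([aspect, fill, roughness, transitions])

# Example on a toy upright "human-like" blob
profile = np.zeros((16, 40), dtype=int)
profile[2:14, 15:22] = 1
print(profile_features(profile))
```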
Proceedings of SPIE, the International Society for Optical Engineering | 2010
William E. White; Jeremy B. Brown; Srikant Chari; Eddie L. Jacobs
Pyroelectric linear arrays can be used to generate profiles of targets. Simulations have shown that the generated profiles can be used to classify human and animal targets. A pyroelectric array system was used to collect data and classify targets as either human or non-human in real time. The system consists of a 128-element Dias 128LTI pyroelectric linear array, an F/0.86 germanium lens, and a PIC18F4550 microcontroller for A/D conversion and communication. The classifier used for object recognition was trained using data collected in petting zoos and tested using data collected at the US-Mexico border in Arizona.
Digital Wireless Communications Conference | 2003
Srikant Chari
IEEE 802.11g WLANs operating at 2.4 GHz face interference from Bluetooth, which uses the same frequency band. IEEE 802.11g is an orthogonal frequency division multiplexing (OFDM) based WLAN standard. Adaptive subcarrier selection (ASuS) uses feedback from the receiver to dynamically allocate subcarriers for OFDM transmission. This paper proposes a method to avoid Bluetooth interference using ASuS. By adaptively choosing subcarriers for OFDM transmission, the frequencies used by Bluetooth can be avoided. Power-level deviations in small groups of contiguous subcarriers, relative to the other subcarriers in an OFDM symbol, can be used as an indication of Bluetooth interference. Simulations show that, compared with conventional OFDM transmission, adaptive subcarrier selection results in a significant reduction in the packet error rate.
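A simplified sketch of the interference-detection idea is shown below: contiguous groups of subcarriers whose power deviates from the rest of the symbol are flagged and excluded from the subcarrier set used for the next transmission. The group size and threshold are assumed values, not those from the paper's simulations.

```python
# Hedged sketch: flag small contiguous groups of subcarriers whose received
# power sticks out above the rest of the OFDM symbol, then keep only the
# remaining subcarriers available for adaptive selection.
import numpy as np

def flag_interfered_subcarriers(rx_power_db, group_size=4, threshold_db=6.0):
    """rx_power_db: per-subcarrier received power (dB) for one OFDM symbol.
    Returns a boolean mask of subcarriers to avoid."""
    n = rx_power_db.size
    median = np.median(rx_power_db)
    avoid = np.zeros(n, dtype=bool)
    for start in range(0, n, group_size):
        group = rx_power_db[start:start + group_size]
        if np.mean(group) - median > threshold_db:   # group power well above the rest
            avoid[start:start + group_size] = True
    return avoid

# Example: 52 data subcarriers with a narrowband interferer on subcarriers 20-23
rng = np.random.default_rng(5)
power = rng.normal(0.0, 1.0, size=52)
power[20:24] += 12.0
mask = flag_interfered_subcarriers(power)
print("avoid:", np.flatnonzero(mask))
print("subcarriers available for adaptive selection:", int((~mask).sum()))
```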
Proceedings of SPIE | 2011
Jeremy B. Brown; Srikant Chari; Eddie L. Jacobs
Profiling sensor systems have been shown to be effective for detecting humans and discriminating them from animals. A profiling sensor with a 360-degree horizontal field of view was used to generate profiles of humans and animals for classification. The sensor system contains a long-wave infrared camera focused on a smooth conical mirror to provide the 360-degree field of view. Human and animal targets were detected at 30 meters, and an approximate height-to-width ratio was extracted for each target. Targets were tracked over multiple frames in order to segment them from the background. The average height-to-width ratio was used as a single feature for classification. The Mahalanobis distance was calculated for each target in this single-feature space to provide classification results.
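With a single feature, the Mahalanobis distance reduces to the absolute deviation from each class mean scaled by that class's standard deviation, as in the hedged sketch below; the training ratios are placeholders, not measured values.

```python
# Minimal sketch of single-feature Mahalanobis classification: for one feature
# (average height-to-width ratio) the distance to a class is |x - mean| / std.
import numpy as np

train_human = np.array([2.6, 2.9, 3.1, 2.8, 3.0])    # H/W ratios (illustrative)
train_animal = np.array([0.7, 0.9, 1.1, 0.8, 1.0])

stats = {
    "human": (train_human.mean(), train_human.std(ddof=1)),
    "animal": (train_animal.mean(), train_animal.std(ddof=1)),
}

def classify(ratio):
    """Assign the class with the smallest 1-D Mahalanobis distance."""
    return min(stats, key=lambda c: abs(ratio - stats[c][0]) / stats[c][1])

for r in (2.7, 1.2):
    print(r, "->", classify(r))
```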