Publication


Featured research published by Mohan Malkani.


Southeastern Symposium on System Theory | 1998

LabVIEW program design for on-line data acquisition and predictive maintenance

Sujatha Srinivasan; M. Bodruzzaman; Amir Shirkhodaie; Mohan Malkani

An online design for data acquisition and predictive maintenance of a fan-motor system, implemented in the graphical programming language LabVIEW, is presented. Data sets were created for different faults at varying rpm levels by combining synthetically generated fault data with normal baseline data acquired from the fan-motor system. The data were processed and the extracted features were fed to a two-layer backpropagation neural network. The design is intended to run online.
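The LabVIEW code itself is not reproduced here; as a rough stand-in, the sketch below shows the same idea in Python/NumPy: a two-layer backpropagation network trained on feature vectors. The feature dimensions, labels, and learning rate are invented for illustration.

```python
# Minimal two-layer backpropagation classifier sketch (NumPy), standing in for the
# LabVIEW implementation described above. Feature vectors and fault labels are
# synthetic placeholders, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                                    # 200 extracted feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)   # fault / no-fault label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# two-layer network: 8 inputs -> 6 hidden units -> 1 output
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network output (fault probability)
    # backpropagate the squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

print("training accuracy:", ((out > 0.5) == y).mean())
```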


Southeastern Symposium on System Theory | 1998

Feature extraction using wavelet transform for neural network based image classification

A.N. Sarlashkar; M. Bodruzzaman; Mohan Malkani

To design an image classification or recognition scheme whose robustness approaches that of the human biological recognition system, two factors must be taken into account: the scheme must automatically extract global properties of the images, and it must filter out variations such as scaling and rotation. Wavelet transforms of the images with the high-frequency components truncated appear to meet both conditions: low-frequency components are spread in the time domain and can be treated as global properties, while high-frequency components, which are concentrated in the time domain, can be discarded. Information at different resolution scales provided by wavelet features leads to highly discriminating, robust classifiers, since wavelets can examine data at different scales and frequencies. The theory behind wavelets and their suitability for classification is discussed, the feature extraction and the implementation of the wavelet transform are described, and results of the feature extraction are given.
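As a minimal sketch of the wavelet-feature idea, the snippet below keeps only the low-frequency approximation of an image as a feature vector, using PyWavelets; the 'haar' wavelet, two-level decomposition, and random test image are illustrative assumptions, not the paper's settings.

```python
# Sketch of the wavelet feature idea: keep the low-frequency approximation of an
# image and discard the high-frequency detail bands.
import numpy as np
import pywt

image = np.random.rand(64, 64)                  # stand-in for a grayscale image

coeffs = pywt.wavedec2(image, wavelet='haar', level=2)
approx = coeffs[0]                              # low-frequency (global) component
feature_vector = approx.ravel()                 # feed this to a neural-network classifier

print("image size:", image.shape, "-> feature length:", feature_vector.size)
```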


Journal of Nanotechnology | 2014

Nanosensor Data Processor in Quantum-Dot Cellular Automata

Fenghui Yao; Mohamed Saleh Zein-Sabatto; Guifeng Shao; Mohammad Bodruzzaman; Mohan Malkani

Quantum-dot cellular automata (QCA) is an attractive nanotechnology and a potential alternative to CMOS technology. QCA offers faster speed, smaller size, and lower power consumption than transistor-based technology, in both communication and computation. This paper describes the design of a 4-bit multifunction nanosensor data processor (NSDP). The functions of the NSDP include (i) sending the preprocessed raw data to a high-level processor, (ii) counting the number of active majority gates, and (iii) generating an approximate sigmoid function. The whole system is designed and simulated with several different input data sets.
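For orientation, the snippet below sketches the three-input majority vote that underlies QCA logic and counts how many gates in a tiny, invented netlist are active for a given 4-bit input; it is not the NSDP design itself.

```python
# Toy sketch of QCA-style logic: the three-input majority vote M(a, b, c) is the
# basic QCA gate (AND and OR are special cases with one input fixed). The small
# "netlist" below is invented purely to illustrate counting active majority gates.
def majority(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

def count_active(bits):
    # hypothetical 4-bit input word feeding three majority gates
    a, b, c, d = bits
    gates = [majority(a, b, c), majority(b, c, d), majority(a, c, d)]
    return sum(gates)

print(count_active([1, 0, 1, 1]))   # number of gates whose output is 1
```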


Proceedings of SPIE | 1993

Speaker recognition using neural network and adaptive wavelet transform

Mohammad Bodruzzaman; Xingkang Li; Kah Eng Kuah; Lamar Crowder; Mohan Malkani; Harold H. Szu; Brian A. Telfer

The same word uttered by different people has different waveforms. It has also been observed that the same word uttered by the same person has a different waveform at different times; this difference can be characterized by time-domain dilation effects in the waveform. In our experiment a set of words was selected and each word was uttered eight times by five different speakers. The objective of this work is to extract a wavelet basis function for the speech data generated by each individual speaker. The wavelet filter coefficients are then used as a feature set and fed into a neural network-based speaker recognition system. This work cascades a wavelet network (wavenet) for feature extraction with a neural network (neural-net) for classification, applied to speaker recognition. The results are very promising and show good prospects for coupling wavelet networks with neural networks.
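A rough sketch of the wavelet-plus-classifier pipeline is shown below: per-band wavelet coefficient energies serve as speaker features and a nearest-neighbour rule stands in for the neural network. The signals, the 'db4' wavelet, and the matching rule are placeholders, not the adaptive wavelet network of the paper.

```python
# Sketch of the wavelet-feature idea for speaker recognition: decompose each
# utterance with a 1-D wavelet transform and use the per-band coefficient
# energies as a feature vector.
import numpy as np
import pywt

def wavelet_features(signal, wavelet='db4', level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per band

rng = np.random.default_rng(1)
# stand-in "utterances" for two enrolled speakers and one test utterance
enrolled = {name: wavelet_features(rng.normal(size=4000)) for name in ['spk_a', 'spk_b']}
test = wavelet_features(rng.normal(size=4000))

# nearest-neighbour decision over the enrolled speakers
best = min(enrolled, key=lambda n: np.linalg.norm(enrolled[n] - test))
print("closest enrolled speaker:", best)
```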


EURASIP Journal on Image and Video Processing | 2010

Real-time multiple moving targets detection from airborne IR imagery by dynamic Gabor filter and dynamic Gaussian detector

Fenghui Yao; Guifeng Shao; Ali Sekmen; Mohan Malkani

This paper presents a robust approach to detecting multiple moving targets in aerial infrared (IR) image sequences. The proposed method is based on a dynamic Gabor filter and a dynamic Gaussian detector. First, the motion induced by the airborne platform is modeled by a parametric affine transformation and the IR video is stabilized by eliminating the background motion: a set of feature points is extracted and categorized into inliers and outliers, where the inliers are used to estimate the affine transformation parameters and the outliers are used to localize moving targets. Then, a dynamic Gabor filter is employed to enhance the difference images for more accurate detection and localization of moving targets; the Gabor filter's orientation is dynamically changed according to the orientation of the optical flow. Next, the specular highlights generated by the dynamic Gabor filter are detected. The outliers and specular highlights are fused to identify the moving targets: if a specular highlight lies in an outlier cluster, it corresponds to a target; otherwise, the dynamic Gaussian detector is employed to determine whether the specular highlight corresponds to a target. The detection speed is approximately 2 frames per second, which meets the real-time requirement of many target tracking systems.
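The snippet below sketches the "dynamic Gabor filter" idea with OpenCV: a dominant optical-flow direction is estimated between two stabilized frames and a Gabor kernel is oriented along it before filtering the difference image. The frames and filter parameters are placeholders, not the paper's settings.

```python
# Orient a Gabor kernel along the dominant optical-flow direction and use it to
# enhance the frame-difference image (a rough sketch of the dynamic Gabor filter).
import numpy as np
import cv2

prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in IR frames
curr = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
theta = float(np.arctan2(flow[..., 1].mean(), flow[..., 0].mean()))  # dominant flow angle

kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                            lambd=10.0, gamma=0.5)
diff = cv2.absdiff(curr, prev).astype(np.float32)
enhanced = cv2.filter2D(diff, -1, kernel)        # enhanced difference image
print("enhanced difference image:", enhanced.shape)
```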


International Conference on Information and Automation | 2008

Multiple moving target detection, tracking, and recognition from a moving observer

Fenghui Yao; Ali Sekmen; Mohan Malkani

This paper describes an algorithm for detection, tracking, and recognition of multiple moving targets from a moving observer. When the camera is placed on a moving observer, the whole background of the scene appears to be moving, and the actual motion of the targets must be distinguished from the background motion. To do this, an affine motion model between consecutive frames is estimated, after which the moving targets can be extracted. Next, target tracking employs a similarity measure based on the joint feature-spatial space. Finally, target recognition is performed by matching moving targets against a target database. The average processing time is 680 ms per frame, which corresponds to a processing rate of about 1.5 frames per second. The algorithm was tested on the Vivid datasets provided by the Air Force Research Laboratory, and experimental results show that the method is efficient and fast enough for real-time application.
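The sketch below illustrates the background-motion compensation step with standard OpenCV calls: feature points are tracked between frames, an affine model is fitted with RANSAC (inliers approximate the background, outliers hint at independent movers), and the residual difference image exposes candidate targets. The synthetic frames and parameters are placeholders.

```python
# Background-motion compensation from a moving observer: track features, fit an
# affine model with RANSAC, warp the previous frame, and difference the frames.
import numpy as np
import cv2

base = cv2.GaussianBlur(np.random.randint(0, 256, (240, 320), dtype=np.uint8), (7, 7), 0)
prev = base
curr = np.roll(base, shift=(2, 3), axis=(0, 1))   # simulated camera-induced shift

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# affine model of the camera motion; RANSAC outliers hint at independent movers
M, inlier_mask = cv2.estimateAffine2D(good_prev, good_curr, method=cv2.RANSAC)
stabilized_prev = cv2.warpAffine(prev, M, (curr.shape[1], curr.shape[0]))
motion_map = cv2.absdiff(curr, stabilized_prev)   # residual motion after compensation
print("inlier ratio:", float(inlier_mask.mean()))
```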


International Conference on Information and Automation | 2008

Aerial image registration based on joint feature-spatial spaces, curve and template matching

Guifeng Shao; Fenghui Yao; Mohan Malkani

Image registration has wide applications in remote sensing, medicine, cartography, and computer vision. This paper describes a method for aerial image registration based on joint feature-spatial spaces, curve matching, and template matching. The algorithm consists of (i) segmentation and region representation, (ii) region matching based on the joint feature-spatial space, (iii) transformation model estimation based on curve matching and template matching, and (iv) image transformation. Experimental results show the effectiveness of the algorithm.
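As one small illustration of step (iii), the snippet below locates a patch of one aerial image inside another with normalized cross-correlation template matching in OpenCV; the synthetic image and patch are placeholders, and the curve-matching stage is not reproduced.

```python
# Locate a template patch inside a reference image with normalized cross-correlation.
import numpy as np
import cv2

reference = cv2.GaussianBlur(
    np.random.randint(0, 256, (300, 400), dtype=np.uint8), (5, 5), 0)
template = reference[100:140, 150:200]            # patch cut from the reference

result = cv2.matchTemplate(reference, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match at", max_loc, "score", round(float(max_val), 3))  # expect (150, 100)
```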


Robotica | 2009

Smart video surveillance for airborne platforms

Ali Sekmen; Fenghui Yao; Mohan Malkani

This paper describes real-time computer vision algorithms for detection, identification, and tracking of moving targets in video streams generated by a moving airborne platform. Moving platforms cause instabilities in image acquisition due to factors such as disturbances and the ego-motion of the camera, which distorts the actual motion of the moving targets. When the camera is mounted on a moving observer, the entire scene (background and targets) appears to be moving, and the actual motion of the targets must be separated from the background motion. The motion of the airborne platform is modeled as an affine transformation and its parameters are estimated using corresponding feature sets in consecutive images. After the motion is compensated, the platform is treated as stationary and moving targets are detected accordingly. A number of tracking algorithms, including particle filters, mean-shift, and connected components, were implemented and compared. A cascaded boosted classifier with Haar wavelet feature extraction for moving target classification was developed and integrated with the recognition system, which uses joint feature-spatial distributions. The integrated smart video surveillance system has been successfully tested on the Vivid datasets provided by the Air Force Research Laboratory. The experimental results show that the system can operate in real time and successfully detect, track, and identify multiple targets in the presence of partial occlusion.
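Of the tracker families mentioned, mean-shift is the simplest to sketch; the snippet below tracks a target window by back-projecting its hue histogram into the next frame using OpenCV. The frames and initial window are placeholders, not the paper's data.

```python
# Mean-shift tracking sketch: build a hue histogram of the target window, back-project
# it into the next frame, and let mean-shift relocate the window.
import numpy as np
import cv2

frame0 = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
frame1 = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
track_window = (140, 100, 40, 40)                 # x, y, w, h of the target

x, y, w, h = track_window
hsv0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv0[y:y+h, x:x+w]], [0], None, [16], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

hsv1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2HSV)
back_proj = cv2.calcBackProject([hsv1], [0], roi_hist, [0, 180], 1)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
_, track_window = cv2.meanShift(back_proj, track_window, criteria)
print("updated target window:", track_window)
```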


2009 IEEE Symposium on Computational Intelligence for Multimedia Signal and Vision Processing | 2009

Image registration for sequence of visual images captured by UAV

Matthew I. McCartney; Saleh Zein-Sabatto; Mohan Malkani

Unmanned aerial vehicles (UAVs) are regularly outfitted with payloads that include high-resolution surveillance cameras. These surveillance systems have given the military the ability to monitor battlefields and remote terrain, carry out reconnaissance missions, and track targets, all from distant ground stations and without endangering UAV operators. As with any remote sensing technology, technical challenges exist. A common problem is that large sequences of images containing significant redundancy are sent to the ground station processing systems and operators; consequently, the ground station systems can become "weighed down" and operators can become overwhelmed. In addition, the decision making of detection and recognition algorithms can be rendered ineffective by the limited field of view of individual image frames. In this research, image registration software is designed and implemented to integrate a sequence of visual images using a modified Scale-Invariant Feature Transform (SIFT) algorithm. Since computational efficiency is critical in any real-time interactive system, a modified version of the SIFT algorithm is devised and used in this work. Implementation and testing results of the developed software are obtained from real data collected from aerial footage and a collaborative camera network.
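The modified SIFT variant is not described in enough detail to reproduce, but the snippet below sketches the standard registration pipeline it builds on: SIFT keypoints, ratio-test matching, and a RANSAC homography, using OpenCV. The synthetically shifted test image is a placeholder.

```python
# Frame-to-frame registration with OpenCV's standard SIFT (not the paper's modified
# variant). The second image is a shifted copy of the first so that matches exist.
import numpy as np
import cv2

img1 = cv2.GaussianBlur(
    np.random.randint(0, 256, (300, 400), dtype=np.uint8), (5, 5), 0)
img2 = np.roll(img1, shift=(10, 15), axis=(0, 1))   # simulated camera motion

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# ratio-test matching, then a RANSAC homography relating the two frames
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated translation:", H[0, 2], H[1, 2])    # expect roughly (15, 10)
```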


Proceedings of SPIE | 2010

Sensor agnostics for networked MAV applications

Atindra K. Mitra; Miguel Gates; Chris Barber; Thomas Goodwin; Rastko R. Selmic; Raúl Ordóñez; Ali Sekman; Mohan Malkani

A number of potential advantages of a new concept, denoted Sensor Agnostic Networks, are discussed. The primary focus of this paper is on integrated wireless networks that contain one or more MAVs (micro unmanned aerial vehicles). The development and presentation include several approaches to the analysis and design of Sensor Agnostic Networks, based on the assumption of canonically structured architectures composed of low-cost wireless sensor node technologies. A logical development is provided that motivates the potential adoption of distributed low-cost sensor networks that leverage state-of-the-art wireless technologies and are specifically designed with pre-determined hooks, or facets, in place to allow quick and efficient swaps between low-cost RF sensors, EO sensors, and chem/bio sensors. All of the sample design synthesis procedures provided in this paper conform to the structural low-cost electronic wireless network architectural constraints adopted for this new approach to generalized sensing applications via the conscious integration of sensor-agnostic capabilities.

Collaboration


Mohan Malkani's top co-authors and their affiliations:

M. Bodruzzaman, Tennessee State University
Fenghui Yao, Tennessee State University
Ali Sekmen, Tennessee State University
Atindra K. Mitra, Air Force Research Laboratory
G. Yuen, Tennessee State University
Amir Shirkhodaie, Tennessee State University
Guifeng Shao, Tennessee State University