
Publication


Featured research published by Ajoy Mondal.


Applied Soft Computing | 2016

Robust global and local fuzzy energy based active contour for image segmentation

Ajoy Mondal; Susmita Ghosh; Ashish Ghosh

Highlights:
- It is very difficult to segment images having high intensity inhomogeneity.
- A global and local fuzzy energy based active contour is proposed to detect objects.
- Local energy is generated by using both local spatial and gray/color information.
- It is better for images with high intensity inhomogeneity, noise and blurred edges.
- To speed up the convergence, a level set based optimization is used.

Though various image segmentation techniques have been developed, it is still a very challenging task to design a robust and efficient algorithm to segment (noisy, blurred or even discontinuous-edged) images having high intensity inhomogeneity or non-homogeneity. In this article, a robust fuzzy energy based active contour, using both global and local information, is proposed to detect objects in a given image based on curve evolution. The local energy is generated by considering both local spatial and gray level/color information. The proposed model can better deal with images having high intensity inhomogeneity or non-homogeneity, noise and blurred or discontinuous edges by incorporating a local energy term in the active contour energy function. The global energy term is used to avoid unsatisfactory results due to bad initialization. Instead of solving the Euler-Lagrange equation, a level set based optimization is used for convergence. We show a realization of the proposed method and demonstrate its performance (both qualitatively and quantitatively) with respect to state-of-the-art techniques on several images having such artifacts. Analysis of the results concludes that the proposed method detects objects from given images better than the existing ones.
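The paper's full model combines global and local fuzzy energy terms with a level-set optimization; as a rough illustration of the global term alone, the following minimal sketch minimizes a two-region fuzzy energy of the form E = Σ uᵐ(I−c₁)² + (1−u)ᵐ(I−c₂)² by alternating closed-form updates of the region means and the memberships. The function name, initialization, and parameter values are my own simplifications, not from the paper.

```python
import numpy as np

def global_fuzzy_energy_segment(image, m=2, n_iter=50):
    """Two-region segmentation minimising a global fuzzy energy
    E = sum(u**m * (I - c1)**2 + (1 - u)**m * (I - c2)**2).
    Alternates closed-form updates of region means and memberships."""
    I = image.astype(float)
    u = np.full(I.shape, 0.5)
    u[I > I.mean()] = 0.9  # rough initialisation of the fuzzy membership
    for _ in range(n_iter):
        c1 = (u**m * I).sum() / (u**m).sum()              # inside mean
        c2 = ((1 - u)**m * I).sum() / ((1 - u)**m).sum()  # outside mean
        d1 = (I - c1)**2 + 1e-12
        d2 = (I - c2)**2 + 1e-12
        u = 1.0 / (1.0 + (d1 / d2)**(1.0 / (m - 1)))  # membership minimiser
    return u > 0.5, u
```

The local energy term and the level-set evolution of the actual method are omitted; this sketch only shows why the global term makes the result insensitive to initialization.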


Applied Soft Computing | 2014

Moving object detection using Markov Random Field and Distributed Differential Evolution

Ashish Ghosh; Ajoy Mondal; Susmita Ghosh

In this article, we present an algorithm for detecting moving objects from a given video sequence. Here, spatial and temporal segmentations are combined to detect moving objects. In spatial segmentation, a multi-layer compound Markov Random Field (MRF) is used which models the spatial, temporal, and edge attributes of the image frames of a given video. Segmentation is viewed as a pixel labeling problem and is solved using the maximum a posteriori (MAP) probability estimation principle; i.e., segmentation is done by searching for a labeled configuration that maximizes this probability. We propose using a Differential Evolution (DE) algorithm with neighborhood-based mutation (termed the Distributed Differential Evolution (DDE) algorithm) for estimating the MAP of the MRF model. A window is considered over the image lattice for the mutation of each target vector of the DDE, thereby enhancing the speed of convergence. For temporal segmentation, the Change Detection Mask (CDM) is obtained by thresholding the absolute difference of two consecutive spatially segmented image frames. The intensity/color values of the original pixels of the current frame are superimposed on the changed regions of the modified CDM to extract the Video Object Planes (VOPs). To test the effectiveness of the proposed algorithm, five reference and one real-life video sequences are considered. Results of the proposed method are compared with those of four state-of-the-art techniques; the proposed method provides better spatial segmentation and better identification of the locations of moving objects.
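The temporal-segmentation step described above (CDM by thresholding, then VOP extraction) can be sketched in a few lines; the function names are my own, and the "modified CDM" post-processing from the paper is omitted.

```python
import numpy as np

def change_detection_mask(seg_prev, seg_curr, threshold=1):
    """Temporal segmentation: the CDM is obtained by thresholding the
    absolute difference of two consecutive spatially segmented (label)
    frames."""
    diff = np.abs(seg_curr.astype(int) - seg_prev.astype(int))
    return diff >= threshold

def extract_vop(frame, cdm):
    """Video Object Plane: superimpose the original pixel values of the
    current frame onto the changed regions of the CDM."""
    vop = np.zeros_like(frame)
    vop[cdm] = frame[cdm]
    return vop
```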


Soft Computing | 2016

Efficient silhouette-based contour tracking using local information

Ajoy Mondal; Susmita Ghosh; Ashish Ghosh

In this article, we present an algorithm that can efficiently track the contour extracted from the silhouette of the moving object in a given video sequence using local neighborhood information and a fuzzy k-nearest-neighbor classifier. To classify each unlabeled sample in the target frame, instead of considering the whole training set, a subset of it is considered depending on the amount of motion of the object between the immediately previous two consecutive frames. This technique makes the classification process faster and may increase the classification accuracy. Classification of the unlabeled samples in the target frame provides object (silhouette of the object) and background (non-object) regions. Transition pixels from the non-object region to the object silhouette and vice versa are treated as the boundary or contour pixels of the object. The contour of the object is extracted by connecting the boundary pixels, and the object is tracked with this contour in the target frame. We show a realization of the proposed method and demonstrate it on eight benchmark video sequences. The effectiveness of the proposed method is established by comparing it with six state-of-the-art contour tracking techniques, both qualitatively and quantitatively.
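The fuzzy k-NN classifier at the core of this tracker (in the standard Keller-style formulation) assigns class memberships as distance-weighted votes of the k nearest training samples. The sketch below shows only that classification step; the motion-dependent selection of the training subset described in the abstract is omitted, and the function name and parameters are my own.

```python
import numpy as np

def fuzzy_knn_predict(train_X, train_y, x, k=3, m=2):
    """Fuzzy k-NN: class memberships of sample x are distance-weighted
    votes of its k nearest training samples; returns (label, memberships)."""
    d = np.linalg.norm(train_X - x, axis=1) + 1e-12
    nn = np.argsort(d)[:k]                    # indices of the k nearest
    w = 1.0 / d[nn]**(2.0 / (m - 1))          # inverse-distance weights
    classes = np.unique(train_y)
    mem = np.array([w[train_y[nn] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(mem)], mem
```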


Advances in Computing and Communications | 2013

Efficient silhouette based contour tracking

Ajoy Mondal; Susmita Ghosh; Ashish Ghosh

In this article, we present an algorithm that can efficiently track the contour extracted from the silhouette of the moving object in a given video sequence using local neighborhood information and a fuzzy k-nearest-neighbor classifier. The object is represented by its silhouette as a candidate model in the candidate frame. A fuzzy k-nearest-neighbor (fuzzy k-NN) classifier is used to distinguish the object from the background. Instead of considering the whole training set, a subset of it is considered to classify each unlabeled sample in the target frame. A heuristic is suggested to generate the training subset from the corresponding neighborhood (of the candidate frame) of each unlabeled sample in the target frame, depending on the amount of motion of the object between the immediately previous two consecutive frames. This technique makes the classification process faster and may increase the classification accuracy. Classification of the unlabeled samples in the target frame provides two regions: object and background. The object region represents the silhouette of the object, and everything else represents the non-object region. Transition pixels from the non-object region to the object silhouette, or from the object silhouette to the non-object region, are treated as the boundary or contour pixels of the object. By connecting the boundary pixels, the contour of the object is extracted in the target frame; hence, the object is tracked with its contour. We show a realization of the proposed method and demonstrate it on two benchmark video sequences. The effectiveness of the proposed method is established by comparing it with two state-of-the-art contour tracking techniques, both qualitatively and quantitatively.


IEEE International Conference on Image Information Processing | 2011

Distributed differential evolution algorithm for MAP estimation of MRF model for detecting moving objects

Ajoy Mondal; Susmita Ghosh; Ashish Ghosh

In this article, spatial and temporal segmentations are combined to detect moving objects. In spatial segmentation, a compound Markov Random Field (MRF) is used for modeling the image frames. Segmentation is viewed as a pixel labeling problem and is solved using the Maximum a Posteriori (MAP) probability estimation principle; i.e., segmentation is achieved by searching for a labeled configuration that maximizes this probability. To estimate the MAP of the MRF model, we propose a new Distributed Differential Evolution (DDE) algorithm where a small window is considered over the image lattice for the mutation of each target vector of the conventional Differential Evolution (DE) algorithm. In temporal segmentation, the given video image frame is segmented into changed and unchanged regions by thresholding the absolute difference of two consecutive spatially segmented image frames. Thereafter, the Video Object Plane (VOP) is extracted by superimposing the intensity/color values of the original pixels of the current frame on the changed regions. To test the effectiveness of the proposed algorithm, one reference video sequence is considered, and the results are found to be encouraging.
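The windowed mutation that distinguishes DDE from conventional DE can be illustrated with a standard DE/rand/1 donor whose three parent vectors are drawn only from a neighborhood of the target position. This is a 1-D simplification of the 2-D image lattice; the function name and parameter values are my own, not from the paper.

```python
import numpy as np

def dde_mutation(pop, idx, window=3, F=0.5, rng=None):
    """DE/rand/1 mutation restricted to a local window: the three donor
    vectors for the target at position `idx` are drawn only from its
    neighbourhood instead of the whole population (as conventional DE
    would do), which speeds up convergence on lattice-structured data."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = max(0, idx - window), min(len(pop), idx + window + 1)
    cand = np.array([j for j in range(lo, hi) if j != idx])
    r1, r2, r3 = rng.choice(cand, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])  # DE/rand/1 donor vector
```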


Applied Soft Computing | 2018

Scaled and oriented object tracking using ensemble of multilayer perceptrons

Ajoy Mondal; Ashish Ghosh; Susmita Ghosh

Major challenges in the field of moving object tracking are handling changes in scale and orientation, background clutter, and large variations in pose with occlusion. This article presents an algorithm to track a moving object under such complex environments. Here, a discriminative model based on an ensemble of multilayer perceptrons (MLPs) is proposed to detect the object against its cluttered background. The orientation and scale of the detected object are estimated using binary image moments. The problem of object tracking is posed as a constrained optimization with respect to the location, scale and orientation of the object. Two different heuristics based on support value and confidence score are proposed to reduce drift and to detect full occlusion. Three benchmark datasets are considered for the experiments, and the proposed algorithm attains state-of-the-art performance under various conditions.
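Estimating orientation and scale from binary moments, as the abstract describes, follows the classical image-moment formulas: the major-axis angle comes from the second-order central moments and the scale from the area (zeroth moment). A minimal sketch (function name mine):

```python
import numpy as np

def orientation_and_scale(mask):
    """Orientation and scale of a binary object from image moments:
    the major-axis angle follows from the second-order central moments
    via theta = 0.5 * atan2(2*mu11, mu20 - mu02); the area (zeroth
    moment) serves as a scale estimate."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()            # centroid (first moments)
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    area = int(mask.sum())
    return theta, area
```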


International Journal of Computer Vision | 2017

Partially Camouflaged Object Tracking using Modified Probabilistic Neural Network and Fuzzy Energy based Active Contour

Ajoy Mondal; Susmita Ghosh; Ashish Ghosh

Various problems in object detection and tracking have attracted researchers to develop methodologies for solving them. Camouflage is one such challenge that makes object detection and tracking more complex. However, little attention has been given to detecting and tracking camouflaged objects due to the complexity of the problem. In this article, we propose a tracking-by-detection algorithm to detect and track camouflaged objects. To increase the separability between the camouflaged object and the background, we propose to integrate features (CIELab, histogram of oriented gradients and locally adaptive ternary pattern) from multiple cues (color, shape and texture) to represent a camouflaged object. A probabilistic neural network (PNN) is modified to construct an efficient discriminative appearance model for detecting camouflaged objects in video sequences. The large number of training patterns (many of which could be redundant) is reduced in the modified PNN based on the motion of the object. The modified PNN makes the detection process faster and also increases the detection accuracy. Due to the high visual similarity between the camouflaged object and the background, the boundary of the camouflaged object is not well defined (i.e., the boundary may be smooth and/or discontinuous). In this context, a robust fuzzy energy based active contour model using both global and local information is proposed to extract the contour (boundary) of the detected camouflaged object for tracking. We show a realization of the proposed method and demonstrate its performance (both quantitatively and qualitatively) with respect to state-of-the-art techniques on several challenging sequences. Analysis of the results concludes that the proposed technique can track camouflaged (fully or partially) objects, as well as objects in various complex environments, better than the existing ones.
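A standard (unmodified) PNN, on which the paper's detector is built, scores each class as an average of Gaussian (Parzen) kernels centred on that class's training patterns and assigns the pattern to the top-scoring class. The sketch below shows that baseline only; the paper's motion-based reduction of training patterns is omitted, and the function name and sigma value are my own.

```python
import numpy as np

def pnn_classify(train_X, train_y, x, sigma=1.0):
    """Probabilistic neural network: each class score is the average of
    Gaussian (Parzen) kernels centred on that class's training patterns;
    x is assigned to the class with the highest score."""
    d2 = ((train_X - x) ** 2).sum(axis=1)        # squared distances
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))    # Parzen kernel values
    classes = np.unique(train_y)
    scores = np.array([kernel[train_y == c].mean() for c in classes])
    return classes[np.argmax(scores)]
```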


Systems, Man and Cybernetics | 2016

Neural approach for object tracking in complex environment

Ajoy Mondal; Ashish Ghosh; Susmita Ghosh

In this article, we present an algorithm to track objects in complex environments involving large variations in scale and orientation, background clutter, illumination changes, pose variation and occlusion. A multilayer perceptron based discriminative appearance model is constructed to distinguish the objects from their cluttered backgrounds. Moments of the binary image are used to estimate the scale and orientation of the detected object. The target in the current frame is tracked by maximizing the Bhattacharyya coefficient between the distributions of the object in the target and target candidate models. Two different heuristics, based on the support value and the relative confidence score calculated from the detection result, are used to reduce the drift problem and to handle occlusion. We show a realization of the proposed method and demonstrate its performance with respect to state-of-the-art techniques on several challenging video sequences. Analysis of the results concludes that the proposed method can track objects better than the existing ones.
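The Bhattacharyya coefficient used to compare the target and candidate distributions is a standard similarity measure between normalised histograms, BC(p, q) = Σ √(pᵢ qᵢ); it equals 1 for identical distributions and 0 for distributions with disjoint support. A minimal sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (normalised
    internally): 1.0 for identical distributions, 0.0 for disjoint
    support. Tracking maximises this over candidate locations."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())
```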


International Symposium on Neural Networks | 2016

Maximum Class Boundary Criterion for supervised dimensionality reduction

K. Ramachandra Murthy; Ajoy Mondal; Ashish Ghosh

Class-wise noisy patterns may mislead the selection of relevant patterns for subspace projection, while modelling the between-class scatter of each class using the patterns that are nearer to the corresponding class decision boundary may improve the quality of feature generation. In this manuscript, a novel dimensionality reduction method, named Maximum Class Boundary Criterion (MCBC), is proposed. MCBC increases class separability by identifying the significant class-boundary and class-non-boundary patterns after the elimination of noisy patterns. The objective of MCBC is modeled such that the class-boundary patterns are pushed away from the corresponding class means and the class-non-boundary patterns are forced towards their class means. As a result, the classification performance of the extracted MCBC features is improved. An experimental study is performed on UCI machine learning and face recognition data to highlight the performance of MCBC. The results conclude that MCBC can generate more discriminative features compared to state-of-the-art dimensionality reduction methods.
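One plausible heuristic for the "class-boundary pattern" notion the abstract relies on (not the paper's exact criterion, which is not given here) is to flag a pattern whose nearest other-class neighbour is not much farther than its nearest same-class neighbour. The function name, the ratio test, and its default value are all my own assumptions.

```python
import numpy as np

def boundary_patterns(X, y, ratio=1.5):
    """Flag patterns lying near the class decision boundary: a pattern
    counts as a boundary pattern when its nearest other-class neighbour
    is within `ratio` times the distance to its nearest same-class
    neighbour (a heuristic stand-in for MCBC's boundary detection)."""
    flags = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the pattern itself
        same = d[y == y[i]].min()          # nearest same-class distance
        other = d[y != y[i]].min()         # nearest other-class distance
        flags[i] = other < ratio * same
    return flags
```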


Indian Conference on Computer Vision, Graphics and Image Processing | 2016

Prototypes based discriminative appearance model for object tracking

Ajoy Mondal; Ashish Ghosh; Susmita Ghosh

Occlusion is one of the major challenges for object tracking in real-life scenarios. Various techniques in the particle filter framework have been developed to solve this problem. This framework depends on two components: a motion model and an observation (likelihood) model. Due to the lack of an effective observation model and an efficient motion model, the problem of occlusion still remains unsolved in the tracking task. In this article, an effective observation model is proposed based on the confidence (classification) score provided by the proposed online prototype-based discriminative appearance model. This appearance model is constructed with prior knowledge of two classes (object and background) and tries to discriminate among three classes: object, background and the occluded part of the object. The considered composite motion model can handle both the motion of the object and changes in its scale. The proposed update mechanism is able to adapt to appearance changes during tracking. We show a realization of the proposed method and demonstrate its performance (both quantitatively and qualitatively) with respect to state-of-the-art techniques on several challenging sequences. Analysis of the results concludes that the proposed technique can track (fully or partially) occluded objects, as well as objects in various complex environments, better than the existing ones.
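A common way to turn a classifier's confidence score into a particle-filter observation model, consistent with the abstract's description, is to make each particle's weight grow exponentially with its score and then normalise. This is a generic sketch, not the paper's exact likelihood; the function name and the temperature parameter are my own.

```python
import numpy as np

def particle_weights(scores, lam=5.0):
    """Observation (likelihood) model sketch for a particle filter:
    convert per-particle confidence scores into normalised weights,
    with higher scores receiving exponentially larger weight."""
    w = np.exp(lam * np.asarray(scores, float))
    return w / w.sum()
```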

Collaboration


Dive into Ajoy Mondal's collaborations.

Top Co-Authors


Ashish Ghosh

Indian Statistical Institute
