Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jamuna Kanta Sing is active.

Publication


Featured research published by Jamuna Kanta Sing.


Applied Soft Computing | 2007

Face recognition using point symmetry distance-based RBF network

Jamuna Kanta Sing; Dipak Kumar Basu; Mita Nasipuri; Mahantapas Kundu

In this paper, a face recognition technique using a radial basis function neural network (RBFNN) is presented. The centers of the hidden-layer units of the RBFNN are selected using a heuristic approach with the point symmetry distance as the similarity measure. The performance of the method has been evaluated on the AT&T (formerly ORL) face database, first without and then with rejection criteria. The experimental results show that the method achieves excellent performance, both in terms of recognition rates and learning efficiency. The average recognition rates, obtained using 10 different permutations of 1, 3 and 5 training images per subject, are 76.06%, 92.61% and 97.20%, respectively, when tested without any rejection criteria. By imposing rejection criteria, the average recognition rates become 99.34%, 99.80% and 99.93%, respectively, for the same permutations of training images. The system recognizes a face within about 22 ms on a low-cost computing system with a 450 MHz P-III processor, extending its capability to identifying faces within the inter-frame periods of video, i.e., in real time.
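
As a rough illustration of the similarity measure involved, the sketch below (Python/NumPy) computes the point symmetry distance in its standard form and the Gaussian responses of an RBF hidden layer; the paper's center-selection heuristic itself is not reproduced, and the function names are illustrative.

```python
import numpy as np

def point_symmetry_distance(x, c, others):
    """Point-symmetry distance of pattern x w.r.t. a candidate centre c.
    `others` holds the remaining patterns; the distance is small when some
    other pattern lies roughly at the mirror image of x through c."""
    num = np.linalg.norm((x - c) + (others - c), axis=1)
    den = np.linalg.norm(x - c) + np.linalg.norm(others - c, axis=1) + 1e-12
    return float(np.min(num / den))

def gaussian_rbf_layer(x, centers, widths):
    """Hidden-layer responses of a Gaussian RBF network for one input vector."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-(d ** 2) / (2.0 * widths ** 2))
```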


computer vision and pattern recognition | 2009

Face recognition by fusion of local and global matching scores using DS theory: An evaluation with uni-classifier and multi-classifier paradigm

Dakshina Ranjan Kisku; Massimo Tistarelli; Jamuna Kanta Sing; Phalguni Gupta

Faces are highly deformable objects which may easily change their appearance over time. Not all face areas are subject to the same variability. Therefore, decoupling the information from independent areas of the face is of paramount importance for improving the robustness of any face recognition technique. This paper presents a robust face recognition technique based on the extraction and matching of SIFT features related to independent face areas. Both a global and a local (recognition-from-parts) matching strategy are proposed. The local strategy is based on matching individual salient facial SIFT features connected to facial landmarks such as the eyes and the mouth. In the global matching strategy, all SIFT features are combined to form a single feature. In order to reduce identification errors, the Dempster-Shafer decision theory is applied to fuse the two matching techniques. The proposed algorithms are evaluated on the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition techniques, also in the case of partially occluded faces or missing information.
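
The fusion step can be pictured with Dempster's rule of combination over the frame {genuine, impostor}. The sketch below is a hedged illustration: how the local and global SIFT matching scores are converted into mass functions is an assumption, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {genuine, impostor}.
    Each mass function is a dict with keys 'G', 'I' and 'GI' (ignorance)."""
    # Conflict: one source says genuine while the other says impostor.
    K = m1['G'] * m2['I'] + m1['I'] * m2['G']
    norm = 1.0 - K
    return {
        'G':  (m1['G'] * m2['G'] + m1['G'] * m2['GI'] + m1['GI'] * m2['G']) / norm,
        'I':  (m1['I'] * m2['I'] + m1['I'] * m2['GI'] + m1['GI'] * m2['I']) / norm,
        'GI': (m1['GI'] * m2['GI']) / norm,
    }

# Toy example: local matcher is fairly confident, global matcher less so.
m_local  = {'G': 0.70, 'I': 0.10, 'GI': 0.20}
m_global = {'G': 0.55, 'I': 0.15, 'GI': 0.30}
fused = dempster_combine(m_local, m_global)
accept = fused['G'] > fused['I']
```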


Applied Soft Computing | 2015

Conditional spatial fuzzy C-means clustering algorithm for segmentation of MRI images

Sudip Kumar Adhikari; Jamuna Kanta Sing; Dipak Kumar Basu; Mita Nasipuri

A conditional spatial fuzzy C-means (csFCM) clustering algorithm to improve the robustness of the conventional FCM algorithm is presented. The method incorporates conditioning effects and spatial information into the membership functions. The algorithm resolves the problem of sensitivity to noise and intensity inhomogeneity in magnetic resonance imaging (MRI) data. The experimental results on four volumes of simulated and one volume of real-patient MRI brain images, each having 51 images, support the efficiency of the csFCM algorithm. The csFCM algorithm shows superior performance in qualitative and quantitative studies of the segmentation results compared with the k-means, FCM and some other recently proposed FCM-based algorithms.

The fuzzy C-means (FCM) algorithm has gained significant importance in medical image segmentation due to its unsupervised form of learning and its greater tolerance to variations and noise compared with other methods. In this paper, we propose a conditional spatial fuzzy C-means (csFCM) clustering algorithm to improve the robustness of the conventional FCM algorithm. This is achieved by incorporating into the membership functions both spatial information and the conditioning effect imposed by an auxiliary (conditional) variable corresponding to each pixel, which describes the pixel's level of involvement in the constructed clusters. The problem of sensitivity to noise and intensity inhomogeneity in magnetic resonance imaging (MRI) data is effectively reduced by incorporating local and global spatial information into a weighted membership function. The experimental results on four volumes of simulated and one volume of real-patient MRI brain images, each having 51 images, show that the csFCM algorithm has superior performance on the image segmentation results, in terms of qualitative and quantitative studies such as cluster validity functions, segmentation accuracy, tissue segmentation accuracy and receiver operating characteristic (ROC) curves, compared with the k-means, FCM and some other recently proposed FCM-based algorithms.
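
For intuition about the conditioning effect, the following minimal sketch implements a Pedrycz-style conditional FCM in Python/NumPy, where the auxiliary variable f bounds each pixel's total membership; the spatial weighting term of csFCM is deliberately omitted, so this is not the authors' full algorithm.

```python
import numpy as np

def conditional_fcm(X, f, n_clusters=3, m=2.0, n_iter=50, eps=1e-8):
    """Conditional fuzzy C-means: the auxiliary variable f[k] caps the total
    membership of pixel k, so its memberships sum to f[k] rather than to 1."""
    n, _ = X.shape
    rng = np.random.default_rng(0)
    V = X[rng.choice(n, n_clusters, replace=False)].astype(float)   # initial centres
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))
        U = f[:, None] * inv / inv.sum(axis=1, keepdims=True)       # conditional memberships
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                    # centre update
    return U, V
```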


Applied Soft Computing | 2011

Face recognition by generalized two-dimensional FLD method and multi-class support vector machines

Shiladitya Chowdhury; Jamuna Kanta Sing; Dipak Kumar Basu; Mita Nasipuri

This paper presents a novel scheme for feature extraction, namely the generalized two-dimensional Fisher's linear discriminant (G-2DFLD) method, and its use for face recognition with multi-class support vector machines as the classifier. The G-2DFLD method is an extension of the 2DFLD method for feature extraction. Like the 2DFLD method, it operates directly on the original 2D image matrix. However, unlike the 2DFLD method, which maximizes class separability from either the row or the column direction, the G-2DFLD method maximizes class separability from both the row and column directions simultaneously. To realize this, two alternative Fisher criteria have been defined, corresponding to the row-wise and column-wise projection directions. Unlike in the 2DFLD method, the principal components extracted from an image matrix by the G-2DFLD method are scalars, yielding a much smaller image feature matrix. The proposed G-2DFLD method was evaluated on two popular face recognition databases, the AT&T (formerly ORL) and UMIST face databases. The experimental results using different experimental strategies show that the new G-2DFLD scheme outperforms the PCA, 2DPCA, FLD and 2DFLD schemes, not only in terms of computation time, but also for the task of face recognition using multi-class support vector machines (SVM) as the classifier. The proposed method also outperforms some of the neural-network- and SVM-based face recognition methods reported in the literature.
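
A minimal sketch of one of the two Fisher criteria (the column-wise one; the row-wise criterion follows by transposing the images) and of the two-sided projection is given below. Variable names and the feature arrangement are assumptions for illustration, not the paper's code.

```python
import numpy as np

def two_d_fld_directions(images, labels, q):
    """Top-q discriminant directions from the column-wise 2D Fisher criterion.
    `images` has shape (N, h, w); pass transposed images for the row-wise case."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)
    w = global_mean.shape[1]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c]
        mc = Ac.mean(axis=0)
        diff = mc - global_mean
        Sb += len(Ac) * diff.T @ diff          # between-class scatter
        for A in Ac:
            d = A - mc
            Sw += d.T @ d                      # within-class scatter
    # Maximise w^T Sb w / w^T Sw w: leading eigenvectors of Sw^{-1} Sb.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:q]]

def g2dfld_features(image, X_right, Z_left):
    """Project one image from both directions, giving a small feature matrix."""
    return Z_left.T @ image @ X_right
```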


international conference on emerging trends in engineering and technology | 2008

Face Recognition Using Principal Component Analysis and RBF Neural Networks

Sweta Thakur; Jamuna Kanta Sing; Dipak Kumar Basu; Mita Nasipuri; Mahantapas Kundu

In this paper, an efficient method for face recognition using principal component analysis (PCA) and radial basis function (RBF) neural networks is presented. PCA has recently been employed extensively in face recognition algorithms and is one of the most popular representation methods for a face image. It not only reduces the dimensionality of the image, but also retains some of the variations in the image data. After performing the PCA, the hidden-layer neurons of the RBF neural networks are modelled by considering the intra-class discriminating characteristics of the training images. This helps the RBF neural networks to capture wide variations in the lower-dimensional input space and improves their generalization capabilities. The proposed method has been evaluated on the AT&T (formerly ORL) and UMIST face databases. Experimental results show that the proposed method has encouraging recognition performance.
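
The PCA front end can be sketched as follows (eigenface-style projection via SVD); the intra-class modelling of the RBF hidden layer described above is not reproduced here.

```python
import numpy as np

def pca_eigenfaces(train_images, n_components):
    """Eigenface-style PCA: principal axes of the flattened training faces."""
    X = train_images.reshape(len(train_images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal axes directly.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                 # (n_components, n_pixels)
    return mean, basis

def project(images, mean, basis):
    """Lower-dimensional PCA features fed to the RBF classifier."""
    X = images.reshape(len(images), -1).astype(float) - mean
    return X @ basis.T
```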


ieee region 10 conference | 2003

Improved k-means algorithm in the design of RBF neural networks

Jamuna Kanta Sing; Dipak Kumar Basu; Mita Nasipuri; M. Kundu

We propose an improved version of the standard k-means clustering algorithm for selecting the hidden-layer neurons of a radial basis function (RBF) neural network. The standard k-means algorithm has been modified to capture more knowledge about the distribution of the input patterns and to handle hyper-ellipsoidal clusters. The RBF neural network with the proposed algorithm has been tested on three different machine-learning data sets. The average recognition rate of the RBF neural network over these data sets is 93.70% with the proposed improved k-means algorithm, compared with 88.12% with the standard k-means algorithm. The results clearly show that the proposed modified k-means algorithm improves the performance of the RBF neural network.
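
The abstract does not spell out the exact modification; as one common way to accommodate hyper-ellipsoidal clusters, the sketch below runs k-means with a per-cluster Mahalanobis distance. Treat it as an assumption-laden illustration rather than the proposed algorithm.

```python
import numpy as np

def mahalanobis_kmeans(X, k, n_iter=20, reg=1e-6, seed=0):
    """k-means variant with per-cluster Mahalanobis distance, a common way to
    handle hyper-ellipsoidal clusters (not necessarily the paper's scheme)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    covs = [np.eye(X.shape[1]) for _ in range(k)]
    for _ in range(n_iter):
        # Assignment step: squared Mahalanobis distance to each cluster.
        dists = np.stack([
            np.einsum('ij,jk,ik->i', X - c, np.linalg.inv(S), X - c)
            for c, S in zip(centers, covs)
        ], axis=1)
        labels = dists.argmin(axis=1)
        # Update step: recompute centres and covariances.
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > 1:
                centers[j] = pts.mean(axis=0)
                covs[j] = np.cov(pts, rowvar=False) + reg * np.eye(X.shape[1])
    return centers, covs, labels
```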


international conference on advances in computational tools for engineering applications | 2009

SIFT-based ear recognition by fusion of detected keypoints from color similarity slice regions

Dakshina Ranjan Kisku; Hunny Mehrotra; Phalguni Gupta; Jamuna Kanta Sing

Ear biometrics is considered one of the most reliable and invariant biometric characteristics, in line with iris and fingerprint characteristics. In many respects, ear biometrics is comparable to face biometrics with regard to physiological and texture characteristics. In this paper, a robust and efficient ear recognition system is presented, which uses the Scale Invariant Feature Transform (SIFT) as a feature descriptor for the structural representation of ear images. To make user authentication more robust, only regions whose color probabilities fall within certain ranges are considered for invariant SIFT feature extraction, and the Kullback-Leibler (K-L) divergence is used to maintain color consistency. The ear skin-color model is formed by a Gaussian mixture model (GMM), with the ear color pattern clustered using vector quantization. Finally, the K-L divergence is applied to the GMM framework to record color similarity within the specified ranges by comparing the color similarity between a pair of reference-model and probe ear images. After segmenting the ear images into color slice regions, SIFT keypoints are extracted and an augmented vector of the extracted SIFT features is created for matching, which is performed between a pair of reference-model and probe ear images. The proposed technique has been tested on the IITK ear database, and the experimental results show improved recognition accuracy when invariant features are extracted from color slice regions to maintain the robustness of the system.
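
The color-similarity step relies on the K-L divergence between Gaussian color models. The closed-form expression for two multivariate Gaussians is sketched below; comparing full GMMs usually requires a component-wise approximation, which is not shown, and this is an illustration rather than the paper's implementation.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians, e.g.
    components of reference and probe ear skin-colour models."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```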


international conference on advances in pattern recognition | 2009

Multisensor Biometric Evidence Fusion for Person Authentication Using Wavelet Decomposition and Monotonic-Decreasing Graph

Dakshina Ranjan Kisku; Jamuna Kanta Sing; Massimo Tistarelli; Phalguni Gupta

This paper presents a novel sensor-level fusion of face and palmprint biometric evidence using wavelet decomposition for personal identity verification. Biometric image fusion at the sensor level refers to a process that fuses multispectral images, captured at different resolutions and by different biometric sensors, to acquire richer and complementary information and produce a new, spatially enhanced fused image. Once the fused image is ready for further processing, the SIFT operator is used for feature extraction, and recognition is performed by adjustable structural graph matching between a pair of fused images, searching for corresponding points with a recursive descent tree-traversal approach. The experimental results show the efficacy of the proposed method, which achieves 98.19% accuracy and outperforms uni-modal face and palmprint authentication (89.04% and 92.17% recognition rates, respectively) when all methods are evaluated in the same feature space.
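
A hedged sketch of sensor-level wavelet fusion is shown below, assuming PyWavelets is available and the two images are registered and of equal size. The fusion rule here (mean approximation band, maximum-magnitude detail coefficients) is a common choice, not necessarily the one used in the paper, and the SIFT graph-matching stage is omitted.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet='db2', level=2):
    """Decompose both images, average the approximation bands, keep the
    larger-magnitude detail coefficients, and reconstruct a fused image."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: mean
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```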


ieee international conference on technologies for homeland security | 2009

Biometric sensor image fusion for identity verification: A case study with wavelet-based fusion rules graph matching

Dakshina Ranjan Kisku; Ajita Rattani; Phalguni Gupta; Jamuna Kanta Sing

Multibiometric systems have many advantages over uni-biometric systems. However, they are lacking in some respects: multimodal systems acquire not only relevant and viable information for fusion, but also irrelevant and redundant information associated with the feature sets or the match-score sets, and this may degrade the resulting performance. This paper deals with a biometric authentication system that fuses face and palmprint images using wavelet decomposition. The proposed work uses a few selected wavelet fusion rules for low-level fusion of the face and palmprint images. Once fusion of the two high-resolution biometric images is accomplished, the SIFT operator is used to extract invariant features from the spatially enhanced fused image. Finally, identity is verified by a probabilistic relational graph with a posteriori attribute matching between a pair of fused images. Matching is carried out by searching for corresponding feature points in the database and query fused images using an iterative relaxation algorithm. The experimental results show that the proposed multimodal biometric system based on image fusion outperforms feature-level fusion methods when all the fusion schemes are implemented in the same feature space, i.e., the scale-invariant feature space.
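
For the feature-extraction stage on the fused image, a baseline SIFT matching sketch with OpenCV is given below; it uses Lowe's ratio test only as a stand-in for the paper's probabilistic relational-graph matching with iterative relaxation, which is not reproduced.

```python
import cv2
import numpy as np

def sift_match_count(fused_db, fused_query, ratio=0.75):
    """Count ratio-test SIFT matches between a database and a query fused image
    (a crude proxy for the graph-matching score described in the paper)."""
    sift = cv2.SIFT_create()
    img1 = np.clip(fused_db, 0, 255).astype(np.uint8)
    img2 = np.clip(fused_query, 0, 255).astype(np.uint8)
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return len(good)
```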


arXiv: Computer Vision and Pattern Recognition | 2009

Feature Level Fusion of Biometrics Cues: Human Identification with Doddington’s Caricature

Dakshina Ranjan Kisku; Phalguni Gupta; Jamuna Kanta Sing

This paper presents a multimodal biometric system based on fingerprint and ear biometrics. Feature sets extracted from the fingerprint and the ear with the Scale Invariant Feature Transform (SIFT) descriptor are fused. The fused set is encoded by a K-medoids partitioning approach, reducing the number of feature points in the set. K-medoids partitions the whole dataset into clusters so as to minimize the error between the data points belonging to each cluster and its center. The reduced feature set is used for matching between two biometric sets. Matching scores are generated using the wolf-lamb user-dependent feature weighting scheme introduced by Doddington. The technique is tested and exhibits robust performance.
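
The encoding step can be illustrated with a plain PAM-style K-medoids over a precomputed distance matrix of the fused SIFT descriptors, as sketched below; the wolf-lamb weighting scheme is not reproduced, and the function names are illustrative.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """PAM-style K-medoids on a precomputed distance matrix D (n x n).
    Every cluster centre is an actual feature point, so the reduced set
    keeps only the medoid descriptors."""
    rng = np.random.default_rng(seed)
    n = len(D)
    medoids = rng.choice(n, k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)          # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # pick the member minimising total distance within its cluster
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[costs.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels
```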

Collaboration


Dive into Jamuna Kanta Sing's collaboration.

Top Co-Authors

Dakshina Ranjan Kisku

National Institute of Technology

Phalguni Gupta

Indian Institute of Technology Kanpur

Ajita Rattani

University of Missouri–Kansas City
