
Publication


Featured research published by Chandra Shekhar Dhir.


IEEE International Conference on Information Acquisition | 2007

Efficient feature selection based on information gain criterion for face recognition

Chandra Shekhar Dhir; Nadeem Iqbal; Soo-Young Lee

Feature selection based on the information gain criterion is proposed to improve the classification performance of face recognition tasks. A comparison of information gain with the Fisher criterion is presented for two different image databases. The information gain criterion gives a slight performance improvement when ICA-based features are used for recognition of facial images with different illumination. For a small number of selected features, the information gain criterion gives superior performance compared to the Fisher criterion. However, the two criteria perform similarly when the database has classes with different illumination, expressions, and occlusions.
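
To make the criterion concrete, here is a minimal sketch of information-gain-based feature ranking, assuming continuous features are discretized into histogram bins; the bin count and function names are illustrative choices, not taken from the paper.

```python
# Minimal sketch: rank features by information gain I(F; C) = H(C) - H(C|F),
# estimated from a simple histogram discretization of each feature.
import numpy as np

def information_gain(feature, labels, n_bins=10):
    """Information gain of one discretized feature about the class label."""
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    bins = np.digitize(feature, edges)
    classes = np.unique(labels)
    n = len(labels)

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    h_c = entropy(np.array([np.sum(labels == c) for c in classes]))
    h_c_given_f = 0.0
    for b in np.unique(bins):
        mask = bins == b
        h_c_given_f += (mask.sum() / n) * entropy(
            np.array([np.sum(labels[mask] == c) for c in classes]))
    return h_c - h_c_given_f

def rank_features(X, y, n_bins=10):
    """Return feature indices sorted by decreasing information gain."""
    gains = [information_gain(X[:, j], y, n_bins) for j in range(X.shape[1])]
    return np.argsort(gains)[::-1]
```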


Neurocomputing | 2006

A filter bank approach to independent component analysis for convolved mixtures

Hyung-Min Park; Chandra Shekhar Dhir; Sang-Hoon Oh; Soo-Young Lee

We present a filter bank approach to performing independent component analysis (ICA) for convolved mixtures. Input signals are split into subband signals and subsampled. A simplified network performs ICA on the subsampled signals, and finally the independent components are synthesized. The proposed approach achieves better performance than the frequency-domain approach, and faster convergence with less computational complexity than the time-domain approach. Furthermore, by designing efficient filter banks, it requires shorter unmixing filters and less computation than other filter bank approaches. A method is also proposed to resolve the permutation and scaling problems of the filter bank approach.
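
For intuition, here is a heavily simplified sketch of the pipeline, assuming a uniform Butterworth band-pass bank and treating each decimated subband as an approximately instantaneous mixture separated by FastICA; the paper's filter bank design and subband ICA network are more elaborate, and the synthesis and permutation/scaling steps are omitted here.

```python
# Simplified filter-bank ICA sketch: analysis filtering, decimation,
# then per-subband ICA. Band edges, filter order, and the instantaneous
# approximation are illustrative assumptions.
import numpy as np
from scipy.signal import butter, lfilter, decimate
from sklearn.decomposition import FastICA

def filterbank_ica(mixtures, n_bands=4, q=4):
    """mixtures: (n_channels, n_samples) convolved observations."""
    edges = np.linspace(1e-3, 1.0 - 1e-3, n_bands + 1)  # normalized band edges
    subband_sources = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="band")   # analysis filter
        band = lfilter(b, a, mixtures, axis=1)
        band = decimate(band, q, axis=1)           # subsample
        ica = FastICA(n_components=band.shape[0], whiten="unit-variance")
        subband_sources.append(ica.fit_transform(band.T).T)
    # Per-band estimates; permutation/scaling must still be aligned
    # before synthesis, as the paper discusses.
    return subband_sources
```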


IEEE Transactions on Neural Networks | 2011

Discriminant Independent Component Analysis

Chandra Shekhar Dhir; Soo-Young Lee

A conventional linear model based on negentropy maximization extracts statistically independent latent variables, which may not be optimal for a discriminant model with good classification performance. In this paper, a single-stage linear semi-supervised extraction of discriminative independent features is proposed. Discriminant independent component analysis (dICA) presents a framework for linearly projecting multivariate data to a lower dimension where the features are maximally discriminant with minimal redundancy. The optimization problem is formulated as the maximization of a linear sum of negentropy and a weighted functional measure of classification. Motivated by the independence of the extracted features, the Fisher linear discriminant is used as the functional measure of classification. Experimental results show improved classification performance when dICA features are used for recognition tasks, in comparison to unsupervised techniques (principal component analysis and ICA) and supervised feature extraction techniques such as linear discriminant analysis (LDA), conditional ICA, and those based on information-theoretic learning. dICA features also give lower data reconstruction error than LDA and ICA based on negentropy maximization.
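
One plausible way to formalize the stated objective (the notation is assumed here, not taken from the paper):

```latex
% Hypothetical formalization: maximize negentropy plus a weighted
% Fisher discriminant over the linear projection W, with y = Wx.
\max_{W}\; J(W) \;=\; \sum_{i=1}^{m} N\!\left(\mathbf{w}_i^{\top}\mathbf{x}\right)
\;+\; \lambda\,\operatorname{tr}\!\left(S_W^{-1} S_B\right)
```

where N(·) is the negentropy of one extracted feature, S_B and S_W are the between- and within-class scatter matrices of the extracted features, and λ weights the supervised term.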


Knowledge and Information Systems | 2012

Extraction of independent discriminant features for data with asymmetric distribution

Chandra Shekhar Dhir; Jaehyung Lee; Soo-Young Lee

Standard unsupervised linear feature extraction methods find orthonormal (PCA) or statistically independent (ICA) latent variables that are good for data representation. These representative features may not be optimal for classification tasks, which motivates a search for linear projections that give a good discriminative model. A semi-supervised linear feature extraction method, dICA, was recently proposed that jointly maximizes the Fisher linear discriminant (FLD) and the negentropy of the extracted features [Dhir and Lee, Discriminant independent component analysis, Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, LNCS 5788:219–225, 2009]. Because the extracted dICA features are independent with unit covariance, maximizing the determinant of the between-class scatter matrix of the features is theoretically equivalent to maximizing the FLD; this also reduces the computational complexity of the algorithm. In this paper, we concentrate on text databases, which follow an inherently exponential distribution. Approximation and maximization of negentropy for data with asymmetric distributions are discussed. Experiments on the text categorization problem show improvements in classification performance and data reconstruction.
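
A one-line justification of the determinant reformulation, under the stated independence and unit-covariance assumptions (the notation is mine):

```latex
% With unit-covariance features, total scatter S_T = S_B + S_W = I, so
S_W = I - S_B \quad\Rightarrow\quad
\operatorname{tr}\!\left(S_W^{-1} S_B\right)
= \operatorname{tr}\!\left((I - S_B)^{-1} S_B\right)
= \sum_i \frac{\lambda_i}{1 - \lambda_i}
```

where λ_i are the eigenvalues of S_B. This quantity is increasing in each λ_i, as is |S_B| = Π λ_i, so maximizing the determinant pursues the same objective at lower computational cost.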


International Conference on Neural Information Processing | 2008

Hybrid feature selection: combining Fisher criterion and mutual information for efficient feature selection

Chandra Shekhar Dhir; Soo-Young Lee

Low-dimensional representation of multivariate data using unsupervised feature extraction is combined with a hybrid feature selection method to improve the classification performance of recognition tasks. The proposed hybrid feature selector is applied to the union of the feature subspaces selected by the Fisher criterion and by feature-class mutual information (MI). It scores each feature as a linear weighted sum of its feature-class MI and its Fisher criterion score. The proposed method selects features with higher class discrimination than feature-class MI, the Fisher criterion, or unsupervised selection using variance, resulting in much improved recognition performance. In addition, the paper highlights the use of MI between a feature and the class as a computationally economical and optimal feature selector for statistically independent features.
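
A minimal sketch of this scoring rule follows, assuming each criterion is normalized by its maximum before mixing and using an off-the-shelf MI estimator; the weight alpha and helper names are illustrative.

```python
# Hybrid score sketch: a linear weighted sum of feature-class mutual
# information and the per-feature Fisher criterion score.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    """Per-feature Fisher criterion: between-class over within-class variance."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def hybrid_scores(X, y, alpha=0.5):
    mi = mutual_info_classif(X, y)
    fs = fisher_score(X, y)
    # Normalize both criteria so neither dominates by scale alone.
    mi = mi / (mi.max() + 1e-12)
    fs = fs / (fs.max() + 1e-12)
    return alpha * mi + (1 - alpha) * fs
```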


International Conference on Neural Information Processing | 2004

Permutation correction of filter bank ICA using static channel characteristics

Chandra Shekhar Dhir; Hyung-Min Park; Soo-Young Lee

This paper exploits static channel characteristics to provide a precise solution to the permutation problem in the filter bank approach to independent component analysis (ICA). The filter bank approach combines the high accuracy of time-domain ICA with the computational efficiency of frequency-domain ICA. Decimation in each subband helps in better formulation of the directivity patterns. The nulls of the directivity patterns depend on the locations of the source signals, and this property is used to resolve the permutation problem. The experimental results show good behavior with reduced computational complexity, and the method does not require non-stationarity of the signals.
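
For intuition, here is a hedged sketch of permutation alignment from directivity nulls in a two-microphone, two-source setup; microphone spacing d, speed of sound c, and per-subband centre frequencies are assumed inputs, and the paper works with the full filter-bank unmixing filters rather than one complex weight per band.

```python
# Sketch: per subband, locate the null of each output's directivity
# pattern and reorder outputs so null angles are consistently sorted.
# Static source locations make this ordering consistent across subbands.
import numpy as np

def null_angle(w_row, freq, d=0.05, c=343.0):
    """Angle (radians) where this output's directivity pattern is deepest."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, 361)
    pattern = np.abs(w_row[0] + w_row[1]
                     * np.exp(-1j * 2 * np.pi * freq * d * np.sin(thetas) / c))
    return thetas[np.argmin(pattern)]

def align_permutations(unmixing, freqs, d=0.05, c=343.0):
    """unmixing: list of 2x2 complex ndarrays, one per subband.
    Reorders each subband's rows by null angle, tying each output
    to one (static) source direction across all subbands."""
    aligned = []
    for W, f in zip(unmixing, freqs):
        nulls = [null_angle(W[i], f, d, c) for i in range(2)]
        aligned.append(W[np.argsort(nulls)])
    return aligned
```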


Intelligent Data Engineering and Automated Learning | 2009

Discriminant independent component analysis

Chandra Shekhar Dhir; Soo-Young Lee

A conventional linear model based on negentropy maximization extracts statistically independent latent variables, which may not be optimal for a discriminant model with good classification performance. In this paper, a single-stage linear semi-supervised extraction of discriminative independent features is proposed. Discriminant independent component analysis (dICA) presents a framework for linearly projecting multivariate data to a lower dimension where the features are maximally discriminant with minimal redundancy. The optimization problem is formulated as the maximization of a linear sum of negentropy and a weighted functional measure of classification. Motivated by the independence of the extracted features, the Fisher linear discriminant is used as the functional measure of classification. Experimental results show improved classification performance when dICA features are used for recognition tasks, in comparison to unsupervised techniques (principal component analysis and ICA) and supervised feature extraction techniques such as linear discriminant analysis (LDA), conditional ICA, and those based on information-theoretic learning. dICA features also give lower data reconstruction error than LDA and ICA based on negentropy maximization.


IEEE Signal Processing Letters | 2007

Directionally Constrained Filterbank ICA

Chandra Shekhar Dhir; Hyung-Min Park; Soo-Young Lee

A modification to the independent component analysis (ICA)-based filterbank approach is proposed in light of its structural similarity to the binaural auditory model of sound source localization. The estimated sound locations provide an additional cue to the learning algorithm, which is used to initialize and impose directional constraints on the subband separation networks. Directionally constrained filterbank ICA (DC-FBICA) gives faster convergence and improves separation performance for noisy mixtures with significant spectral overlap between the convolved mixtures and the corrupting noise. When the additive noise is low-frequency, only a slight improvement in separation performance is observed, although convergence is still faster.
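
A minimal sketch of the directional-constraint idea follows: initialize each subband unmixing matrix so that each output places a spatial null toward the other estimated source direction. The DOA estimates, microphone spacing d, and centre frequency are assumed given; this is an illustration of null-steering initialization, not the paper's exact constraint.

```python
# Sketch: build a separating initial point for subband ICA from
# estimated source directions (2-mic, 2-source case).
import numpy as np

def steering(theta, freq, d=0.05, c=343.0):
    """Steering vector of a 2-mic array toward angle theta (radians)."""
    return np.array([1.0, np.exp(-1j * 2 * np.pi * freq * d * np.sin(theta) / c)])

def directional_init(doas, freq, d=0.05, c=343.0):
    """Each row nulls the *other* source so ICA starts near separation."""
    a0 = steering(doas[0], freq, d, c)
    a1 = steering(doas[1], freq, d, c)
    w0 = np.array([a1[1], -a1[0]])  # w0 @ a1 == 0: null toward source 1
    w1 = np.array([a0[1], -a0[0]])  # w1 @ a0 == 0: null toward source 0
    return np.vstack([w0, w1])
```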


International Conference on Neural Information Processing | 2009

Search-In-Synchrony: Personalizing Web Search with Cognitive User Profile Model

Chandra Shekhar Dhir; Soo-Young Lee

This study concentrates on adapting a user profile model (UPM) based on an individual's continuous interaction with preferred search engines. The UPM re-ranks information retrieved from the World Wide Web (WWW) to provide effective personalization for a given search query. The temporal adaptation of the UPM is treated as a one-to-one socio-interaction between the dynamics of the WWW and the cognitive information-seeking behavior of the user. The dynamics of the WWW and the consensus relevance ranking of information are a collaborative effect of inter-connected users, which makes them difficult to analyze in parts; human-agent interaction (HAI) can model these dynamics implicitly. The proposed system is named Search-in-Synchrony, and a preliminary study is conducted on a user group with a background in computational neuroscience. A first attempt to bring the two fields together is thus highlighted: HAI and a statistically learned UPM, incorporating cognitive abilities into search agents.
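
A plausible minimal model of the re-ranking and adaptation steps described above, assuming documents and the profile live in a shared vector space, with exponential smoothing on clicks; the update rate and representation are assumptions, not details from the paper.

```python
# Sketch: re-rank retrieved documents by similarity to a user profile
# vector that adapts over time from the user's clicked results.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rerank(results, profile):
    """results: list of (doc_id, doc_vector); most similar first."""
    return sorted(results, key=lambda r: cosine(r[1], profile), reverse=True)

def update_profile(profile, clicked_vector, rate=0.1):
    """Temporal adaptation: move the profile toward what the user clicks."""
    return (1 - rate) * profile + rate * clicked_vector
```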


Proceedings of SPIE | 2009

Information-Theoretic Feature Extraction and Selection for Robust Classification

Chandra Shekhar Dhir; Soo-Young Lee

The classification performance of recognition tasks can be improved by selecting highly discriminative features from a low-dimensional linear representation of the data. High-dimensional multivariate data can be represented in lower dimensions by unsupervised feature extraction techniques, which attempt to remove redundancy in the data and/or resolve multivariate prediction problems. These extracted low-dimensional features may not ensure good class discrimination; therefore, supervised feature selection methods motivated by information-theoretic approaches can improve recognition performance with fewer features. The proposed hybrid feature selection methods select features with higher class discrimination than feature-class mutual information (MI), the Fisher criterion, or unsupervised selection using variance, resulting in much improved recognition performance. The feature-class MI criterion and the hybrid feature selection methods are computationally scalable and are optimal selectors for statistically independent features.
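
One way to see the optimality claim for independent features, using a common first-order decomposition of joint relevance (as in mRMR-style analyses; the approximation is mine, not the paper's):

```latex
% Joint relevance ~ sum of individual relevances minus pairwise redundancy.
I\!\left(\{f_i\}_{i \in S};\, C\right) \;\approx\;
\sum_{i \in S} I(f_i; C) \;-\; \sum_{i \neq j} I(f_i; f_j)
```

When the extracted features are statistically independent, the redundancy terms I(f_i; f_j) vanish, so ranking by individual feature-class MI is already optimal for a greedy selector.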

Collaboration


Dive into Chandra Shekhar Dhir's collaboration.

Top co-author: Sang-Hoon Oh (Pohang University of Science and Technology).