Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where M. Narasimha Murty is active.

Publication


Featured research published by M. Narasimha Murty.


IEEE Transactions on Systems, Man, and Cybernetics | 1999

Genetic K-means algorithm

K. Krishna; M. Narasimha Murty

In this paper, we propose a novel hybrid genetic algorithm (GA) that finds a globally optimal partition of a given data into a specified number of clusters. GAs used earlier in clustering employ either an expensive crossover operator to generate valid child chromosomes from parent chromosomes or a costly fitness function or both. To circumvent these expensive operations, we hybridize GA with a classical gradient descent algorithm used in clustering, viz. K-means algorithm. Hence, the name genetic K-means algorithm (GKA). We define K-means operator, one-step of K-means algorithm, and use it in GKA as a search operator instead of crossover. We also define a biased mutation operator specific to clustering called distance-based-mutation. Using finite Markov chain theory, we prove that the GKA converges to the global optimum. It is observed in the simulations that GKA converges to the best known optimum corresponding to the given data in concurrence with the convergence result. It is also observed that GKA searches faster than some of the other evolutionary algorithms used for clustering.
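The abstract names two operators: a K-means operator that replaces crossover, and a distance-biased mutation. The sketch below is an illustrative Python rendering of that idea under stated assumptions, not the authors' implementation; all function names, the elitist selection scheme, and the parameter values are hypothetical.

```python
# Minimal sketch of the GKA idea: chromosomes are cluster-label lists,
# the "K-means operator" (one reassignment step) replaces crossover,
# and mutation prefers clusters whose centroids are nearby.
# Illustrative only; operators and parameters are assumptions.
import random

def centroids(data, labels, k):
    """Mean of the points in each cluster (empty cluster -> a random point)."""
    cents = []
    for c in range(k):
        pts = [data[i] for i, l in enumerate(labels) if l == c]
        cents.append(random.choice(data) if not pts
                     else tuple(sum(x) / len(pts) for x in zip(*pts)))
    return cents

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_operator(data, labels, k):
    """One K-means step: reassign every point to its nearest centroid."""
    cents = centroids(data, labels, k)
    return [min(range(k), key=lambda c: sqdist(p, cents[c])) for p in data]

def mutate(data, labels, k, pm=0.05):
    """Distance-biased mutation: a mutated point prefers nearby clusters."""
    cents = centroids(data, labels, k)
    out = list(labels)
    for i, p in enumerate(data):
        if random.random() < pm:
            weights = [1.0 / (1e-9 + sqdist(p, cents[c])) for c in range(k)]
            out[i] = random.choices(range(k), weights=weights)[0]
    return out

def twcv(data, labels, k):
    """Total within-cluster variation (lower is better)."""
    cents = centroids(data, labels, k)
    return sum(sqdist(p, cents[l]) for p, l in zip(data, labels))

def gka(data, k, pop_size=6, generations=25, seed=0):
    random.seed(seed)
    pop = [[random.randrange(k) for _ in data] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [kmeans_operator(data, mutate(data, ind, k), k) for ind in pop]
        pop.sort(key=lambda ind: twcv(data, ind, k))
        pop = pop[: pop_size // 2] * 2      # elitist selection
    return pop[0]

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
best = gka(data, k=2)
```

Because the K-means operator pulls every chromosome toward a locally optimal partition each generation, the population needs no repair step to keep chromosomes valid, which is the cost the abstract says crossover-based GAs pay.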


Pattern Recognition Letters | 1993

A near-optimal initial seed value selection in K-means algorithm using a genetic algorithm

G. Phanendra Babu; M. Narasimha Murty

The K-means algorithm for clustering is very much dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of the given data set by selecting proper initial seed values in the K-means algorithm. Results obtained are very encouraging and in most of the cases, on data sets having well separated clusters, the proposed scheme reached a global minimum.
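A toy sketch of the approach, under stated assumptions: a chromosome is a set of k seed indices into the data, fitness is the squared error of the partition those seeds induce, and selection keeps the better half with point mutation of one seed. The GA operators and parameters here are simplified guesses, not the paper's.

```python
# Hypothetical sketch: a GA searching over candidate seed-index sets
# for K-means, scoring each set by the squared error of the partition
# it induces (each point goes to its nearest seed).
import random

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def seed_sse(data, seed_idx):
    """Total squared error when each point joins its nearest seed."""
    return sum(min(sqdist(p, data[s]) for s in seed_idx) for p in data)

def ga_seeds(data, k, pop_size=8, generations=20, seed=1):
    random.seed(seed)
    n = len(data)
    pop = [random.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: seed_sse(data, s))
        children = []
        for parent in pop[: pop_size // 2]:
            child = list(parent)
            child[random.randrange(k)] = random.randrange(n)  # point mutation
            children.append(child)
        pop = pop[: pop_size // 2] + children   # elitist survival
    pop.sort(key=lambda s: seed_sse(data, s))
    return pop[0]

data = [(0, 0), (0, 1), (10, 10), (10, 11), (20, 0), (20, 1)]
best = ga_seeds(data, k=3)
```

On this data the best seed set picks one point from each of the three well-separated pairs, which is the "near-optimal partitioning on well separated clusters" behavior the abstract reports.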


Pattern Recognition | 1994

Clustering with evolution strategies

G. Phanendra Babu; M. Narasimha Murty

The applicability of evolution strategies (ESs), population based stochastic optimization techniques, to optimize clustering objective functions is explored. Clustering objective functions are categorized into centroid and non-centroid type of functions. Optimization of the centroid type of objective functions is accomplished by formulating them as functions of real-valued parameters using ESs. Both hard and fuzzy clustering objective functions are considered in this study. Applicability of ESs to discrete optimization problems is extended to optimize the non-centroid type of objective functions. As ESs are amenable to parallelization, a parallel model (master/slave model) is described in the context of the clustering problem. Results obtained for selected data sets substantiate the utility of ESs in clustering.
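The centroid formulation can be sketched as follows: the k centroids are flattened into one real-valued parameter vector and optimized by an evolution strategy. For brevity this uses a bare (1+1)-ES with Gaussian perturbation rather than the population-based ESs the paper studies; names and constants are illustrative.

```python
# Sketch of the centroid-type formulation: cluster centroids become a
# flat real-valued vector, and a simple (1+1) evolution strategy
# minimizes the hard clustering objective. Illustrative stand-in only.
import random

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def objective(data, flat, k, dim):
    """Hard objective: each point's distance to its nearest centroid."""
    cents = [tuple(flat[c * dim:(c + 1) * dim]) for c in range(k)]
    return sum(min(sqdist(p, c) for c in cents) for p in data)

def es_cluster(data, k, steps=400, sigma=0.5, seed=2):
    random.seed(seed)
    dim = len(data[0])
    parent = [random.uniform(-1, 12) for _ in range(k * dim)]
    f_parent = objective(data, parent, k, dim)
    for _ in range(steps):
        # Offspring = parent plus Gaussian noise; keep it if no worse.
        child = [x + random.gauss(0, sigma) for x in parent]
        f_child = objective(data, child, k, dim)
        if f_child <= f_parent:                 # (1+1)-ES selection
            parent, f_parent = child, f_child
    return parent, f_parent

data = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
flat, err = es_cluster(data, k=2)
```

Because the representation is just a real vector, the same loop works for fuzzy objectives by swapping `objective`, which is the flexibility the abstract attributes to the centroid formulation.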


Pattern Recognition Letters | 2003

Tree structure for efficient data mining using rough sets

V. S. Ananthanarayana; M. Narasimha Murty; D. K. Subramanian

In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, pattern count tree (PC-tree), that can be built by scanning the database only once. PC-tree is a minimal size complete representation of the data and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from the conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale hand-written digit data set. We use the experimental results to establish the efficacy of the proposed approach.
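To make the single-scan claim concrete, here is a speculative trie-style sketch of a count tree: transactions sharing a prefix share a path, each node keeps a count, and the whole structure is built in one pass over the database. The actual PC-tree layout, segmented patterns, and rough-set extensions are not reproduced; names are hypothetical.

```python
# Speculative sketch of a pattern-count-tree-style structure: a trie
# over transaction items with a per-node count, built in a single
# database scan. Not the paper's PC-tree definition.
class Node:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_pc_tree(transactions):
    """One pass: each transaction walks (and extends) one root path."""
    root = Node()
    for items in transactions:
        node = root
        for item in sorted(items):   # canonical order so prefixes share
            node = node.children.setdefault(item, Node())
            node.count += 1
    return root

def support(root, prefix):
    """Transactions whose sorted item list begins with this prefix."""
    node = root
    for item in prefix:
        if item not in node.children:
            return 0
        node = node.children[item]
    return node.count

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
tree = build_pc_tree(db)
```

Shared prefixes are stored once, which is the source of the compactness the abstract describes; updating counts on an existing path is also why such a tree can absorb new transactions incrementally.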


Pattern Recognition | 1999

Off-line signature verification using genetically optimized weighted features

V. E. Ramesh; M. Narasimha Murty

This paper is concerned with off-line signature verification. Four different types of pattern representation schemes have been implemented, viz., geometric features, moment-based representations, envelope characteristics and tree-structured Wavelet features. The individual feature components in a representation are weighed by their pattern characterization capability using Genetic Algorithms. The conclusions of the four subsystems (each depending on a representation scheme) are combined to form a final decision on the validity of signature. Threshold-based classifiers (including the traditional confidence-interval classifier), neighbourhood classifiers and their combinations were studied. Benefits of using forged signatures for training purposes have been assessed. Experimental results show that combination of the feature-based classifiers increases verification accuracy.


Pattern Recognition Letters | 2006

Rough set based incremental clustering of interval data

S. Asharaf; M. Narasimha Murty; Shirish Krishnaj Shevade

This paper introduces a novel incremental approach to clustering interval data. The method employs rough set theory to capture the inherent uncertainty involved in cluster analysis. Our experimental results show that it produces meaningful cluster abstractions for interval data at a minimal computational expense.


Pattern Recognition | 2003

An adaptive rough fuzzy single pass algorithm for clustering large data sets

S. Asharaf; M. Narasimha Murty

Cluster analysis has been widely applied in many areas such as data mining, geographical data processing, medicine, classification of statistical findings in social studies and so on. Most of these domains deal with massive collections of data. Hence the methods to handle them must be efficient both in terms of the number of data set scans and memory usage.
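The paper's method is rough-fuzzy; as a plain illustration of the single-pass constraint it addresses, here is the classic leader clustering algorithm (a different, well-known technique), which scans the data exactly once and keeps only the leaders in memory.

```python
# Classic leader clustering: one scan over the data, constant work per
# point, memory proportional to the number of leaders. Shown as a
# baseline for the single-pass constraint, not the paper's algorithm.
def leader_cluster(points, threshold):
    """Each point joins the first leader within `threshold`,
    otherwise it becomes a new leader."""
    leaders = []                     # list of (leader_point, members)
    for p in points:
        for q, members in leaders:
            if sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2:
                members.append(p)
                break
        else:
            leaders.append((p, [p]))
    return leaders

data = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (0.1, 0.3)]
clusters = leader_cluster(data, threshold=1.0)
```

The result depends on presentation order and on the threshold, which is exactly the rigidity that adaptive and rough-fuzzy refinements of single-pass clustering aim to soften.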


Pattern Recognition | 2005

Rapid and brief communication: Rough support vector clustering

S. Asharaf; Shirish Krishnaj Shevade; M. Narasimha Murty

In this paper a novel kernel-based soft clustering method is proposed. This method incorporates rough set theoretic flavour in support vector clustering paradigm to achieve soft clustering. Empirical studies show that this method can find soft clusters having arbitrary shapes.


International Symposium on Neural Networks | 2002

SSVM: a simple SVM algorithm

S.V.M. Vishwanathan; M. Narasimha Murty

We present a fast iterative algorithm for identifying the support vectors of a given set of points. Our algorithm works by maintaining a candidate support vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set, we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increase or prune the candidate support vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|^2) in the average case, where |S| is the size of the support vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real-life datasets to validate our claims.


Pattern Recognition | 1980

A computationally efficient technique for data-clustering

M. Narasimha Murty; G. Krishna

A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each such partition are clustered separately using hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to get the final classification. This algorithm leads to the same classification as that of hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are short run time and small storage requirement. It is observed that the savings, in storage space and computation time, increase nonlinearly with the sample size.
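A rough sketch of the multilevel idea described above, under stated assumptions: the data is split into random partitions, each partition is clustered agglomeratively on its own, and the resulting sub-cluster centroids are merged by the same procedure at the top level. The partitioning scheme, linkage, and function names here are illustrative, not the paper's.

```python
# Sketch of multilevel agglomerative clustering: random partitions are
# clustered independently, then their sub-cluster centroids are merged
# at a higher level. Single linkage used for simplicity (an assumption).
import random

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def agglomerate(points, n_clusters):
    """Naive single-link agglomerative clustering down to n_clusters."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(sqdist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)          # merge the closest pair
    return clusters

def centroid(pts):
    return tuple(sum(x) / len(pts) for x in zip(*pts))

def multilevel(points, n_parts, sub_k, final_k, seed=3):
    random.seed(seed)
    pts = list(points)
    random.shuffle(pts)                         # random partitioning
    size = (len(pts) + n_parts - 1) // n_parts
    reps = []
    for i in range(0, len(pts), size):
        for sub in agglomerate(pts[i:i + size], sub_k):
            reps.append(centroid(sub))
    return agglomerate(reps, final_k)           # merge at the higher level

data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
top = multilevel(data, n_parts=2, sub_k=2, final_k=2)
```

The top level only ever sees sub-cluster centroids rather than all samples, which is where the savings in storage and computation time come from when clusters are well separated.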

Collaboration


Dive into M. Narasimha Murty's collaborations.

Top Co-Authors

Shalabh Bhatnagar, Indian Institute of Science
Ambedkar Dukkipati, Indian Institute of Science
V. Susheela Devi, Indian Institute of Science
G. Krishna, Indian Institute of Science
D. K. Subramanian, Indian Institute of Science
V. Sridhar, Indian Institute of Science
B. Shekar, Indian Institute of Science
P. Viswanath, Indian Institute of Science