Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rajesh Parekh is active.

Publication


Featured research published by Rajesh Parekh.


IEEE Transactions on Neural Networks | 2000

Constructive neural-network learning algorithms for pattern classification

Rajesh Parekh; Jihoon Yang; Vasant G. Honavar

Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary to binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms MPyramid-real and MTiling-real that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.
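
The control loop these constructive algorithms share can be sketched in a few lines: hidden units are added one at a time and the output layer is refit after each addition, stopping when the training set is classified perfectly or a cap is reached. The Python sketch below illustrates that growth strategy only; its hidden units are random threshold units and its readout is a least-squares fit, not the MPyramid-real or MTiling-real training rules, and the dataset is an invented pair of Gaussian blobs.

import numpy as np

def grow_network(X, y, max_units=50, seed=0):
    """Add threshold hidden units until the training set is fit (or max_units)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.ones((n, 1))                       # hidden-layer outputs; starts as a bias column
    hidden, readout = [], np.zeros(1)
    for _ in range(max_units):
        w, b = rng.normal(size=d), rng.normal()          # new (random) threshold unit
        hidden.append((w, b))
        H = np.column_stack([H, (X @ w + b > 0).astype(float)])
        readout, *_ = np.linalg.lstsq(H, y, rcond=None)  # refit the output layer
        if np.mean((H @ readout > 0.5) != y) == 0.0:     # zero training error: stop growing
            break
    return hidden, readout

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)
hidden, readout = grow_network(X, y)
print("hidden units added:", len(hidden))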


Machine Learning | 2004

Lessons and Challenges from Mining Retail E-Commerce Data

Ron Kohavi; Llew Mason; Rajesh Parekh; Zijian Zheng

The architecture of Blue Martini Software's e-commerce suite has supported data collection, data transformation, and data mining since its inception. With clickstreams being collected at the application-server layer, high-level events being logged, and data automatically transformed into a data warehouse using metadata, common problems plaguing data mining using weblogs (e.g., sessionization and conflating multi-sourced data) were obviated, thus allowing us to concentrate on actual data mining goals. The paper briefly reviews the architecture and discusses many lessons learned over the last four years and the challenges that still need to be addressed. The lessons and challenges are presented across two dimensions: business-level vs. technical, and throughout the data mining lifecycle stages of data collection, data warehouse construction, business intelligence, and deployment. The lessons and challenges are also widely applicable to data mining domains outside retail e-commerce.
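
For contrast with the application-layer collection described above, the sketch below shows the kind of sessionization step that mining raw weblogs typically requires: page views from the same visitor are grouped into one session unless the gap between them exceeds a timeout. This is a generic illustration using a commonly assumed 30-minute timeout, not Blue Martini code.

from datetime import datetime, timedelta

def sessionize(pageviews, timeout=timedelta(minutes=30)):
    """pageviews: (visitor_id, timestamp) tuples, ordered by timestamp."""
    sessions = {}                                    # visitor_id -> list of sessions
    for visitor, ts in pageviews:
        visitor_sessions = sessions.setdefault(visitor, [])
        if visitor_sessions and ts - visitor_sessions[-1][-1] <= timeout:
            visitor_sessions[-1].append(ts)          # continue the current session
        else:
            visitor_sessions.append([ts])            # gap too large: start a new session
    return sessions

log = [
    ("visitor-42", datetime(2004, 1, 5, 9, 0)),
    ("visitor-42", datetime(2004, 1, 5, 9, 10)),
    ("visitor-42", datetime(2004, 1, 5, 11, 0)),     # more than 30 minutes later
]
print({v: len(s) for v, s in sessionize(log).items()})   # {'visitor-42': 2}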


Intelligent Data Analysis | 1999

DistAl: An inter-pattern distance-based constructive learning algorithm

Jihoon Yang; Rajesh Parekh; Vasant G. Honavar

Multi-layer networks of threshold logic units (TLU) offer an attractive framework for the design of pattern classification systems. A new constructive neural network learning algorithm DistAl based on inter-pattern distance is introduced. DistAl constructs a single hidden layer of hyperspherical threshold neurons. Each neuron is designed to determine a cluster of training patterns belonging to the same class. The weights and thresholds of the hidden neurons are determined directly by comparing the inter-pattern distances of the training patterns. This offers a significant advantage over other constructive learning algorithms that use an iterative and often time-consuming weight modification strategy to train individual neurons. The individual clusters represented by the hidden neurons are combined by a single output layer of threshold neurons. The speed of DistAl makes it a good candidate for data mining and knowledge acquisition from large datasets. The paper presents results of experiments using several artificial and real-world datasets. The results demonstrate that DistAl compares favorably with other learning algorithms for pattern classification.
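
A simplified version of the idea can be written directly from the description above: place hyperspherical units using inter-pattern distances alone, with no iterative weight training. The sketch below uses a greedy covering rule (centre = an uncovered training pattern, radius = just under the distance to the nearest pattern of another class); it illustrates the distance-based construction but is not the exact DistAl weight- and threshold-setting procedure, and the data points are invented.

import numpy as np

def build_spherical_units(X, y):
    """Greedily cover the training set with single-class hyperspheres."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # inter-pattern distances
    covered = np.zeros(len(X), dtype=bool)
    units = []                                                   # (centre, radius, class)
    while not covered.all():
        i = int(np.argmax(~covered))                             # first uncovered pattern
        other = D[i][y != y[i]]                                  # distances to other classes
        radius = 0.99 * other.min() if other.size else D[i].max() + 1.0
        units.append((X[i], radius, y[i]))
        covered |= D[i] <= radius                                # patterns this unit captures
    return units

def predict(units, x):
    dists = [np.linalg.norm(x - c) for c, _, _ in units]
    for (c, r, label), d in zip(units, dists):
        if d <= r:                                               # inside a hypersphere
            return label
    return units[int(np.argmin(dists))][2]                       # else nearest centre's class

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.8]])
y = np.array([0, 0, 1, 1])
units = build_spherical_units(X, y)
print(len(units), "hidden units; class", predict(units, np.array([0.1, 0.05])))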


International Colloquium on Grammatical Inference | 1998

A Polynomial Time Incremental Algorithm for Learning DFA

Rajesh Parekh; Codrin M. Nichitiu; Vasant G. Honavar

We present an efficient incremental algorithm for learning deterministic finite state automata (DFA) from labeled examples and membership queries. This algorithm is an extension of Angluin's ID procedure to an incremental framework. The learning algorithm is intermittently provided with labeled examples and has access to a knowledgeable teacher capable of answering membership queries. The learner constructs an initial hypothesis from the given set of labeled examples and the teacher's responses to membership queries. If an additional example observed by the learner is inconsistent with the current hypothesis, then the hypothesis is modified minimally to make it consistent with the new example. The update procedure ensures that the modified hypothesis is consistent with all examples observed thus far. The algorithm is guaranteed to converge to a minimum state DFA corresponding to the target when the set of examples observed by the learner includes a live complete set. We prove the convergence of this algorithm and analyze its time and space complexities.
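
The consistency test that triggers the incremental update can be sketched as follows: run the hypothesis DFA on each newly observed labelled string and flag the first disagreement. The DFA below (a toy machine accepting strings with an even number of a's) and the examples are invented for illustration; they are not output of the ID-based algorithm itself.

def dfa_accepts(dfa, string):
    """Run the hypothesis DFA on a string and report acceptance."""
    state = dfa["start"]
    for symbol in string:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

def first_inconsistency(dfa, labelled_examples):
    """Return the first (string, label) the hypothesis gets wrong, or None."""
    for string, label in labelled_examples:
        if dfa_accepts(dfa, string) != label:
            return (string, label)               # this example forces a minimal revision
    return None

hypothesis = {                                   # toy DFA: even number of 'a' symbols
    "start": 0,
    "accept": {0},
    "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1},
}
examples = [("ab", False), ("aab", True), ("aa", False)]
print(first_inconsistency(hypothesis, examples))  # ('aa', False): hypothesis must be revised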


National Conference on Artificial Intelligence | 1996

An incremental interactive algorithm for regular grammar inference

Rajesh Parekh; Vasant G. Honavar

We present provably correct interactive algorithms for learning regular grammars from positive examples and membership queries. A structurally complete set of strings from a language L(G) corresponding to a target regular grammar G implicitly specifies a lattice of finite state automata (FSA) which contains an FSA M_G corresponding to G. The lattice is compactly represented as a version space, and M_G is identified by searching the version space using membership queries. We explore the problem of regular grammar inference in a setting where positive examples are provided intermittently. We provide an incremental version of the algorithm along with a set of sufficient conditions for its convergence.
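
One concrete way to anchor such a search is the prefix tree acceptor built from the positive examples, whose states can then be merged to obtain candidate automata consistent with the sample. The sketch below constructs only that prefix tree acceptor; the version-space representation and the membership-query search from the paper are not reproduced, and the example strings are invented.

def build_pta(positive_examples):
    """Build a prefix tree acceptor: one state per prefix of the sample."""
    states = {"": False}                          # prefix -> is_accepting
    transitions = {}                              # (prefix, symbol) -> longer prefix
    for word in positive_examples:
        for i, symbol in enumerate(word):
            prefix, longer = word[:i], word[:i + 1]
            states.setdefault(longer, False)
            transitions[(prefix, symbol)] = longer
        states[word] = True                       # every positive string is accepting
    return states, transitions

states, transitions = build_pta(["ab", "abb", "b"])
print(len(states), "states,", sum(states.values()), "accepting")   # 5 states, 3 accepting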


International Symposium on Neural Networks | 1998

DistAl: an inter-pattern distance-based constructive learning algorithm

Jihoon Yang; Rajesh Parekh; Vasant G. Honavar

Multilayer networks of threshold logic units offer an attractive framework for the design of pattern classification systems. A new constructive neural network learning algorithm (DistAl) based on inter-pattern distance is introduced. DistAl constructs a single hidden layer of spherical threshold neurons. Each neuron is designed to exclude a cluster of training patterns belonging to the same class. The weights and thresholds of the hidden neurons are determined directly by comparing the inter-pattern distances of the training patterns. This offers a significant advantage over other constructive learning algorithms that use an iterative weight modification strategy to train individual neurons. The individual clusters are combined by a single output layer of threshold neurons. The speed of DistAl makes it a good candidate for data mining and knowledge acquisition from very large data sets. Results of experiments show that DistAl compares favorably with other neural network learning algorithms for pattern classification.


International Symposium on Neural Networks | 1998

Constructive theory refinement in knowledge based neural networks

Rajesh Parekh; Vasant G. Honavar

Knowledge-based artificial neural networks offer an approach for connectionist theory refinement. We present an algorithm for refining and extending the domain theory incorporated in a knowledge-based neural network using constructive neural network learning algorithms. The initial domain theory comprising propositional rules is translated into a knowledge-based network of threshold logic units (threshold neurons). The domain theory is modified by dynamically adding neurons to the existing network. A constructive neural network learning algorithm is used to add and train these additional neurons using a sequence of labeled examples. We propose a novel hybrid constructive learning algorithm based on the tiling and pyramid constructive learning algorithms that allows a knowledge-based neural network to handle patterns with continuous-valued attributes. Results of experiments on two non-trivial tasks (ribosome binding site prediction and the financial advisor) show that our algorithm compares favorably with other algorithms for connectionist theory refinement, both in terms of generalization accuracy and network size.
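
The translation of a propositional rule into a threshold logic unit can be illustrated with the usual KBANN-style mapping: positive antecedents get a large positive weight, negated antecedents the corresponding negative weight, and the threshold is set so the unit fires exactly when the rule body is satisfied. The weight value, the rule, and the helper names below are invented for illustration; the paper's exact encoding may differ.

W = 4.0   # an arbitrary "large" weight

def rule_to_unit(positive, negated):
    """Rule: head :- positive antecedents AND NOT each negated antecedent."""
    weights = {a: W for a in positive}
    weights.update({a: -W for a in negated})
    threshold = (len(positive) - 0.5) * W        # exceeded only if the body is satisfied
    return weights, threshold

def fires(unit, assignment):
    weights, threshold = unit
    return sum(w * assignment.get(a, 0) for a, w in weights.items()) > threshold

# illustrative rule: approve_loan :- good_credit, employed, NOT bankrupt
unit = rule_to_unit(positive=["good_credit", "employed"], negated=["bankrupt"])
print(fires(unit, {"good_credit": 1, "employed": 1, "bankrupt": 0}))   # True
print(fires(unit, {"good_credit": 1, "employed": 1, "bankrupt": 1}))   # False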


International Colloquium on Grammatical Inference | 2000

On the Relationship between Models for Learning in Helpful Environments

Rajesh Parekh; Vasant G. Honavar

The PAC and other equivalent learning models are widely accepted models for polynomial learnability of concept classes. However, negative results abound in the PAC learning framework (concept classes such as deterministic finite state automata (DFA) are not efficiently learnable in the PAC model). The PAC model’s requirement of learnability under all conceivable distributions could be considered too stringent a restriction for practical applications. Several models for learning in more helpful environments have been proposed in the literature including: learning from example based queries [2], online learning allowing a bounded number of mistakes [14], learning with the help of teaching sets [7], learning from characteristic sets [5], and learning from simple examples [12,4]. Several concept classes that are not learnable in the standard PAC model have been shown to be learnable in these models. In this paper we identify the relationships between these different learning models. We also address the issue of unnatural collusion between the teacher and the learner that can potentially trivialize the task of learning in helpful environments.


International Symposium on Neural Networks | 1997

MUpstart: a constructive neural network learning algorithm for multi-category pattern classification

Rajesh Parekh; Jihoon Yang; Vasant G. Honavar

Constructive learning algorithms offer an approach for dynamically constructing near-minimal neural network architectures for pattern classification tasks. Several such algorithms proposed in the literature are shown to converge to zero classification errors on finite non-contradictory datasets. However, these algorithms are restricted to two-category pattern classification and (in most cases) they require the input patterns to have binary (or bipolar) valued attributes only. We present a provably correct extension of the upstart algorithm to handle multiple output classes and real-valued pattern attributes. Results of experiments with several artificial and real-world datasets demonstrate the feasibility of this approach in practical pattern classification tasks, and also suggest several interesting directions for future research.
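
The bookkeeping that drives the upstart family can be sketched briefly: a unit's mistakes on the training set are split into wrongly-on patterns (the unit fires but should not) and wrongly-off patterns (it stays silent but should fire), and daughter units are then trained to correct each set. The sketch below shows only that error split on an invented XOR-style example; the full multi-category, real-valued MUpstart procedure is not reproduced.

import numpy as np

def split_errors(w, b, X, y):
    """Split a threshold unit's training errors into wrongly-on / wrongly-off."""
    out = (X @ w + b > 0).astype(int)
    wrongly_on = X[(out == 1) & (y == 0)]     # unit fires but target is 0
    wrongly_off = X[(out == 0) & (y == 1)]    # unit is silent but target is 1
    return wrongly_on, wrongly_off

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0, 1, 1, 0])                    # XOR: no single unit can be error-free
on, off = split_errors(np.array([1.0, 1.0]), -0.5, X, y)
print(len(on), "wrongly on;", len(off), "wrongly off")   # 1 wrongly on; 0 wrongly off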


International Symposium on Neural Networks | 1999

Data-driven theory refinement algorithms for bioinformatics

Jihoon Yang; Rajesh Parekh; Vasant G. Honavar; Drena Dobbs

Bioinformatics and related applications call for efficient algorithms for knowledge-intensive learning and data-driven knowledge refinement. Knowledge-based artificial neural networks offer an attractive approach to extending or modifying incomplete knowledge bases or domain theories. We present results of experiments with several such algorithms for data-driven knowledge discovery and theory refinement in some simple bioinformatics applications. Results of experiments on the ribosome binding site and promoter site identification problems indicate that the performance of the KBDistAl and Tiling-Pyramid algorithms compares quite favorably with that of substantially more computationally demanding techniques.

Collaboration


Dive into Rajesh Parekh's collaborations.

Top Co-Authors


Vasant G. Honavar

Pennsylvania State University


Codrin M. Nichitiu

École normale supérieure de Lyon
