
Publication


Featured research published by Arun Jagota.


International Conference on Artificial Neural Networks | 2002

New Methods for Splice Site Recognition

Sören Sonnenburg; Gunnar Rätsch; Arun Jagota; Klaus-Robert Müller

Splice sites are locations in DNA which separate protein-coding regions (exons) from noncoding regions (introns). Accurate splice site detectors thus form important components of computational gene finders. We pose splice site recognition as a classification problem with the classifier learnt from a labeled data set consisting of only local information around the potential splice site. Note that finding the correct position of splice sites without using global information is a rather hard task. We analyze the genomes of the nematode Caenorhabditis elegans and of humans using specially designed support vector kernels. One of the kernels is adapted from our previous work on detecting translation initiation sites in vertebrates and another uses an extension to the well-known Fisher-kernel. We find excellent performance on both data sets.
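
As a rough illustration of the local-window classification setup described above, the sketch below trains an SVM on k-mer (spectrum) counts of fixed-length windows around candidate splice sites. The spectrum features stand in for the paper's specialized kernels (the locality-improved kernel and the Fisher-kernel extension), and the window length, k-mer size, and toy sequences are illustrative assumptions.

```python
# Sketch: splice-site recognition as classification of local sequence windows.
# A generic k-mer (spectrum) representation stands in for the paper's SVM kernels;
# window length, k, and the toy data below are made up for illustration.
from collections import Counter
from itertools import product
import numpy as np
from sklearn.svm import SVC

K = 3  # k-mer length (assumption)
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def spectrum_features(window):
    """Count k-mer occurrences in a fixed-length window around a candidate site."""
    counts = Counter(window[i:i + K] for i in range(len(window) - K + 1))
    return np.array([counts[k] for k in KMERS], dtype=float)

# Toy data: 20-nt windows centered on candidate splice sites, label 1 = true site.
windows = ["ACGTAGGTAAGTACGTACGT", "TTTTCAGGTAAGCCGTAATT",
           "ACGTACGTACGTACGTACGT", "GGGGCCCCAAAATTTTGGGG"]
labels = [1, 1, 0, 0]

X = np.vstack([spectrum_features(w) for w in windows])
clf = SVC(kernel="linear", C=1.0)   # linear kernel on spectrum counts ~ spectrum kernel
clf.fit(X, labels)
print(clf.predict(spectrum_features("ACGTCAGGTAAGTACGTTTT").reshape(1, -1)))
```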


IEEE Transactions on Neural Networks | 2002

Energy function-based approaches to graph coloring

A. Di Blas; Arun Jagota; Richard Hughey

We describe an approach to optimization based on a multiple-restart quasi-Hopfield network where the only problem-specific knowledge is embedded in the energy function that the algorithm tries to minimize. We apply this method to three different variants of the graph coloring problem: the minimum coloring problem, the spanning subgraph k-coloring problem, and the induced subgraph k-coloring problem. Though Hopfield networks have been applied in the past to the minimum coloring problem, our encoding is more natural and compact than almost all previous ones. In particular, we use k-state neurons while almost all previous approaches use binary neurons. This reduces the number of connections in the network from (Nk)^2 to N^2 asymptotically and also circumvents a problem in earlier approaches, that of multiple colors being assigned to a single vertex. Experimental results show that our approach compares favorably with other algorithms, even nonneural ones specifically developed for the graph coloring problem.
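
A minimal sketch of the k-state-neuron encoding: one k-state variable per vertex, an energy equal to the number of monochromatic edges, and greedy per-vertex updates with random restarts. The update rule and restart counts below are illustrative stand-ins for the paper's quasi-Hopfield dynamics, not a reproduction of them.

```python
# Sketch: graph k-coloring with one k-state "neuron" per vertex.
# Energy = number of monochromatic edges; each update greedily recolors one vertex.
# The restart-and-greedy rule is an illustrative stand-in for the paper's dynamics.
import random

def color_graph(adj, k, restarts=20, sweeps=100, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    best, best_conflicts = None, float("inf")
    for _ in range(restarts):
        colors = [rng.randrange(k) for _ in range(n)]       # random restart
        for _ in range(sweeps):
            changed = False
            for v in range(n):
                # Count neighbors of each color; move v to a strictly less-used color.
                counts = [0] * k
                for u in adj[v]:
                    counts[colors[u]] += 1
                c = min(range(k), key=lambda col: counts[col])
                if c != colors[v] and counts[c] < counts[colors[v]]:
                    colors[v] = c
                    changed = True
            if not changed:
                break
        conflicts = sum(1 for v in range(n) for u in adj[v]
                        if u > v and colors[u] == colors[v])
        if conflicts < best_conflicts:
            best, best_conflicts = colors[:], conflicts
    return best, best_conflicts

# Usage: a 5-cycle is 3-colorable but not 2-colorable.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(color_graph(adj, k=3))
```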


INFORMS Journal on Computing | 1996

Some Experimental and Theoretical Results on Test Case Generators for the Maximum Clique Problem

Laura A. Sanchis; Arun Jagota

We describe and analyze test case generators for the maximum clique problem (or equivalently for the maximum independent set or vertex cover problems). The generators produce graphs with a specified number of vertices and edges, and known maximum clique size. The experimental hardness of the test cases is evaluated in relation to several heuristics for the maximum clique problem, based on neural networks, and derived from the work of A. Jagota. Our results show that the hardness of the graphs produced by this method depends in a crucial way on the construction parameters: for a given edge density, challenging graphs can only be constructed using this method for a certain range of maximum clique values; the location of this range depends on the expected maximum clique size for random graphs of that density; and the size of the range depends on the density of the graph. We also show that one of the algorithms, based on reinforcement learning techniques, has more success than the others at solving the test cases produced by the generators. In addition, NP-completeness reductions are presented showing that (in spite of what might be suggested by the results just mentioned) the maximum clique problem remains NP-hard even if the domain is restricted to graphs having a constant edge density and a constant ratio of the maximum clique size to the number of vertices, for almost all valid combinations of such values. Moreover, the maximum clique problem restricted to graphs produced by the generators with a constant ratio and edge density is also NP-hard for almost all valid parameter combinations.


Journal of Heuristics | 2001

Adaptive, Restart, Randomized Greedy Heuristics for Maximum Clique

Arun Jagota; Laura A. Sanchis

This paper presents some adaptive restart randomized greedy heuristics for MAXIMUM CLIQUE. The algorithms are based on improvements and variations of previously studied algorithms by the authors. Three kinds of adaptation are studied: adaptation of the initial state (AI) given to the greedy heuristic, adaptation of vertex weights (AW) on the graph, and no adaptation (NA). Two kinds of initialization of the vertex weights are investigated: unweighted initialization (wi := 1) and degree-based initialization (wi := di, where di is the degree of vertex i in the graph). Experiments are conducted on several kinds of graphs (random, structured) with six combinations: {NA, AI, AW} × {unweighted initialization, degree-based initialization}. A seventh, state-of-the-art semi-greedy algorithm, DMclique, is evaluated as a benchmark. We concentrate on the problem of finding large cliques in large, dense graphs in a relatively short amount of time. We find that the different strategies produce different effects, and that different algorithms work best on different kinds of graphs.
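
The following sketch shows the general shape of a restart randomized greedy clique heuristic with vertex weights. The unit and degree-based initializations follow the abstract (with +1 added to keep weights positive); the specific adaptation rule, boosting the weights of vertices in the incumbent clique, is an illustrative assumption rather than the paper's AI/AW schemes.

```python
# Sketch: restart randomized greedy heuristic for maximum clique with vertex weights.
# Initialization follows the abstract (unit or degree-based, plus 1 so no weight is zero);
# the adaptation rule below (boost weights of the best clique so far) is an assumption.
import random

def greedy_clique(adj, weights, rng):
    """Grow a clique by repeatedly picking a weight-proportional random vertex
    from the set of vertices adjacent to everything chosen so far."""
    clique = []
    candidates = set(adj)
    while candidates:
        cand = list(candidates)
        v = rng.choices(cand, weights=[weights[u] for u in cand], k=1)[0]
        clique.append(v)
        candidates &= adj[v]        # keep only common neighbors of the clique
    return clique

def adaptive_restart_clique(adj, restarts=200, degree_init=True, seed=0):
    rng = random.Random(seed)
    weights = {v: (len(adj[v]) + 1.0 if degree_init else 1.0) for v in adj}
    best = []
    for _ in range(restarts):
        clique = greedy_clique(adj, weights, rng)
        if len(clique) > len(best):
            best = clique
        for v in best:              # adaptation: favor vertices of the incumbent clique
            weights[v] *= 1.05
    return best

# Usage: a 4-clique {0, 1, 2, 3} plus a pendant vertex 4 attached to 0.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(sorted(adaptive_restart_clique(adj)))   # [0, 1, 2, 3]
```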


International Symposium on Neural Networks | 2000

A new deterministic annealing algorithm for maximum clique

Arun Jagota; M. Pelillo; A. Rangarajan

We propose a new heuristic for approximating the maximum clique problem based on a recently introduced deterministic annealing algorithm which generalizes Waugh and Westervelt's cluster-competitive net. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a linearly constrained quadratic program. Preliminary experiments on random as well as standard benchmark graphs are presented which demonstrate the validity of the approach.
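
The Motzkin-Straus result states that the maximum of x^T A x over the standard simplex equals 1 - 1/ω(G), where A is the adjacency matrix and ω(G) the clique number. The sketch below optimizes a regularized version of this quadratic program with plain replicator dynamics; it illustrates the formulation only and is not the paper's deterministic annealing algorithm or the cluster-competitive net.

```python
# Sketch: maximum clique via the Motzkin-Straus quadratic program on the simplex,
# maximize x^T A x subject to x in the standard simplex. Plain replicator dynamics
# is a simple stand-in for the paper's deterministic annealing; the 1/2 diagonal
# shift follows Bomze's regularization so local maxima correspond to maximal cliques.
import numpy as np

def motzkin_straus_clique(A, iters=2000, tol=1e-12):
    n = A.shape[0]
    B = A + 0.5 * np.eye(n)          # regularized Motzkin-Straus objective
    x = np.full(n, 1.0 / n)          # start at the barycenter of the simplex
    for _ in range(iters):
        g = B @ x
        new_x = x * g / (x @ g)      # replicator update keeps x on the simplex
        if np.linalg.norm(new_x - x, 1) < tol:
            x = new_x
            break
        x = new_x
    support = np.where(x > 1.0 / (2 * n))[0]   # vertices carrying nontrivial mass
    return support.tolist(), x

# Usage: a triangle {0, 1, 2} with a pendant vertex 3 attached to vertex 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(motzkin_straus_clique(A)[0])   # [0, 1, 2]
```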


Neurocomputing | 1998

Information Capacity of Binary Weights Associative Memories

Arun Jagota; Giri Narasimhan; Kenneth W. Regan

We study the amount of information stored in the fixed points of random instances of two binary weights associative memory models: the Willshaw model (WM) and the inverted neural network (INN). For these models, we show divergences between the information capacity (IC) as defined by Abu-Mostafa and Jacques, and information calculated from the standard notion of storage capacity by Palm and Grossman, respectively. We prove that the WM has asymptotically optimal IC for nearly the full range of threshold values, the INN likewise for constant threshold values, and both over all degrees of sparseness of the stored vectors. This is contrasted with the result by Palm, which required stored random vectors to be logarithmically sparse to achieve good storage capacity for the WM, and with that of Grossman, which showed that the INN has poor storage capacity for random vectors. We propose Q-state versions of the WM and the INN, and show that they retain asymptotically optimal IC while guaranteeing stable storage. By contrast, we show that the Q-state INN has poor storage capacity for random vectors. Our results indicate that it might be useful to ask analogous questions for other associative memory models. Our techniques are not limited to working with binary weights memories.
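
For reference, a minimal sketch of the Willshaw binary-weights memory: storage is the OR of outer products of sparse binary patterns, and recall thresholds the weighted sum at the number of active cue bits. Pattern sizes and sparsity below are arbitrary illustrative choices.

```python
# Sketch: the Willshaw binary-weights associative memory.
# Storage: W[i, j] = 1 iff some stored pattern has both units i and j active.
# Recall: threshold W @ cue at the number of active bits in the cue.
import numpy as np

def store(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=int)
    for p in patterns:
        W |= np.outer(p, p)          # binary (clipped Hebbian) update
    return W

def recall(W, cue):
    theta = cue.sum()                # threshold = number of active cue bits
    return (W @ cue >= theta).astype(int)

# Usage: store two sparse patterns over n = 10 units and recall from a partial cue.
p1 = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
p2 = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 0])
W = store(np.vstack([p1, p2]))
cue = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])   # degraded version of p1
print(recall(W, cue))                             # recovers p1
```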


Workshop on Algorithms in Bioinformatics | 2001

Comparing a Hidden Markov Model and a Stochastic Context-Free Grammar

Arun Jagota; Rune B. Lyngsø; Christian N. S. Pedersen

Stochastic models are commonly used in bioinformatics, e.g., hidden Markov models for modeling sequence families or stochastic context-free grammars for modeling RNA secondary structure formation. Comparing data is a common task in bioinformatics, and it is thus natural to consider how to compare stochastic models. In this paper we present the first study of the problem of comparing a hidden Markov model and a stochastic context-free grammar. We describe how to compute their co-emission (or collision) probability, i.e., the probability that they independently generate the same sequence. We also consider the related problem of finding a run through a hidden Markov model and a derivation in a grammar that generate the same sequence and have maximal joint probability, by a generalization of the CYK algorithm for parsing a sequence with a stochastic context-free grammar. We illustrate the methods by an experiment on RNA secondary structures.


Discrete Applied Mathematics | 2001

A generalization of maximal independent sets

Arun Jagota; Giri Narasimhan; Ľubomír Šoltés

We generalize the concept of a maximal independent set in the following way. For a nonnegative integer k we define a k-insulated set of a graph G as a subset S of its vertices such that each vertex in S is adjacent to at most k other vertices in S and each vertex not in S is adjacent to at least k+1 vertices in S. We show that it is NP-hard to approximate a maximum k-insulated set within a polynomial factor, and we describe a polynomial-time algorithm which approximates a maximum k-insulated set in an n-vertex graph to within a factor of cnk/log^2 n, for a constant c > 0. We also give an O(kn^2) algorithm which finds an arbitrary k-insulated set.
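
The definition lends itself to a simple flip-style local search, sketched below as an illustration of the concept rather than the paper's O(kn^2) algorithm: a vertex leaves S when it has more than k neighbors inside S and joins S when it has at most k. Each flip lowers the potential e(S) - (k + 1/2)|S| by at least 1/2, so the procedure terminates.

```python
# Sketch: find some k-insulated set by flip local search. This illustrates the
# definition; it is not the paper's O(kn^2) algorithm. Each flip lowers the
# potential e(S) - (k + 1/2)|S| by at least 1/2, so the loop terminates.
def k_insulated_set(adj, k):
    S = set()
    while True:
        changed = False
        for v in adj:
            deg_in_S = sum(1 for u in adj[v] if u in S)
            if v in S and deg_in_S > k:          # too crowded inside: leave S
                S.remove(v)
                changed = True
            elif v not in S and deg_in_S <= k:   # under-covered outside: join S
                S.add(v)
                changed = True
        if not changed:
            return S

# Usage: on a 4-cycle with k = 1 this returns two opposite vertices;
# with k = 0 the same routine produces a maximal independent set.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(k_insulated_set(adj, k=1))   # {0, 2}
```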


International Symposium on Neural Networks | 1990

Applying a Hopfield-style network to degraded text recognition

Arun Jagota

A Hopfield-style network model was developed and analyzed in detail by the author. This model has the advantages of higher storage capacity and less interference between stored memories than the classical discrete Hopfield network. The model is applied to machine-printed word recognition. Words to be recognized are stored as content-addressable memories. Word images are first processed by an OCR system; the network is then used to postprocess the OCR decisions.
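
A toy stand-in for the content-addressable postprocessing step: map a noisy OCR output to the nearest word in a fixed lexicon. The lexicon and the character-wise distance are illustrative assumptions and do not reproduce the Hopfield-style network itself.

```python
# Sketch: postprocessing noisy OCR output with a content-addressable word store.
# Nearest-match lookup over a fixed lexicon stands in for the Hopfield-style
# network of the paper; the lexicon and the distance are illustrative choices.
def correct_word(ocr_word, lexicon):
    def distance(a, b):
        # Character-wise mismatch count, padded to equal length.
        m = max(len(a), len(b))
        a, b = a.ljust(m), b.ljust(m)
        return sum(ca != cb for ca, cb in zip(a, b))
    return min(lexicon, key=lambda w: distance(ocr_word, w))

lexicon = ["network", "recognition", "memory", "printed"]
print(correct_word("rec0gniti0n", lexicon))   # -> "recognition"
```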


International Parallel and Distributed Processing Symposium | 2003

Neural network-based heuristic algorithms for hypergraph coloring problems with applications

Dmitri Kaznachey; Arun Jagota; Sajal K. Das

The graph coloring problem is a classic one in combinatorial optimization with a diverse set of significant applications in science and engineering. In this paper, we study several versions of this problem generalized to hypergraphs and develop solutions based on the neural network approach. We experimentally evaluate the proposed algorithms, as well as some conventional ones, on certain types of random hypergraphs. We also evaluate our algorithms on specialized hypergraphs arising in implementations of parallel data structures. The neural network algorithms turn out to be competitive with the conventional ones we study. Finally, we construct a family of hypergraphs that is hard for a greedy strong coloring algorithm, whereas our neural network solutions perform quite well.
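
A strong coloring of a hypergraph assigns distinct colors to all vertices within each hyperedge, which amounts to ordinary greedy coloring of the co-occurrence graph. The sketch below shows that greedy baseline in its simplest form; the vertex ordering is an arbitrary illustrative choice, not the specific greedy algorithm evaluated in the paper.

```python
# Sketch: greedy strong coloring of a hypergraph. Strong coloring requires all
# vertices sharing a hyperedge to receive distinct colors, i.e., ordinary graph
# coloring of the co-occurrence (conflict) graph built below.
from itertools import combinations
from collections import defaultdict

def greedy_strong_coloring(vertices, hyperedges):
    conflict = defaultdict(set)                 # co-occurrence graph
    for e in hyperedges:
        for u, v in combinations(e, 2):
            conflict[u].add(v)
            conflict[v].add(u)
    colors = {}
    for v in vertices:                          # greedy: smallest color unused by neighbors
        used = {colors[u] for u in conflict[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Usage: three hyperedges over five vertices.
vertices = [0, 1, 2, 3, 4]
hyperedges = [{0, 1, 2}, {2, 3}, {3, 4, 0}]
print(greedy_strong_coloring(vertices, hyperedges))
```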

Collaboration


Dive into Arun Jagota's collaborations.

Top Co-Authors

Richard Hughey, University of California
Andrea Di Blas, University of California
Giri Narasimhan, Florida International University
Kenneth W. Regan, State University of New York System
Sajal K. Das, Missouri University of Science and Technology
Klaus-Robert Müller, Technical University of Berlin