Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jayadeva is active.

Publication


Featured research published by Jayadeva.


Neurocomputing | 2011

Letters: Reduced twin support vector regression

Mittul Singh; Jivitej S. Chadha; Puneet Ahuja; Jayadeva; Suresh Chandra

We propose the reduced twin support vector regressor (RTSVR) that uses the notion of rectangular kernels to obtain significant improvements in execution time over the twin support vector regressor (TSVR), thus facilitating its application to larger sized datasets.
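
The rectangular-kernel idea can be illustrated with a short sketch: instead of the full square kernel matrix over all training points, the kernel is evaluated only against a small random subset of them. The snippet below is a minimal illustration of that idea; the RBF kernel, subset size, and toy data are assumptions, not the paper's experimental setup.

    # Minimal sketch of a rectangular (reduced) kernel matrix: kernel values
    # between all training points and a small random subset of them.
    # The RBF kernel and subset size are illustrative assumptions.
    import numpy as np

    def rbf_kernel(A, B, gamma=0.1):
        # Pairwise squared distances between rows of A and rows of B.
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))             # full training set
    idx = rng.choice(len(X), size=50, replace=False)
    K_square = rbf_kernel(X, X)                # 1000 x 1000 kernel
    K_rect = rbf_kernel(X, X[idx])             # 1000 x 50 rectangular kernel
    print(K_square.shape, K_rect.shape)

Working with the 1000 x 50 matrix rather than the 1000 x 1000 one is what makes the reduced formulation cheaper on large datasets.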


Swarm Intelligence | 2013

Ants find the shortest path: a mathematical proof

Jayadeva; Sameena Shah; Amit Bhaya; Ravi Kothari; Suresh Chandra

In the most basic application of Ant Colony Optimization (ACO), a set of artificial ants find the shortest path between a source and a destination. Ants deposit pheromone on paths they take, preferring paths that have more pheromone on them. Since shorter paths are traversed faster, more pheromone accumulates on them in a given time, attracting more ants and leading to reinforcement of the pheromone trail on shorter paths. This is a positive feedback process that can also cause trails to persist on longer paths, even when a shorter path becomes available. To counteract this persistence on a longer path, ACO algorithms employ remedial measures, such as using negative feedback in the form of uniform evaporation on all paths. Obtaining high performance in ACO algorithms typically requires fine tuning several parameters that govern pheromone deposition and removal. This paper proposes a new ACO algorithm, called EigenAnt, for finding the shortest path between a source and a destination, based on selective pheromone removal that occurs only on the path that is actually chosen for each trip. We prove that the shortest path is the only stable equilibrium for EigenAnt, which means that it is maintained for arbitrary initial pheromone concentrations on paths, and even when path lengths change with time. The EigenAnt algorithm uses only two parameters and does not require them to be finely tuned. Simulations that illustrate these properties are provided.
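
The selective-removal mechanism described above can be sketched in a few lines. The simulation below is a simplified two-path illustration in the spirit of EigenAnt, not the paper's exact update rule: the removal and deposition rates, and the deposit amount inversely proportional to path length, are assumptions.

    # Simplified two-path illustration of selective pheromone removal:
    # each trip, an ant picks a path with probability proportional to its
    # pheromone, a fraction alpha is removed ONLY from the chosen path, and
    # an amount inversely proportional to the path length is deposited on it.
    # Constants and the exact update form are assumptions, not the paper's.
    import numpy as np

    rng = np.random.default_rng(1)
    lengths = np.array([1.0, 2.0])     # path 0 is the shorter path
    pher = np.array([0.1, 0.9])        # start biased toward the longer path
    alpha, beta = 0.1, 0.1             # removal and deposition rates (assumed)

    for _ in range(5000):
        i = rng.choice(2, p=pher / pher.sum())
        pher[i] += -alpha * pher[i] + beta / lengths[i]

    print(pher / pher.sum())           # the shorter path ends up with the larger share

Even with an initial bias toward the longer path, removing pheromone only from the path actually chosen drives the larger share onto the shorter path.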


Neurocomputing | 2015

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

Jayadeva

The VC dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization. While SVMs produce state-of-the-art learning performance, it is well known that the VC dimension of an SVM can be unbounded; despite good results in practice, there is no guarantee of good generalization. In this paper, we show how to learn a hyperplane classifier by minimizing an exact, or Θ, bound on its VC dimension. The proposed approach, termed the Minimal Complexity Machine (MCM), involves solving a simple linear programming problem. Experimental results show that, on a number of benchmark datasets, the proposed approach learns classifiers with error rates much lower than conventional SVMs, while often using fewer support vectors. On many benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations.

Highlights:
- We learn a hyperplane classifier by minimizing an exact bound on its VC dimension.
- A fractional programming problem is formulated and reduced to an LP problem.
- Linear and kernel versions of the approach are explored.
- The approach, called the Minimal Complexity Machine, generalizes better than SVMs.
- On numerous benchmark datasets, the MCM uses far fewer support vectors than SVMs.
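
Since the abstract describes the MCM as a simple linear program, a toy hard-margin version can be written down directly. The sketch below assumes the formulation "minimise h subject to 1 <= y_i(w·x_i + b) <= h" on linearly separable data; the data, solver, and the soft-margin terms of the full MCM are not reproduced here.

    # Hard-margin MCM-style linear program (assumed formulation:
    # minimise h subject to 1 <= y_i (w.x_i + b) <= h), solved with
    # scipy.optimize.linprog on a small linearly separable toy problem.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=-2.0, scale=0.5, size=(20, 2)),
                   rng.normal(loc=+2.0, scale=0.5, size=(20, 2))])
    y = np.array([-1] * 20 + [+1] * 20)
    n, d = X.shape

    c = np.zeros(d + 2)                # variables z = [w (d), b, h]
    c[-1] = 1.0                        # objective: minimise h

    # y_i (w.x_i + b) >= 1   ->   -y_i x_i.w - y_i b      <= -1
    A1 = np.hstack([-y[:, None] * X, -y[:, None], np.zeros((n, 1))])
    # y_i (w.x_i + b) <= h   ->    y_i x_i.w + y_i b - h  <=  0
    A2 = np.hstack([y[:, None] * X, y[:, None], -np.ones((n, 1))])

    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-np.ones(n), np.zeros(n)]),
                  bounds=[(None, None)] * (d + 1) + [(1, None)])
    w, b, h = res.x[:d], res.x[d], res.x[d + 1]
    print("h =", h, "training accuracy =", np.mean(np.sign(X @ w + b) == y))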


Neurocomputing | 2012

Letters: Using Sequential Unconstrained Minimization Techniques to simplify SVM solvers

Sachindra Joshi; Jayadeva; Ganesh Ramakrishnan; Suresh Chandra

In this paper, we apply Sequential Unconstrained Minimization Techniques (SUMTs) to the classical formulations of both the L1 norm SVM and the least squares SVM. We show that each can be solved as a sequence of minimization problems with only box constraints. We propose relaxed SVM and relaxed LSSVM formulations that correspond to a single problem in the corresponding SUMT sequence. We also propose an SMO-like algorithm to solve the relaxed formulations that works by updating individual Lagrange multipliers. The methods yield comparable or better results on large benchmark datasets than classical SVM and LSSVM formulations, at substantially higher speeds.
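
The SUMT idea itself is easy to show on a toy problem: a constrained minimisation is replaced by a sequence of unconstrained ones with a growing penalty on constraint violation. The snippet below illustrates only that generic idea; it is not the paper's SVM or LSSVM formulation.

    # Toy SUMT / penalty-method loop: min (x - 2)^2 subject to x <= 1 is
    # replaced by unconstrained problems with a growing quadratic penalty.
    # This shows the SUMT idea only, not the paper's SVM formulation.
    from scipy.optimize import minimize_scalar

    def penalized(x, mu):
        return (x - 2.0) ** 2 + mu * max(0.0, x - 1.0) ** 2

    mu = 1.0
    for _ in range(6):
        x_star = minimize_scalar(lambda x: penalized(x, mu)).x
        print(f"mu = {mu:9.1f}   x* = {x_star:.4f}")
        mu *= 10.0    # each solve moves closer to the constrained optimum x = 1

Each unconstrained solve is cheap, and the sequence approaches the constrained solution as the penalty grows, which is what makes SUMT attractive for simplifying constrained solvers.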


Swarm Intelligence | 2010

Trail formation in ants: a generalized Polya urn process

Sameena Shah; Ravi Kothari; Jayadeva; Suresh Chandra

Faced with a choice of paths, an ant chooses a path with a higher concentration of pheromone. Subsequently, it drops pheromone on the path chosen. The reinforcement of the pheromone-following behavior favors the selection of an initially discovered path as the preferred path. This may cause a long path to emerge as the preferred path, were it discovered earlier than a shorter path. However, the shortness of the shorter path offsets some of the pheromone accumulated on the initially discovered longer path. In this paper, we model the trail formation behavior as a generalized Polya urn process. For k equal length paths, we give the distribution of pheromone at any time and highlight its sole dependence on the initial pheromone concentrations on paths. Additionally, we propose a method to incorporate the lengths of paths in the urn process and derive how the pheromone distribution alters on its inclusion. Analytically, we show that it is possible, under certain conditions, to reverse the initial bias that may be present in favor of paths that were discovered prior to the discovery of more efficient (shorter) paths. This addresses the Plasticity–Stability dilemma for ants, by laying out the conditions under which the system will remain stable or become plastic and change the path. Finally, we validate our analysis and results using simulations.
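
A minimal urn-style simulation makes the dependence on initial pheromone concentrations concrete. The version below ignores path lengths and uses unit deposits, so it is a simplification of the paper's generalized Polya urn model.

    # Polya-urn-style trail reinforcement for equal-length paths: an ant picks
    # a path with probability proportional to its pheromone and deposits one
    # unit on it. Path lengths are ignored here (a simplification).
    import numpy as np

    rng = np.random.default_rng(2)

    def final_share(initial, trips=10000):
        pher = np.array(initial, dtype=float)
        for _ in range(trips):
            i = rng.choice(len(pher), p=pher / pher.sum())
            pher[i] += 1.0
        return pher / pher.sum()

    print(final_share([1.0, 1.0]))   # limiting shares vary from run to run
    print(final_share([3.0, 1.0]))   # an initial 3:1 bias typically keeps the first path dominant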


Pattern Recognition and Machine Intelligence | 2005

Fuzzy proximal support vector classification via generalized eigenvalues

Jayadeva; Reshma Khemchandani; Suresh Chandra

In this paper, we propose a fuzzy extension to proximal support vector classification via generalized eigenvalues. A fuzzy membership value is assigned to each pattern, and points are classified by assigning them to the nearer of two nonparallel planes that are kept close to their respective classes. The algorithm is simple, as the solution requires solving a generalized eigenvalue problem, in contrast to SVMs, where the classifier is obtained by solving a quadratic programming problem. The approach can be used to obtain an improved classification when one has an estimate of the fuzziness of samples in either class.
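
The computational core, finding a plane close to one class and far from the other by solving a generalized eigenvalue problem, can be sketched as below. The fuzzy membership weighting of individual patterns is omitted, and the Tikhonov term and toy data are assumptions.

    # GEPSVM-style sketch (fuzzy membership weights omitted): the plane for
    # class A is the generalized eigenvector with the smallest eigenvalue of
    # G z = lambda H z, built from the augmented class matrices. The small
    # Tikhonov term delta is an assumption for numerical stability.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)
    A = rng.normal(loc=-2.0, scale=0.5, size=(30, 2))   # patterns of class +1
    B = rng.normal(loc=+2.0, scale=0.5, size=(30, 2))   # patterns of class -1

    Ae = np.hstack([A, np.ones((len(A), 1))])           # augment with bias column
    Be = np.hstack([B, np.ones((len(B), 1))])
    delta = 1e-4
    G = Ae.T @ Ae + delta * np.eye(3)
    H = Be.T @ Be + delta * np.eye(3)

    vals, vecs = eigh(G, H)           # generalized symmetric eigenproblem
    w, b = vecs[:2, 0], vecs[2, 0]    # eigenvector of the smallest eigenvalue
    print("plane for class A: w =", w, ", b =", b)

One plausible way to bring the fuzzy memberships in, stated purely as an assumption here, is to scale each row of Ae and Be by its membership value before forming G and H.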


Neurocomputing | 2016

Learning a hyperplane regressor through a tight bound on the VC dimension

Jayadeva; Suresh Chandra; Sanjit S. Batra; Siddarth Sabharwal

In this paper, we show how to learn a hyperplane regressor by minimizing a tight, or Θ, bound on its VC dimension. While minimizing the VC dimension with respect to the defining variables is an ill-posed and intractable problem, we propose a smooth, continuous, and differentiable function for a tight bound. Minimizing this tight bound yields the Minimal Complexity Machine (MCM) Regressor, and involves solving a simple linear programming problem. Experimental results show that, on a number of benchmark datasets, the proposed approach yields regressors with error rates much lower than those obtained with conventional SVM regressors, while often using fewer support vectors. On some benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations.


International Joint Conference on Neural Networks | 2006

Regularized Least Squares Twin SVR for the Simultaneous Learning of a Function and its Derivative

Jayadeva; Reshma Khemchandani; Suresh Chandra

In a recent publication, Lázaro et al. addressed the problem of simultaneously approximating a function and its derivative using support vector machines. In this paper, we propose a new approach, termed regularized least squares twin support vector regression, for the simultaneous learning of a function and its derivatives. The regressor is obtained by solving one of two related support vector machine-type problems, each of which is of a smaller size than the one obtained in Lázaro's approach. The proposed algorithm is simple and fast, as no quadratic programming problem needs to be solved. Effectively, only the solution of a pair of linear systems of equations is needed.


Neurocomputing | 2017

Sparse short-term time series forecasting models via minimum model complexity

Pawas Gupta; Sanjit S. Batra; Jayadeva

Time series forecasting is of fundamental importance in big data analysis. The prediction of noisy, non-stationary, and chaotic time series demands good generalization from small amounts of data. Vapnik showed that the total risk depends on both the empirical error and the model complexity, where the latter may be measured in terms of the Vapnik–Chervonenkis (VC) dimension. In other words, good generalization requires minimizing model complexity. The recently proposed Minimal Complexity Machine (MCM) has been shown to minimize a tight bound on the VC dimension, and has further been extended to Minimal Complexity Machine Regression (MCMR). In this paper, we present an original approach based on the MCM regressor, which builds sparse and accurate models for short-term time series forecasting. Results on a number of datasets establish that the proposed approach is superior to a number of state-of-the-art methods, and yields sparse models. These sparse models extract only the most important information present in the datasets, thereby achieving high accuracy. Sparsity in time series forecasting models is also important in reducing evaluation time, which matters when the models need to be evaluated in real time, such as when they are used as part of trading flows.
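
The short-term forecasting setup implied here, turning a series into lagged input/target pairs and fitting a regressor on them, is sketched below. The MCM regressor itself is not reproduced; a plain ridge solve stands in only to show the data pipeline, and the series, lag count, and regularisation constant are assumptions.

    # Lagged-window setup for short-term forecasting: rows of X are sliding
    # windows of the series, targets are the next value. A ridge solve stands
    # in for the MCM regressor purely to illustrate the pipeline.
    import numpy as np

    def make_lagged(series, n_lags):
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
        y = series[n_lags:]
        return X, y

    rng = np.random.default_rng(4)
    t = np.arange(300)
    series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
    X, y = make_lagged(series, n_lags=10)

    lam = 1e-2                                            # assumed ridge constant
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print("in-sample one-step prediction:", X[-1] @ w, "actual:", y[-1])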


Pattern Recognition and Machine Intelligence | 2009

Kernel Optimization Using a Generalized Eigenvalue Approach

Jayadeva; Sameena Shah; Suresh Chandra

There is no single generic kernel that suits all estimation tasks. Kernels that are learnt from the data are known to yield better classification. The coefficients of the optimal kernel that maximizes the class separability in the empirical feature space had been previously obtained by a gradient-based procedure. In this paper, we show how these coefficients can be learnt from the data by simply solving a generalized eigenvalue problem. Our approach yields a significant reduction in classification errors on selected UCI benchmarks.

Collaboration


Dive into Jayadeva's collaborations.

Top Co-Authors

Suresh Chandra, Indian Institute of Technology Delhi
Sameena Shah, Indian Institute of Technology Delhi
Reshma Khemchandani, Indian Institute of Technology Delhi
Sanjit S. Batra, Indian Institute of Technology Delhi
Amit Bhaya, Federal University of Rio de Janeiro
Ganesh Ramakrishnan, Indian Institute of Technology Bombay
Jivitej S. Chadha, Indian Institute of Technology Delhi
Mittul Singh, Indian Institute of Technology Delhi
Pawas Gupta, Indian Institute of Technology Delhi