
Publication


Featured research published by Pannagadatta K. Shivaswamy.


Knowledge Discovery and Data Mining | 2010

Multi-task learning for boosting with application to web search ranking

Olivier Chapelle; Pannagadatta K. Shivaswamy; Kilian Q. Weinberger; Ya Zhang; Belle L. Tseng

In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing the specifics of each learning task with task-specific parameters and the commonalities between them through shared parameters. This enables implicit data sharing and regularization. We evaluate our learning method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful as data sets from different countries vary greatly in size because of the cost of editorial judgments. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
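
As a rough illustration of the shared-plus-specific decomposition described above, here is a minimal sketch: a shared boosted model fit on pooled data plus small task-specific boosted corrections. This is an assumption-laden simplification, not the authors' joint algorithm; all function names and the pooled/residual scheme are illustrative.

```python
# Minimal sketch: shared boosted model on pooled data, plus small
# task-specific boosted corrections fit to each task's residuals.
# Illustrative only; not the paper's exact joint formulation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_multitask_boosting(tasks):
    """tasks: dict mapping task id -> (X, y) arrays of relevance targets."""
    # Shared component: captures the commonalities across tasks.
    X_all = np.vstack([X for X, _ in tasks.values()])
    y_all = np.concatenate([y for _, y in tasks.values()])
    shared = GradientBoostingRegressor(n_estimators=200).fit(X_all, y_all)

    # Task-specific components: small corrections on each task's
    # residuals, playing the role of task-specific parameters.
    specific = {
        t: GradientBoostingRegressor(n_estimators=50).fit(X, y - shared.predict(X))
        for t, (X, y) in tasks.items()
    }
    return shared, specific

def predict(shared, specific, task, X):
    return shared.predict(X) + specific[task].predict(X)
```

In such a scheme, small tasks lean mostly on the shared component, which is one way to read the implicit data sharing and regularization the abstract refers to.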


International Conference on Data Mining | 2007

A Support Vector Approach to Censored Targets

Pannagadatta K. Shivaswamy; Wei Chu; Martin Jansche

Censored targets, such as the time to an event in survival analysis, can generally be represented by intervals on the real line. In this paper, we propose a novel support vector technique (named SVCR) for regression on censored targets. SVCR inherits the strengths of support vector methods, such as a globally optimal solution via convex programming, fast training, and strong generalization capacity. In contrast to ranking approaches to survival analysis, our approach not only achieves superior ordering performance but also predicts the survival time itself very well. Experiments show a significant performance improvement when the majority of the training data is censored. Experimental results on several survival analysis datasets demonstrate that SVCR is very competitive against classical survival analysis models.
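
The interval view of censoring lends itself to a hinge-style loss that is zero whenever the prediction lands inside the target interval. Below is a minimal sketch under that assumption, using a linear model solved with cvxpy; it is a simplification for illustration, not the paper's exact SVCR formulation.

```python
# Sketch of interval-censored SV regression: penalize predictions only
# when they fall outside the target interval [lo, hi]. A right-censored
# example (e.g. a survival time still unobserved) sets hi = np.inf, so
# only its lower bound contributes to the loss.
import cvxpy as cp
import numpy as np

def fit_censored_svr(X, lo, hi, C=1.0):
    n, d = X.shape
    w, b = cp.Variable(d), cp.Variable()
    pred = X @ w + b
    loss = 0
    up = np.flatnonzero(np.isfinite(hi))     # examples with an upper bound
    down = np.flatnonzero(np.isfinite(lo))   # examples with a lower bound
    if up.size:
        loss += cp.sum(cp.pos(pred[up] - hi[up]))    # overshoot penalty
    if down.size:
        loss += cp.sum(cp.pos(lo[down] - pred[down]))  # undershoot penalty
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * loss)).solve()
    return w.value, b.value
```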


Machine Learning | 2011

Boosted multi-task learning

Olivier Chapelle; Pannagadatta K. Shivaswamy; Kilian Q. Weinberger; Ya Zhang; Belle L. Tseng

In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing their commonalities through shared parameters and their differences with task-specific ones. This enables implicit data sharing and regularization. Our algorithm is derived using the relationship between ℓ1-regularization and boosting. We evaluate our learning method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful as data sets from different countries vary greatly in size because of the cost of editorial judgments. Further, the proposed method obtains state-of-the-art results on a publicly available multi-task dataset. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
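
The ℓ1/boosting relationship the abstract invokes can be seen in the classical observation that forward stagewise fitting with tiny steps traces a path close to the lasso's ℓ1-regularized coefficient path. A toy illustration of that connection (not the paper's derivation; step size and iteration count are arbitrary):

```python
# Forward stagewise regression: repeatedly take a tiny step on the single
# feature most correlated with the current residual. With small eps this
# mimics the l1-regularized (lasso) coefficient path, which is the link
# between boosting and l1 regularization.
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=1000):
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        corr = X.T @ (y - X @ beta)        # feature/residual correlations
        j = np.argmax(np.abs(corr))        # weak learner = best single feature
        beta[j] += eps * np.sign(corr[j])  # tiny boosting step
    return beta
```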


International Conference on Machine Learning | 2006

Permutation invariant SVMs

Pannagadatta K. Shivaswamy; Tony Jebara

We extend Support Vector Machines to input spaces that are sets by ensuring that the classifier is invariant to permutations of sub-elements within each input. Such permutations include reorderings of scalars in an input vector, reorderings of tuples in an input matrix, or reorderings of general objects (in Hilbert spaces) within a set. This approach induces permutational invariance in the classifier, which can then be applied directly to unusual set-based representations of data. The permutation invariant Support Vector Machine alternates the Hungarian method for maximum weight matching with the maximum margin learning procedure. We effectively estimate and apply permutations to the input data points to maximize classification margin while minimizing data radius. This procedure has a strong theoretical justification via well-established error probability bounds. Experiments are shown on character recognition, 3D object recognition and various UCI datasets.
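
One concrete piece of that procedure, the Hungarian matching step, can be sketched with scipy's assignment solver: permute each input set into correspondence with a reference set before a margin-based learner sees it. This illustrates the matching step only, under an assumed squared-distance cost; the full method alternates such matching with max-margin training.

```python
# Sketch of the matching step: the Hungarian method (min-cost assignment,
# equivalently max-weight matching on negated costs) permutes the rows of
# X into correspondence with a reference set.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_to_reference(X, ref):
    """X, ref: (k, d) sets of k sub-elements; returns X with its rows
    permuted to minimize the total squared distance to ref's rows."""
    cost = ((X[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)  # (k, k)
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(len(rows), dtype=int)
    perm[cols] = rows            # ref row j is matched by X row perm[j]
    return X[perm]
```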


European Conference on Machine Learning | 2010

Laplacian spectrum learning

Pannagadatta K. Shivaswamy; Tony Jebara

The eigenspectrum of a graph Laplacian encodes smoothness information over the graph. A natural approach to learning involves transforming the spectrum of a graph Laplacian to obtain a kernel. While manual exploration of the spectrum is conceivable, non-parametric learning methods that adjust the Laplacian's spectrum promise better performance. For instance, adjusting the graph Laplacian using kernel target alignment (KTA) yields better performance when an SVM is trained on the resulting kernel. KTA relies on a simple surrogate criterion to choose the kernel; the obtained kernel is then fed to a large margin classification algorithm. In this paper, we propose novel formulations that jointly optimize relative margin and the spectrum of a kernel defined via Laplacian eigenmaps. The large relative margin case is in fact a strict generalization of the large margin case. The proposed methods show significant empirical advantage over numerous other competing methods.
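
The general recipe of a spectrum-transformed Laplacian kernel can be sketched directly. The transform below (a smoothed inverse) is one conventional hand-picked choice, standing in for the spectrum that the paper instead learns jointly with a (relative) large-margin objective.

```python
# Build a kernel by transforming the eigenspectrum of a graph Laplacian.
# r() is a hand-picked decreasing transform here; the paper learns the
# spectrum rather than fixing it.
import numpy as np

def laplacian_spectrum_kernel(W, r=lambda lam: 1.0 / (lam + 1e-3)):
    """W: symmetric nonnegative adjacency matrix; returns K = U r(L) U^T."""
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    lam, U = np.linalg.eigh(L)       # eigenvalues encode smoothness on the graph
    return (U * r(lam)) @ U.T        # transform each eigenvalue, rebuild kernel
```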


International Conference on Machine Learning and Applications | 2009

Structured Prediction with Relative Margin

Pannagadatta K. Shivaswamy; Tony Jebara

In structured prediction problems, outputs are not confined to binary labels; they are often complex objects such as sequences, trees, or alignments. Support Vector Machine (SVM) methods have been successfully extended to such prediction problems. However, recent developments in large margin methods show that higher order information can be exploited for even better generalization. This article first points out a shortcoming of the SVM approach to structured prediction; an efficient formulation is then presented to overcome the problem. The proposed algorithm exploits the fact that both the minimum and the maximum of quantities of interest are often efficiently computable even though quantities such as the mean, median and variance may not be. The resulting formulation produces state-of-the-art performance on sequence learning problems. Dramatic improvements are also seen on multi-class problems.
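
The computational point in the abstract, that extremes are tractable where moments are not, comes down to the same dynamic program computing both the maximum- and minimum-scoring output. A small sketch for chain-structured outputs, with generic scores rather than the paper's features:

```python
# Viterbi-style dynamic program over a label chain. Run with "max" it
# returns the best sequence's score; with "min", the worst. Both are exact
# in O(T*K^2), whereas the mean, median, or variance over all K^T
# sequences is not as easy to obtain.
import numpy as np

def chain_extreme_score(unary, pairwise, mode="max"):
    """unary: (T, K) per-position scores; pairwise: (K, K) transition scores."""
    pick = np.max if mode == "max" else np.min
    score = unary[0].copy()
    for t in range(1, len(unary)):
        # best (or worst) predecessor for each current label
        score = pick(score[:, None] + pairwise, axis=0) + unary[t]
    return pick(score)
```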


Journal of Machine Learning Research | 2006

Second Order Cone Programming Approaches for Handling Missing and Uncertain Data

Pannagadatta K. Shivaswamy; Chiranjib Bhattacharyya; Alexander J. Smola


Neural Information Processing Systems | 2004

A Second Order Cone Programming Formulation for Classifying Missing Data

Chiranjib Bhattacharyya; Pannagadatta K. Shivaswamy; Alexander J. Smola


Journal of Machine Learning Research | 2010

Maximum Relative Margin and Data-Dependent Regularization

Pannagadatta K. Shivaswamy; Tony Jebara


International Conference on Machine Learning | 2012

Online Structured Prediction via Coactive Learning

Pannagadatta K. Shivaswamy

Collaboration


Dive into Pannagadatta K. Shivaswamy's collaborations.

Top Co-Authors

Ya Zhang

Shanghai Jiao Tong University


Martin Jansche

Association for Computing Machinery
