Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andreas Argyriou is active.

Publication


Featured research published by Andreas Argyriou.


Machine Learning | 2008

Convex multi-task feature learning

Andreas Argyriou; Theodoros Evgeniou; Massimiliano Pontil

We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select—not learn—a few common variables across the tasks.
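The alternating scheme described in the abstract can be sketched in a few lines of numpy. This is a simplified illustration, not the authors' implementation: the trace-constrained feature covariance `D` is updated in closed form from the weight matrix `W`, and `gamma`, `eps`, and the iteration count are arbitrary illustrative choices.

```python
import numpy as np

def multitask_feature_learning(Xs, ys, gamma=0.1, iters=50, eps=1e-6):
    """Alternating minimization for convex multi-task feature learning.

    Supervised step: solve a ridge-like problem per task given the shared
    feature covariance D.  Unsupervised step: update D in closed form,
    D = (W W^T)^{1/2} / trace((W W^T)^{1/2}), which keeps trace(D) = 1.
    """
    d, T = Xs[0].shape[1], len(Xs)
    D = np.eye(d) / d                     # uniform initial feature covariance
    W = np.zeros((d, T))
    for _ in range(iters):
        Dinv = np.linalg.inv(D + eps * np.eye(d))   # eps keeps D invertible
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # supervised step: (X^T X + gamma D^{-1}) w_t = X^T y
            W[:, t] = np.linalg.solve(X.T @ X + gamma * Dinv, X.T @ y)
        # unsupervised step: matrix square root of W W^T via eigendecomposition
        vals, vecs = np.linalg.eigh(W @ W.T + eps * np.eye(d))
        root = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
        D = root / np.trace(root)
    return W, D
```

On noiseless tasks that share a single predictive feature, `D` concentrates its unit trace on that feature's direction.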


Conference on Learning Theory | 2005

Learning convex combinations of continuously parameterized basic kernels

Andreas Argyriou; Charles A. Micchelli; Massimiliano Pontil

We study the problem of learning a kernel which minimizes a regularization error functional such as that used in regularization networks or support vector machines. We consider this problem when the kernel is in the convex hull of basic kernels, for example, Gaussian kernels which are continuously parameterized by a compact set. We show that there always exists an optimal kernel which is the convex combination of at most m + 1 basic kernels, where m is the sample size, and provide a necessary and sufficient condition for a kernel to be optimal. The proof of our results is constructive and leads to a greedy algorithm for learning the kernel. We discuss the properties of this algorithm and present some preliminary numerical simulations.
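The greedy procedure mentioned in the abstract can be illustrated with a crude sketch: scan a finite grid of Gaussian widths (a discretization of the compact parameter set) and repeatedly mix in the basic kernel that most decreases a square-loss regularization functional Q(K) = y^T (K + mu I)^{-1} y. The fixed mixing step and the finite grid are simplifications of the paper's algorithm, chosen here only for illustration.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix of the Gaussian kernel with width sigma."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def greedy_kernel_learning(X, y, sigma_grid, steps=5, mu=0.1, a=0.3):
    """Greedily build a convex combination of Gaussian basic kernels that
    decreases Q(K) = y^T (K + mu I)^{-1} y (square-loss functional)."""
    n = len(y)
    Q = lambda K: y @ np.linalg.solve(K + mu * np.eye(n), y)
    grams = {s: gaussian_gram(X, s) for s in sigma_grid}
    best = min(sigma_grid, key=lambda s: Q(grams[s]))   # best single kernel
    K, combo = grams[best].copy(), {best: 1.0}
    for _ in range(steps):
        # candidate convex update K <- (1 - a) K + a K_s, fixed mixing step a
        s = min(sigma_grid, key=lambda s: Q((1 - a) * K + a * grams[s]))
        if Q((1 - a) * K + a * grams[s]) >= Q(K):
            break                        # no basic kernel improves Q further
        K = (1 - a) * K + a * grams[s]
        combo = {k: (1 - a) * w for k, w in combo.items()}
        combo[s] = combo.get(s, 0.0) + a
    return K, combo
```

The returned `combo` is a convex combination over the grid: weights are nonnegative and sum to one, matching the paper's guarantee that an optimal kernel needs at most m + 1 basic kernels.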


International Conference on Machine Learning | 2006

A DC-programming algorithm for kernel selection

Andreas Argyriou; Raphael Hauser; Charles A. Micchelli; Massimiliano Pontil

We address the problem of learning a kernel for a given supervised learning task. Our approach consists in searching within the convex hull of a prescribed set of basic kernels for one which minimizes a convex regularization functional. A unique feature of this approach compared to others in the literature is that the number of basic kernels can be infinite. We only require that they are continuously parameterized. For example, the basic kernels could be isotropic Gaussians with variance in a prescribed interval or even Gaussians parameterized by multiple continuous parameters. Our work builds upon a formulation involving a minimax optimization problem and a recently proposed greedy algorithm for learning the kernel. Although this optimization problem is not convex, it belongs to the larger class of DC (difference of convex functions) programs. Therefore, we apply recent results from DC optimization theory to create a new algorithm for learning the kernel. Our experimental results on benchmark data sets show that this algorithm outperforms a previously proposed method.
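The DC idea is that for f = g - h with g and h convex, one repeatedly linearizes h at the current iterate and minimizes the resulting convex surrogate. A minimal illustration on a toy one-dimensional DC program (not the kernel-selection problem itself; the decomposition below is an example chosen for its closed-form steps):

```python
import numpy as np

def dca(x0, grad_h, argmin_g_linear, iters=50):
    """DC algorithm for f = g - h (g, h convex):
    x_{k+1} = argmin_x  g(x) - <grad_h(x_k), x>."""
    x = x0
    for _ in range(iters):
        x = argmin_g_linear(grad_h(x))
    return x

# Toy DC program: f(x) = x^4 - x^2, with g(x) = x^4 and h(x) = x^2.
grad_h = lambda x: 2.0 * x
# argmin_x x^4 - c x  solves 4 x^3 = c,  i.e.  x = sign(c) (|c| / 4)^(1/3)
argmin_g = lambda c: np.sign(c) * (abs(c) / 4.0) ** (1.0 / 3.0)

x_star = dca(1.0, grad_h, argmin_g)   # converges to the minimizer 1/sqrt(2)
```

Each iteration solves a convex problem, and the objective f is monotonically non-increasing along the iterates, which is the property the paper exploits for kernel selection.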


European Conference on Machine Learning | 2008

An Algorithm for Transfer Learning in a Heterogeneous Environment

Andreas Argyriou; Andreas Maurer; Massimiliano Pontil

We consider the problem of learning in an environment of classification tasks. Tasks sampled from the environment are used to improve classification performance on future tasks. We consider situations in which the tasks can be divided into groups. Tasks within each group are related by sharing a low dimensional representation, which differs across the groups. We present an algorithm which divides the sampled tasks into groups and computes a common representation for each group. We report experiments on a synthetic and two image data sets, which show the advantage of the approach over single-task learning and a previous transfer learning method.
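A much-simplified stand-in for the two ingredients described above — grouping the tasks and computing a per-group representation — clusters the tasks' weight vectors with k-means and takes each group's top principal directions as its shared low-dimensional representation. This is only a conceptual sketch, not the paper's algorithm; the clustering criterion and PCA step are substitutes chosen for brevity.

```python
import numpy as np

def group_and_represent(Ws, n_groups=2, dim=1, iters=20, seed=0):
    """Cluster tasks by their weight vectors (k-means), then take each
    group's top principal directions as its shared representation."""
    W = np.asarray(Ws, dtype=float)          # shape (T, d): one row per task
    rng = np.random.default_rng(seed)
    centers = W[rng.choice(len(W), n_groups, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((W[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = W[labels == g].mean(axis=0)
    reps = {}
    for g in range(n_groups):
        if not np.any(labels == g):
            continue                         # skip groups that ended up empty
        Wg = W[labels == g] - W[labels == g].mean(axis=0)
        _, _, Vt = np.linalg.svd(Wg, full_matrices=False)
        reps[g] = Vt[:dim]                   # top `dim` shared directions
    return labels, reps
```

Tasks whose weight vectors lie near a common low-dimensional subspace end up in the same group and share the same representation.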


arXiv: Learning | 2013

On Sparsity Inducing Regularization Methods for Machine Learning

Andreas Argyriou; Luca Baldassarre; Charles A. Micchelli; Massimiliano Pontil

During the past few years there has been an explosion of interest in learning methods based on sparsity regularization. In this chapter, we discuss a general class of such methods, in which the regularizer can be expressed as the composition of a convex function ω with a linear function. This setting includes several methods such as the Group Lasso, the Fused Lasso, multi-task learning and many more. We present a general approach for solving regularization problems of this kind, under the assumption that the proximity operator of the function ω is available. Furthermore, we comment on the application of this approach to support vector machines, a technique pioneered by the groundbreaking work of Vladimir Vapnik.
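When the proximity operator of ω is available, problems of this form can be solved by proximal gradient descent. A minimal sketch for the L1 case, where the prox is soft-thresholding; swapping in another proximity operator yields methods for e.g. the Group Lasso. Step size and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, y, lam, iters=500):
    """ISTA for min_w 0.5 ||X w - y||^2 + lam ||w||_1.

    Each step is a gradient step on the smooth loss followed by the
    proximity operator of the (rescaled) regularizer.
    """
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w
```

On noiseless data with a sparse ground truth and a small `lam`, the iterates recover the true support.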


Neural Information Processing Systems | 2006

Multi-Task Feature Learning

Andreas Argyriou; Theodoros Evgeniou; Massimiliano Pontil


Neural Information Processing Systems | 2007

A Spectral Regularization Framework for Multi-Task Structure Learning

Andreas Argyriou; Massimiliano Pontil; Yiming Ying; Charles A. Micchelli


Journal of Machine Learning Research | 2009

When Is There a Representer Theorem? Vector Versus Matrix Regularizers

Andreas Argyriou; Charles A. Micchelli; Massimiliano Pontil


Neural Information Processing Systems | 2005

Combining Graph Laplacians for Semi-Supervised Learning

Andreas Argyriou; Mark Herbster; Massimiliano Pontil


Neural Information Processing Systems | 2012

Sparse Prediction with the k-Support Norm

Andreas Argyriou; Rina Foygel; Nathan Srebro

Collaboration


Dive into Andreas Argyriou's collaborations.

Top Co-Authors

Charles A. Micchelli

State University of New York System


Luca Baldassarre

École Polytechnique Fédérale de Lausanne


Nathan Srebro

Toyota Technological Institute at Chicago


Jean Morales

University College London


Mark Herbster

University College London
