Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luca Baldassarre is active.

Publication


Featured research published by Luca Baldassarre.


SIAM Journal on Optimization | 2013

Accelerated and Inexact Forward-Backward Algorithms

Silvia Villa; Saverio Salzo; Luca Baldassarre; Alessandro Verri

We propose a convergence analysis of accelerated forward-backward splitting methods for composite function minimization, when the proximity operator is not available in closed form and can only be computed up to a certain precision. We prove that the 1/k^2 convergence rate for the function values can be achieved if the admissible errors are of a certain type and satisfy a sufficiently fast decay condition. Our analysis is based on the machinery of estimate sequences first introduced by Nesterov for the study of accelerated gradient descent algorithms. Furthermore, we give a global complexity analysis, taking into account the cost of computing admissible approximations of the proximal point. An experimental analysis is also presented.
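The abstract describes the inexact-prox variant of Nesterov-accelerated proximal gradient. Below is a minimal sketch of such an iteration, assuming a hypothetical `prox_g(point, step, tol)` oracle that returns an approximate proximal point up to tolerance `tol`; the names and the specific error schedule are illustrative, not the paper's code.

```python
import numpy as np

def inexact_accelerated_fb(grad_f, prox_g, L, x0, iters=200, eps0=1.0, decay=3.0):
    """Accelerated forward-backward (FISTA-style) iteration where the
    proximal step is only computed approximately.

    grad_f : gradient of the smooth term f
    prox_g : approximate prox of g; takes (point, step, tolerance)
    L      : Lipschitz constant of grad_f
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for k in range(1, iters + 1):
        eps_k = eps0 / k**decay  # admissible error; the paper quantifies how fast it must decay
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L, eps_k)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)  # Nesterov momentum step
        x, t = x_new, t_new
    return x
```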


International Workshop on Pattern Recognition in Neuroimaging | 2012

Structured Sparsity Models for Brain Decoding from fMRI Data

Luca Baldassarre; Janaina Mourão-Miranda; Massimiliano Pontil

Structured sparsity methods have recently been proposed that allow additional spatial and temporal information to be incorporated when estimating models for decoding mental states from fMRI data. These methods carry the promise of being more interpretable than simpler Lasso or Elastic Net methods. However, although sparsity has often been advocated as leading to more interpretable models, we show that sparsity by itself, and even structured sparsity, can lead to unstable models. We present an extension of the Total Variation method and assess several other structured sparsity models on accuracy, sparsity, and stability. Our results indicate that structured sparsity via the Sparse Total Variation can mitigate some of the instability inherent in simpler sparse methods, but more research is required to build methods that can reliably infer relevant activation patterns from fMRI data.
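As one way to see the instability the paper discusses, here is a rough sketch of a support-stability check for a sparse decoder, refitting scikit-learn's Lasso on random subsamples; the function name and resampling scheme are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from sklearn.linear_model import Lasso

def support_stability(X, y, alpha=0.1, n_resamples=50, seed=0):
    """Refit the Lasso on random half-subsamples and measure how often
    each feature (voxel) is selected. An unstable model selects
    different voxels on each resample, even at similar accuracy."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_resamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += coef != 0
    return counts / n_resamples  # selection frequency per feature
```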


IEEE Signal Processing Magazine | 2014

Convexity in Source Separation: Models, Geometry, and Algorithms

Michael B. McCoy; Volkan Cevher; Quoc Tran-Dinh; Afsaneh Asaei; Luca Baldassarre

Source separation, or demixing, is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems.
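As a toy instance of the convex demixing programs this survey covers, here is a sketch that separates an observed matrix into low-rank plus sparse components by proximal gradient; the regularization weights and step size are illustrative, and the paper treats a much broader family of models.

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding, the prox of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def demix_sparse_lowrank(Z, lam=0.1, mu=1.0, iters=100):
    """Proximal gradient on
        0.5 * ||L + S - Z||_F^2 + mu * ||L||_* + lam * ||S||_1,
    splitting Z into a low-rank component L and a sparse component S."""
    L = np.zeros_like(Z)
    S = np.zeros_like(Z)
    step = 0.5  # 1 / Lipschitz constant of the coupled quadratic term
    for _ in range(iters):
        R = L + S - Z  # shared gradient of the data-fit term
        U, sv, Vt = np.linalg.svd(L - step * R, full_matrices=False)
        L = U @ np.diag(soft(sv, step * mu)) @ Vt  # singular-value thresholding
        S = soft(S - step * R, step * lam)         # entrywise soft-thresholding
    return L, S
```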


Machine Learning | 2012

Multi-output learning via spectral filtering

Luca Baldassarre; Lorenzo Rosasco; Annalisa Barla; Alessandro Verri

In this paper we study a class of regularized kernel methods for multi-output learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting and other iterative schemes. Computational properties are discussed for various examples of kernels for vector-valued functions, and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show a finite sample bound for the excess risk of the obtained estimator, which allows us to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data.
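A minimal sketch of spectral filtering on the kernel matrix, using the Landweber iteration (the iterative counterpart of L2 boosting) with early stopping as the regularizer; the helper name and defaults are assumptions, not the paper's code.

```python
import numpy as np

def landweber_kernel_fit(K, Y, n_iter=100, eta=None):
    """L2 boosting (Landweber iteration) as a spectral filter on the
    kernel matrix K: C <- C + eta * (Y - K C).
    The iteration count plays the role of the regularization parameter:
    few iterations = strong smoothing, many iterations = interpolation."""
    n = K.shape[0]
    if eta is None:
        eta = 1.0 / np.linalg.norm(K, 2)  # step below 2/||K|| ensures convergence
    C = np.zeros_like(Y, dtype=float)     # one coefficient column per output
    for _ in range(n_iter):
        C += eta * (Y - K @ C)
    return C  # predictions at the training points: K @ C

# Tikhonov regularization is the closed-form member of the same filter family:
# C = np.linalg.solve(K + n * lam * np.eye(n), Y)
```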


IEEE Transactions on Information Theory | 2016

Group-Sparse Model Selection: Hardness and Relaxations

Luca Baldassarre; Nirav Bhan; Volkan Cevher; Anastasios Kyrillidis; Siddhartha Satpathi

Group-based sparsity models are instrumental in linear and non-linear regression problems. The main premise of these models is the recovery of “interpretable” signals through the identification of their constituent groups, which can also provably translate in substantial savings in the number of measurements for linear models in compressive sensing. In this paper, we establish a combinatorial framework for group-model selection problems and highlight the underlying tractability issues. In particular, we show that the group-model selection problem is equivalent to the well-known NP-hard weighted maximum coverage problem. Leveraging a graph-based understanding of group models, we describe group structures that enable correct model selection in polynomial time via dynamic programming. Furthermore, we show that popular group structures can be explained by linear inequalities involving totally unimodular matrices, which afford other polynomial time algorithms based on relaxations. We also present a generalization of the group model that allows for within group sparsity, which can be used to model hierarchical sparsity. Finally, we study the Pareto frontier between approximation error and sparsity budget of group-sparse approximations for two tractable models, among which the tree sparsity model, and illustrate selection and computation tradeoffs between our framework and the existing convex relaxations.
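The reduction to weighted maximum coverage suggests the standard greedy heuristic as a baseline; here is a sketch under the assumption that groups are given as index sets and coefficient importance as weights. The paper's exact algorithms (e.g., the dynamic programs for favorable group structures) are more refined than this.

```python
def greedy_group_cover(groups, weights, budget):
    """Greedy heuristic for the weighted maximum coverage formulation of
    group-model selection: pick at most `budget` groups maximizing the
    total weight of covered coefficients. Greedy achieves the classical
    (1 - 1/e) approximation for this NP-hard problem.

    groups  : list of sets of coefficient indices
    weights : dict mapping coefficient index -> importance (e.g. energy)
    """
    covered, chosen = set(), []
    for _ in range(budget):
        gains = [sum(weights[i] for i in g - covered) for g in groups]
        best = max(range(len(groups)), key=lambda j: gains[j])
        if gains[best] <= 0:
            break  # no group adds uncovered weight
        chosen.append(best)
        covered |= groups[best]
    return chosen, covered
```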


arXiv: Information Theory | 2015

Structured Sparsity: Discrete and Convex Approaches

Anastasios Kyrillidis; Luca Baldassarre; Marwa El Halabi; Quoc Tran-Dinh; Volkan Cevher

During the past decades, sparsity has been shown to be of significant importance in fields such as compression, signal sampling and analysis, machine learning, and optimization. In fact, most natural data can be sparsely represented, i.e., a small set of coefficients is sufficient to describe the data using an appropriate basis. Sparsity is also used to enhance interpretability in real-life applications, where the relevant information therein typically resides in a low dimensional space. However, the true underlying structure of many signal processing and machine learning problems is often more sophisticated than sparsity alone. In practice, what makes applications differ is the existence of sparsity patterns among coefficients. In order to better understand the impact of such structured sparsity patterns, in this chapter we review some realistic sparsity models and unify their convex and non-convex treatments. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and hierarchical models. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications in image processing, neuronal signal processing, and confocal imaging.
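As a concrete example of the non-convex model projections discussed in such treatments, here is a sketch of Euclidean projection onto contiguous block-sparse signals; the block layout and interface are illustrative assumptions.

```python
import numpy as np

def block_sparse_project(x, block_size, k_blocks):
    """Euclidean projection onto signals with at most k_blocks active
    contiguous blocks: keep the k blocks with the largest energy and
    zero out the rest. Assumes len(x) is a multiple of block_size."""
    blocks = x.reshape(-1, block_size)
    energy = (blocks**2).sum(axis=1)
    keep = np.argsort(energy)[-k_blocks:]  # indices of top-energy blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)
```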


European Conference on Machine Learning | 2010

Vector field learning via spectral filtering

Luca Baldassarre; Lorenzo Rosasco; Annalisa Barla; Alessandro Verri

In this paper we present and study a new class of regularized kernel methods for learning vector fields, which are based on filtering the spectrum of the kernel matrix. These methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting. Our theoretical and experimental analysis shows that spectral filters that yield iterative algorithms, such as L2 boosting, are much faster than Tikhonov regularization and attain the same prediction performance. Finite sample bounds for the different filters can be derived in a common framework and highlight different theoretical properties of the methods. The theory of vector-valued reproducing kernel Hilbert spaces is a key tool in our study.
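For the separable case Gamma(x, x') = k(x, x') * B, vector-valued Tikhonov regularization reduces to scalar shrinkage in a joint eigenbasis; here is a sketch assuming a precomputed scalar kernel matrix K and output-coupling matrix B (the interface is illustrative).

```python
import numpy as np

def separable_kernel_regression(K, B, Y, lam=1e-2):
    """Vector-valued Tikhonov regression with a separable matrix-valued
    kernel, solved via the eigendecompositions of the scalar kernel
    matrix K (n x n) and the output-coupling matrix B (T x T), i.e.
    solving K C B + n*lam*C = Y for the coefficient matrix C.
    Cost: two small eigendecompositions instead of one nT x nT solve."""
    n, T = Y.shape
    sK, UK = np.linalg.eigh(K)
    sB, UB = np.linalg.eigh(B)
    Yt = UK.T @ Y @ UB                      # rotate into the joint eigenbasis
    Ct = Yt / (np.outer(sK, sB) + n * lam)  # scalar shrinkage per eigenpair
    return UK @ Ct @ UB.T                   # predictions at training points: K @ C @ B
```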


Symposium on Discrete Algorithms | 2014

Model-based sketching and recovery with expanders

Bubacarr Bah; Luca Baldassarre; Volkan Cevher

Linear sketching and recovery of sparse vectors with randomly constructed sparse matrices has numerous applications in several areas, including compressive sensing, data stream computing, graph sketching, and combinatorial group testing. This paper considers the same problem with the added twist that the sparse coefficients of the unknown vector exhibit further correlations as determined by a known sparsity model. We prove that exploiting model-based sparsity in recovery provably reduces the sketch size without sacrificing recovery quality. In this context, we present the model-expander iterative hard thresholding algorithm for recovering model sparse signals from linear sketches obtained via sparse adjacency matrices of expander graphs with rigorous performance guarantees. The main computational cost of our algorithm depends on the difficulty of projecting onto the model-sparse set. For the tree and group-based sparsity models we describe in this paper, such projections can be obtained in linear time. Finally, we provide numerical experiments to illustrate the theoretical results in action.
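A skeleton of model-based iterative hard thresholding of the kind the paper builds on. Note that the paper's model-expander IHT uses a median-based update exploiting the expander structure, which the plain gradient step below replaces for brevity.

```python
import numpy as np

def model_iht(A, y, project, iters=50):
    """Iterative hard thresholding with a pluggable model projection.
    A       : sketching matrix (e.g. sparse binary, expander-style)
    project : maps a vector onto the chosen sparsity model
              (plain k-sparse, tree-sparse, group-sparse, ...)"""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2)**2  # conservative gradient step
    for _ in range(iters):
        x = project(x + step * A.T @ (y - A @ x))
    return x

def top_k(x, k):
    """Simplest model projection: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out
```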


IEEE Journal of Selected Topics in Signal Processing | 2016

Learning-Based Compressive Subsampling

Luca Baldassarre; Yen-Huan Li; Jonathan Scarlett; Baran Gözcü; Ilija Bogunovic; Volkan Cevher



International Conference on Image Analysis and Processing | 2009

Towards a Theoretical Framework for Learning Multi-modal Patterns for Embodied Agents

Nicoletta Noceti; Barbara Caputo; Claudio Castellini; Luca Baldassarre; Annalisa Barla; Lorenzo Rosasco; Francesca Odone; Giulio Sandini


Collaboration


Dive into Luca Baldassarre's collaborations.

Top Co-Authors

Volkan Cevher, École Polytechnique Fédérale de Lausanne
Cosimo Aprile, École Polytechnique Fédérale de Lausanne
Yusuf Leblebici, École Polytechnique Fédérale de Lausanne
Arthur Gretton, University College London
Guy Lever, University College London
Anastasios Kyrillidis, École Polytechnique Fédérale de Lausanne
Baran Gözcü, École Polytechnique Fédérale de Lausanne