
Publications


Featured research published by Anastasios Kyrillidis.


IEEE Signal Processing Letters | 2012

Multi-Way Compressed Sensing for Sparse Low-Rank Tensors

Nicholas D. Sidiropoulos; Anastasios Kyrillidis

For linear models, compressed sensing theory and methods enable recovery of sparse signals of interest from few measurements, on the order of the number of nonzero entries rather than the length of the signal of interest. Results of a similar flavor have more recently emerged for bilinear models, but no results are available for multilinear models of tensor data. In this contribution, we consider compressed sensing for sparse and low-rank tensors. More specifically, we consider low-rank tensors synthesized as sums of outer products of sparse loading vectors, and a special class of linear dimensionality-reducing transformations that reduce each mode individually. We prove interesting “oracle” properties showing that it is possible to identify the uncompressed sparse loadings directly from the compressed tensor data. The proofs naturally suggest a two-step recovery process: fitting a low-rank model in the compressed domain, followed by per-mode decompression. This two-step process is also appealing from a computational complexity and memory capacity point of view, especially for big tensor datasets.
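The two-step recovery idea can be illustrated numerically. The sketch below is a minimal rank-1 illustration only, not the paper's algorithm: the dimensions, the Gaussian per-mode compressors, and the use of orthogonal matching pursuit for decompression are all assumptions made for this example. It verifies that compressing each mode of a rank-1 tensor compresses each sparse loading, then recovers one loading from its compressed version:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_vec(n, k):
    v = np.zeros(n)
    v[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    return v

def omp(A, y, k):
    # basic orthogonal matching pursuit: greedily select k columns of A
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, m, k = 60, 30, 3
a, b, c = sparse_vec(n, k), sparse_vec(n, k), sparse_vec(n, k)
X = np.einsum('i,j,k->ijk', a, b, c)                # sparse rank-1 tensor

# one dimensionality-reducing matrix per mode
U, V, W = (rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(3))
Y = np.einsum('ai,bj,ck,ijk->abc', U, V, W, X)      # compress each mode

# compressing each mode of a rank-1 tensor compresses each loading vector
Y_check = np.einsum('i,j,k->ijk', U @ a, V @ b, W @ c)

# step 2 (per-mode decompression): recover the sparse loading a from U @ a
a_hat = omp(U, U @ a, k)
```

With a low-rank fit already in hand (trivial here, since the tensor is rank-1), each decompression is an ordinary sparse recovery problem of size m x n rather than a problem over the full tensor.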


Journal of Mathematical Imaging and Vision | 2014

Matrix Recipes for Hard Thresholding Methods

Anastasios Kyrillidis; Volkan Cevher

In this paper, we present and analyze a new set of low-rank recovery algorithms for linear inverse problems within the class of hard thresholding methods. We provide strategies on how to set up these algorithms via basic ingredients for different configurations to achieve complexity vs. accuracy tradeoffs. Moreover, we study acceleration schemes via memory-based techniques and randomized, ϵ-approximate matrix projections to decrease the computational costs in the recovery process. For most of the configurations, we present theoretical analysis that guarantees convergence under mild problem conditions. Simulation results demonstrate notable performance improvements as compared to state-of-the-art algorithms both in terms of reconstruction accuracy and computational complexity.
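As a minimal illustration of the hard-thresholding template for low-rank recovery, the sketch below iterates a gradient step followed by projection onto rank-r matrices via a truncated SVD. This is not the paper's specific recipes or acceleration schemes: the matrix-completion setting, sampling rate, unit step size, and iteration count are all assumptions chosen for a small self-contained demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
Xstar = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.5                                    # ~50% observed entries

def rank_project(X, r):
    # hard thresholding onto rank-r matrices: keep the top-r singular triplets
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n, n))
for _ in range(500):
    # gradient step on the observed entries, then rank projection
    X = rank_project(X + mask * (Xstar - X), r)

rel_err = np.linalg.norm(X - Xstar) / np.linalg.norm(Xstar)
```

The exact SVD here is the expensive ingredient; the randomized, ϵ-approximate projections studied in the paper replace it to cut per-iteration cost.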


IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing | 2011

Recipes on hard thresholding methods

Anastasios Kyrillidis; Volkan Cevher

We present and analyze a new set of sparse recovery algorithms within the class of hard thresholding methods. We provide optimal strategies on how to set up these algorithms via basic “ingredients” for different configurations to achieve complexity vs. accuracy tradeoffs. Simulation results demonstrate notable performance improvements compared to state-of-the-art algorithms both in terms of data reconstruction and computational complexity.
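The simplest member of this class is iterative hard thresholding: a gradient step on the data-fit term followed by projection onto k-sparse vectors. The sketch below is a generic textbook instance, not the paper's tuned configurations; the problem sizes and unit step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 200, 100, 5

def hard_threshold(x, k):
    # keep the k largest-magnitude entries, zero out the rest
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

A = rng.standard_normal((m, n)) / np.sqrt(m)        # Gaussian sensing matrix
xstar = hard_threshold(rng.standard_normal(n), k)   # k-sparse ground truth
y = A @ xstar

x = np.zeros(n)
for _ in range(300):
    # gradient step on ||y - Ax||^2 / 2, then k-sparse projection
    x = hard_threshold(x + A.T @ (y - A @ x), k)
```

The “ingredients” the abstract refers to (step-size rules, memory, debiasing) all slot into this loop without changing its shape.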


International Symposium on Information Theory | 2012

Combinatorial selection and least absolute shrinkage via the Clash algorithm

Anastasios Kyrillidis; Volkan Cevher

The least absolute shrinkage and selection operator (LASSO) for linear regression exploits the geometric interplay of the ℓ2 data-error objective and the ℓ1-norm constraint to arbitrarily select sparse models. Guiding this uninformed selection process with sparsity models has been precisely the center of attention over the last decade in order to improve learning performance. To this end, we alter the selection process of LASSO to explicitly leverage combinatorial sparsity models (CSMs) via the combinatorial selection and least absolute shrinkage (Clash) operator. We provide concrete guidelines on how to leverage combinatorial constraints within Clash, and characterize Clash's guarantees as a function of the set-restricted isometry constants of the sensing matrix. Finally, our experimental results show that Clash can outperform both LASSO and model-based compressive sensing in sparse estimation.
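A rough sketch of the interplay between the two ingredients follows. This is not the paper's exact operator, which handles the combinatorial model and the ℓ1 constraint jointly; here the combinatorial selection (plain k-sparsity as the simplest CSM) and the ℓ1 shrinkage are applied sequentially, and the ℓ1 budget is assumed known.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 200, 100, 6

A = rng.standard_normal((m, n)) / np.sqrt(m)
xstar = np.zeros(n)
xstar[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ xstar
tau = np.sum(np.abs(xstar))        # l1 budget (assumed known for this demo)

def l1_project(v, tau):
    # Euclidean projection onto the l1 ball of radius tau (sort-based method)
    if np.sum(np.abs(v)) <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0)

x = np.zeros(n)
for _ in range(200):
    g = x + A.T @ (y - A @ x)                   # gradient step
    S = np.argpartition(np.abs(g), -k)[-k:]     # combinatorial selection
    x = np.zeros(n)
    x[S] = l1_project(g[S], tau)                # least absolute shrinkage on the support
```

The selection stage enforces the sparsity model; the shrinkage stage keeps the iterate inside the ℓ1 geometry that gives LASSO its estimation guarantees.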


IEEE Signal Processing Workshop on Statistical Signal Processing | 2012

MATRIX ALPS: Accelerated low rank and sparse matrix reconstruction

Anastasios Kyrillidis; Volkan Cevher

We propose MATRIX ALPS for recovering a sparse plus low-rank decomposition of a matrix given its corrupted and incomplete linear measurements. Our approach is a first-order projected gradient method over non-convex sets, and it exploits a well-known memory-based acceleration technique. We theoretically characterize the convergence properties of MATRIX ALPS using the stable embedding properties of the linear measurement operator. We then numerically illustrate that our algorithm outperforms the existing convex as well as non-convex state-of-the-art algorithms in computational efficiency without sacrificing stability.
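MATRIX ALPS handles general corrupted, incomplete linear measurements with memory-based acceleration; the sketch below illustrates only the core non-convex projections, in the fully observed case and without acceleration. The dimensions, outlier magnitudes, and the simple alternation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, k = 40, 2, 40    # matrix size, target rank, number of sparse corruptions

Lstar = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank part
Sstar = np.zeros((n, n))
pos = rng.choice(n * n, k, replace=False)
signs = rng.choice([-1.0, 1.0], k)
Sstar.flat[pos] = signs * (10 + 10 * rng.random(k))   # large, well-separated outliers
M = Lstar + Sstar                                     # fully observed, for simplicity

def rank_project(X, r):
    # projection onto rank-r matrices via truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def sparse_project(X, k):
    # keep the k largest-magnitude entries of X
    out = np.zeros_like(X)
    idx = np.argpartition(np.abs(X), -k, axis=None)[-k:]
    out.flat[idx] = X.flat[idx]
    return out

L = np.zeros((n, n))
S = np.zeros((n, n))
for _ in range(50):
    L = rank_project(M - S, r)     # low-rank projection of the sparse residual
    S = sparse_project(M - L, k)   # sparse projection of the low-rank residual

rel_err = np.linalg.norm(L - Lstar) / np.linalg.norm(Lstar)
```

Both projections are onto non-convex sets, which is exactly why the paper's stable-embedding analysis is needed to certify convergence.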


IEEE Transactions on Information Theory | 2016

Group-Sparse Model Selection: Hardness and Relaxations

Luca Baldassarre; Nirav Bhan; Volkan Cevher; Anastasios Kyrillidis; Siddhartha Satpathi

Group-based sparsity models are instrumental in linear and non-linear regression problems. The main premise of these models is the recovery of “interpretable” signals through the identification of their constituent groups, which can also provably translate into substantial savings in the number of measurements for linear models in compressive sensing. In this paper, we establish a combinatorial framework for group-model selection problems and highlight the underlying tractability issues. In particular, we show that the group-model selection problem is equivalent to the well-known NP-hard weighted maximum coverage problem. Leveraging a graph-based understanding of group models, we describe group structures that enable correct model selection in polynomial time via dynamic programming. Furthermore, we show that popular group structures can be explained by linear inequalities involving totally unimodular matrices, which afford other polynomial-time algorithms based on relaxations. We also present a generalization of the group model that allows for within-group sparsity, which can be used to model hierarchical sparsity. Finally, we study the Pareto frontier between approximation error and sparsity budget of group-sparse approximations for two tractable models, including the tree-sparsity model, and illustrate selection and computation tradeoffs between our framework and the existing convex relaxations.
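The equivalence with weighted maximum coverage suggests the standard greedy heuristic, which picks at each step the group covering the most remaining weight and achieves a (1 - 1/e) approximation. The sketch below runs one such greedy selection on a toy problem; the overlapping group structure, per-coordinate energies, and group budget are all assumed for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, G = 12, 3
# toy overlapping group structure over coordinates 0..11
groups = [[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8], [8, 9, 10], [10, 11, 0]]
x = rng.standard_normal(n) ** 2     # per-coordinate weight: signal energy

covered, chosen = set(), []
for _ in range(G):
    # greedy weighted-maximum-coverage step: pick the group adding most energy
    gains = [sum(x[i] for i in g if i not in covered) for g in groups]
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= set(groups[best])
```

For the special structures the paper identifies (e.g. via dynamic programming or totally unimodular relaxations), this approximate selection can be replaced by an exact polynomial-time one.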


SIAM Journal on Optimization | 2014

An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization

Quoc Tran-Dinh; Anastasios Kyrillidis; Volkan Cevher

Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constraint set affords a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved without the need of lifting the problem dimensions, as in the disciplined convex optimization approach. We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior point methods. As a byproduct, we describe how our framework can obtain points on t...


arXiv: Information Theory | 2015

Structured Sparsity: Discrete and Convex Approaches

Anastasios Kyrillidis; Luca Baldassarre; Marwa El Halabi; Quoc Tran-Dinh; Volkan Cevher

During the past decades, sparsity has been shown to be of significant importance in fields such as compression, signal sampling and analysis, machine learning, and optimization. In fact, most natural data can be sparsely represented, i.e., a small set of coefficients is sufficient to describe the data using an appropriate basis. Sparsity is also used to enhance interpretability in real-life applications, where the relevant information therein typically resides in a low dimensional space. However, the true underlying structure of many signal processing and machine learning problems is often more sophisticated than sparsity alone. In practice, what makes applications differ is the existence of sparsity patterns among coefficients. In order to better understand the impact of such structured sparsity patterns, in this chapter we review some realistic sparsity models and unify their convex and non-convex treatments. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and hierarchical models. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications in image processing, neuronal signal processing, and confocal imaging.


Conference on Information and Knowledge Management | 2014

Improving Co-Cluster Quality with Application to Product Recommendations

Michail Vlachos; Francesco Fusco; Charalambos Mavroforakis; Anastasios Kyrillidis; Vassilios G. Vassiliadis

Businesses store an ever-increasing amount of historical customer sales data. Given the availability of such information, it is advantageous to analyze past sales, both for revealing dominant buying patterns and for providing more targeted recommendations to clients. In this context, co-clustering has proved to be an important data-modeling primitive for revealing latent connections between two sets of entities, such as customers and products. In this work, we introduce a new algorithm for co-clustering that is both scalable and highly resilient to noise. Our method is inspired by k-means and agglomerative hierarchical clustering approaches: (i) it first searches for elementary co-clustering structures and (ii) then combines them into a better, more compact solution. The algorithm is flexible, as it does not require an explicit number of co-clusters as input, and is directly applicable to large data graphs. We apply our methodology to real sales data to analyze and visualize the connections between clients and products. We showcase a real deployment of the system and how it has been used to drive a recommendation engine. Finally, we demonstrate that the new methodology can discover co-clusters of better quality and relevance than state-of-the-art co-clustering techniques.


IEEE Transactions on Communications | 2014

Fixed-Rank Rayleigh Quotient Maximization by an MPSK Sequence

Anastasios Kyrillidis; George N. Karystinos

Certain optimization problems in communication systems, such as limited-feedback constant-envelope beamforming or noncoherent M-ary phase-shift keying (MPSK) sequence detection, result in the maximization of a fixed-rank positive semidefinite quadratic form over the MPSK alphabet. This form is a special case of the Rayleigh quotient of a matrix and, in general, its maximization by an MPSK sequence is NP-hard. However, if the rank of the matrix is not a function of its size, then the optimal solution can be computed with polynomial complexity in the matrix size. In this work, we develop a new technique to efficiently solve this problem by utilizing auxiliary continuous-valued angles and partitioning the resulting continuous space of solutions into a polynomial-size set of regions, each of which corresponds to a distinct MPSK sequence. The sequence that maximizes the Rayleigh quotient is shown to belong to this polynomial-size set of sequences, thus efficiently reducing the size of the feasible set from exponential to polynomial. Based on this analysis, we also develop an algorithm that constructs this set in polynomial time and show that it is fully parallelizable, memory efficient, and rank scalable. The proposed algorithm compares favorably with other solvers for this problem that have appeared recently in the literature.
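The rank-1 case gives a feel for the auxiliary-angle idea. The paper's algorithm handles general fixed rank and constructs the exact polynomial-size partition; the sketch below is rank-1 only and, as a simplifying assumption, sweeps the auxiliary angle over a fine grid instead of enumerating the exact region boundaries. Each angle is mapped to a candidate MPSK sequence by phase quantization, and the result is compared against exhaustive search over the exponential feasible set.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
N, M = 6, 4                                   # sequence length, PSK order (QPSK here)
alphabet = np.exp(2j * np.pi * np.arange(M) / M)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
A = np.outer(v, v.conj())                     # rank-1 positive semidefinite matrix

def quotient(s):
    # s^H A s; equals |v^H s|^2 for the rank-1 A above
    return float(np.real(s.conj() @ A @ s))

# exhaustive search over the exponential feasible set (viable only for tiny N)
best_exh = max(quotient(np.array(s)) for s in product(alphabet, repeat=N))

# rank-1 structure: a maximizer quantizes the phases of exp(1j*phi) * v for some
# auxiliary angle phi, so sweeping phi yields a small candidate set; by alphabet
# symmetry it suffices to sweep one quantization sector
best_poly = 0.0
step = 2 * np.pi / M
for phi in np.linspace(0.0, step, 200):
    phases = np.angle(v * np.exp(1j * phi))
    s = alphabet[np.round(phases / step).astype(int) % M]
    best_poly = max(best_poly, quotient(s))
```

The exhaustive search touches M**N = 4096 sequences, while the angle sweep touches only a few hundred candidates, mirroring the exponential-to-polynomial reduction in the feasible set.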

Collaboration


Dive into Anastasios Kyrillidis's collaborations.

Top Co-Authors

Volkan Cevher (École Polytechnique Fédérale de Lausanne)
Constantine Caramanis (University of Texas at Austin)
Sujay Sanghavi (University of Texas at Austin)
Dohyung Park (University of Texas at Austin)
Stephen Becker (University of Colorado Boulder)
Srinadh Bhojanapalli (University of Texas at Austin)
Luca Baldassarre (École Polytechnique Fédérale de Lausanne)
Quoc Tran-Dinh (University of North Carolina at Chapel Hill)
Rabeeh Karimi Mahabadi (École Polytechnique Fédérale de Lausanne)
Quoc Tran Dinh (Katholieke Universiteit Leuven)