
Hotspot


Dive into the research topics where Konstantinos Slavakis is active.

Publication


Featured research published by Konstantinos Slavakis.


IEEE Signal Processing Magazine | 2011

Adaptive Learning in a World of Projections

Sergios Theodoridis; Konstantinos Slavakis; Isao Yamada

This article presents a general tool for convexly constrained parameter/function estimation for both classification and regression tasks, in a time-adaptive setting and in (infinite-dimensional) reproducing kernel Hilbert spaces (RKHS). The framework is that of the set-theoretic estimation formulation and the classical projections onto convex sets (POCS) theory. However, in contrast to the classical POCS methodology, which assumes a finite number of convex sets, our method builds upon our recent extension of the theory, which considers an infinite number of convex sets. Such a context is necessary to cope with the adaptive-setting rationale, where data arrive sequentially. This article's goal is to review the advances that have taken place in this area over the years and to present them, through simple geometric arguments, as an integral part and natural evolution of the classical POCS methodology. The structure of the resulting algorithms allows extension to general RKHS. In this perspective, two very powerful techniques, convex optimization and (implicit) mapping to RKHS, are combined, providing a framework for a unifying treatment of linear and nonlinear modeling of both classification and regression tasks. Typical signal processing problems, such as filtering, smoothing, equalization, and beamforming, fall under this common umbrella. The methodology allows for the incorporation of a set of convex constraints, which encode a priori information. Convexity, rather than differentiability, is the only prerequisite for adopting error measures that quantify the model's fit against a set of training data points. Moreover, the complexity per iteration step remains linear with respect to the number of unknown parameters. The potential of the theory is demonstrated via numerical simulations for two typical problems: adaptive equalization and adaptive robust beamforming.
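The adaptive-projection rationale can be sketched in a few lines: each incoming measurement defines a hyperslab (a convex set), and the estimate is projected onto it. The sketch below is a toy illustration with synthetic noiseless data invented here; it is not the article's experiment, and `project_hyperslab` is a minimal stand-in for the projection operators the article discusses.

```python
import numpy as np

def project_hyperslab(x, a, b, eps):
    """Metric projection of x onto the hyperslab {w : |a.w - b| <= eps}."""
    r = a @ x - b
    shift = np.sign(r) * max(abs(r) - eps, 0.0)
    return x - shift * a / (a @ a)

# Toy stream: noiseless linear measurements of a hidden parameter vector
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
w = np.zeros(5)
for _ in range(2000):
    a = rng.standard_normal(5)
    w = project_hyperslab(w, a, a @ w_true, eps=0.01)
print(np.linalg.norm(w - w_true))  # small: w has entered the slabs' intersection
```

Each projection never moves the estimate away from the intersection of all slabs (Fejér monotonicity), which is why sequential projections suit streaming data.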


IEEE Transactions on Signal Processing | 2011

Online Sparse System Identification and Signal Reconstruction Using Projections Onto Weighted ℓ1 Balls

Yannis Kopsinis; Konstantinos Slavakis; Sergios Theodoridis

This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion that quantifies “data mismatch.” Sparsity is imposed by the introduction of appropriately designed weighted ℓ1 balls, and the related projection operator is also derived. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted ℓ1 balls. The resulting scheme exhibits a linear dependence, with respect to the unknown system's order, in the number of multiplications/additions, and an O(L log2 L) dependence for sorting operations, where L is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the Least-Absolute Shrinkage and Selection Operator (LASSO) algorithm and two very recently developed adaptive sparse schemes that fuse arguments from the LMS/RLS adaptation mechanisms with those imposed by the LASSO rationale.
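The projection onto a weighted ℓ1 ball {y : Σᵢ wᵢ|yᵢ| ≤ ρ} admits a sort-then-threshold form, which also explains the O(L log L) sorting cost mentioned above. The sketch below is a minimal implementation of that standard construction, not the paper's exact routine:

```python
import numpy as np

def project_weighted_l1_ball(x, w, rho):
    """Project x onto {y : sum_i w_i |y_i| <= rho}, with weights w_i > 0."""
    if np.sum(w * np.abs(x)) <= rho:
        return x.copy()                   # already inside the ball
    u = np.abs(x) / w
    order = np.argsort(u)[::-1]           # sort |x_i| / w_i in descending order
    cum_wa = np.cumsum((w * np.abs(x))[order])
    cum_w2 = np.cumsum((w ** 2)[order])
    t_cand = (cum_wa - rho) / cum_w2      # candidate thresholds per support size
    k = np.nonzero(u[order] > t_cand)[0][-1]
    t = t_cand[k]                         # the threshold consistent with its support
    return np.sign(x) * np.maximum(np.abs(x) - t * w, 0.0)

y = project_weighted_l1_ball(np.array([3.0, -1.0, 0.2]), np.ones(3), 1.0)
print(y)  # sparse result: only the dominant coordinate survives
```

The per-coordinate soft threshold t·wᵢ is what promotes sparsity: coordinates with large weights are shrunk to zero first.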


IEEE Transactions on Signal Processing | 2002

An efficient robust adaptive filtering algorithm based on parallel subgradient projection techniques

Isao Yamada; Konstantinos Slavakis; Kenyu Yamada

This paper presents a novel robust adaptive filtering scheme based on the interactive use of statistical noise information and the ideas developed originally for efficient algorithmic solutions to convex feasibility problems. The statistical noise information is quantitatively formulated as closed convex sets (stochastic property sets) by the simple design formulae developed in this paper. A simple set-theoretic inspection also leads to an important statistical reason for the sensitivity to noise of the affine projection algorithm (APA). The proposed adaptive algorithm is computationally efficient and robust to noise because it requires only an iterative parallel projection onto a series of closed half-spaces that are highly expected to contain the unknown system to be identified, and is free from the computational load of solving a system of linear equations. The numerical examples show that the proposed adaptive filtering scheme realizes dramatically fast and stable convergence for highly colored, speech-like input signals in severe noise situations.
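The parallel-projection idea can be sketched as follows: project the current estimate onto several half-spaces simultaneously, average the projections, and over-relax. The half-spaces and data below are synthetic stand-ins invented for illustration, not the paper's stochastic property sets:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the closed half-space {w : a.w <= b}."""
    r = a @ x - b
    return x - max(r, 0.0) * a / (a @ a)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
b = A @ np.ones(4) + 0.1            # all 20 half-spaces contain w* = ones(4)
w = np.zeros(4)
for _ in range(500):
    # parallel projection: average all projections, then over-relax
    proj = np.mean([project_halfspace(w, A[i], b[i]) for i in range(20)], axis=0)
    w += 1.5 * (proj - w)
print(np.max(A @ w - b))            # maximum constraint violation, near zero
```

The averaged projection is firmly nonexpansive, so any relaxation factor in (0, 2) keeps the iteration convergent toward the (nonempty) intersection; no linear system is solved at any step.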


IEEE Transactions on Signal Processing | 2011

Adaptive Robust Distributed Learning in Diffusion Sensor Networks

Symeon Chouvardas; Konstantinos Slavakis; Sergios Theodoridis

In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered and it is addressed by employing robust statistics loss functions. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required from a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.
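The combine-project-adapt protocol can be illustrated with a toy diffusion network: each node first averages its neighbors' estimates, then projects onto the convex set defined by its own local measurement. The topology, noise levels, and projection below are invented for illustration and are not the paper's experiments:

```python
import numpy as np

def project_hyperslab(x, a, b, eps):
    r = a @ x - b
    shift = np.sign(r) * max(abs(r) - eps, 0.0)
    return x - shift * a / (a @ a)

rng = np.random.default_rng(2)
w_true = rng.standard_normal(3)
K = 4                                # small fully connected network (toy topology)
C = np.full((K, K), 1.0 / K)         # uniform combination weights
W = np.zeros((K, 3))                 # one estimate per node

for _ in range(500):
    W = C @ W                        # combine: fuse the neighbors' estimates
    for k in range(K):               # project/adapt: use a local noisy measurement
        a = rng.standard_normal(3)
        b = a @ w_true + 0.01 * rng.standard_normal()
        W[k] = project_hyperslab(W[k], a, b, eps=0.05)

print(np.linalg.norm(W - w_true, axis=1))  # every node approaches w_true
```

The combine step drives the nodes toward consensus while each projection keeps every estimate consistent with local data, mirroring the asymptotic consensus and optimality properties claimed above.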


IEEE Signal Processing Magazine | 2014

Modeling and Optimization for Big Data Analytics: (Statistical) learning tools for our era of data deluge

Konstantinos Slavakis; Georgios B. Giannakis; Gonzalo Mateos

With pervasive sensors continuously collecting and storing massive amounts of information, there is no doubt this is an era of data deluge. Learning from these large volumes of data is expected to bring significant science and engineering advances along with improvements in quality of life. However, with such a big blessing come big challenges. Running analytics on voluminous data sets by central processors and storage units seems infeasible, and with the advent of streaming data sources, learning must often be performed in real time, typically without a chance to revisit past entries. Workhorse signal processing (SP) and statistical learning tools have to be re-examined in today's high-dimensional data regimes. This article contributes to the ongoing cross-disciplinary efforts in data science by putting forth encompassing models capturing a wide range of SP-relevant data analytic tasks, such as principal component analysis (PCA), dictionary learning (DL), compressive sampling (CS), and subspace clustering. It offers scalable architectures and optimization algorithms for decentralized and online learning problems, while revealing fundamental insights into the various analytic and implementation tradeoffs involved. Extensions of the encompassing models to timely data-sketching, tensor- and kernel-based learning tasks are also provided. Finally, the close connections of the presented framework with several big data tasks, such as network visualization, decentralized and dynamic estimation, prediction, and imputation of network link load traffic, as well as imputation in tensor-based medical imaging are highlighted.
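As one self-contained flavor of learning from a stream without revisiting past entries, here is Oja's rule, a classical online update for tracking the top principal component. It is shown purely as an illustration of the streaming-PCA theme; it is not an algorithm from the article, and the data model is invented:

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.array([3.0, 4.0]) / 5.0        # hidden principal direction (synthetic)

def sample():
    # strong variance along u, weak isotropic noise
    return 5.0 * rng.standard_normal() * u + 0.1 * rng.standard_normal(2)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for t in range(1, 5001):
    x = sample()
    y = w @ x
    w += (1.0 / t) * y * (x - y * w)  # Oja's rule: stochastic top-eigenvector update
    w /= np.linalg.norm(w)            # keep the estimate on the unit sphere

print(abs(w @ u))                     # alignment with u (1.0 = perfect)
```

One pass, O(d) memory: the full data matrix is never formed, which is the point of online analytics on streaming sources.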


IEEE Transactions on Signal Processing | 2008

Online Kernel-Based Classification Using Adaptive Projection Algorithms

Konstantinos Slavakis; Sergios Theodoridis; Isao Yamada

The goal of this paper is to derive a novel online algorithm for classification in reproducing kernel Hilbert spaces (RKHS) by exploiting projection-based adaptive filtering tools. The paper brings powerful convex-analytic and set-theoretic estimation arguments into machine learning by revisiting standard kernel-based classification as the problem of finding a point which belongs to a closed half-space (a special closed convex set) in an RKHS. In this way, classification in an online setting, where data arrive sequentially, is viewed as the problem of finding a point (classifier) in the nonempty intersection of an infinite sequence of closed half-spaces in the RKHS. Convex analysis is also used to introduce sparsification arguments into the design by imposing an additional simple convex constraint on the norm of the classifier. An algorithmic solution to the resulting optimization problem, where new convex constraints are added every time instant, is given by the recently introduced adaptive projected subgradient method (APSM), which generalizes a number of well-known projection-based adaptive filtering algorithms such as the classical normalized least mean squares (NLMS) and the affine projection algorithm (APA). Under mild conditions, the generated sequence of estimates enjoys monotone approximation, strong convergence, asymptotic optimality, and a characterization of the limit point. Further, we show that the additional convex constraint on the norm of the classifier naturally leads to an online sparsification of the resulting kernel series expansion. We validate the proposed design by considering the adaptive equalization problem of a nonlinear channel, and by comparing it with classical as well as with recently developed stochastic gradient descent techniques.
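The half-space viewpoint can be sketched concretely: whenever the current classifier violates the margin on a new sample, project it onto the half-space {g : y·g(x) ≥ ρ} of the RKHS, which amounts to adding one kernel atom. The Gaussian kernel, the margin ρ, and the toy stream below are all hypothetical choices made for this sketch (and it omits the paper's norm constraint and sparsification):

```python
import numpy as np

def gauss_kernel(X, z, gamma=1.0):
    return np.exp(-gamma * np.sum((X - z) ** 2, axis=-1))

rng = np.random.default_rng(4)
centers, alphas, rho = [], [], 0.3    # kernel expansion f = sum_i alpha_i k(c_i, .)

def f(x):
    if not centers:
        return 0.0
    return float(np.dot(alphas, gauss_kernel(np.array(centers), x)))

# Toy stream with label = sign of the first coordinate
for _ in range(300):
    x = rng.uniform(-1, 1, size=2)
    y = 1.0 if x[0] > 0 else -1.0
    if y * f(x) < rho:
        # metric projection of f onto {g : y g(x) >= rho} in the RKHS:
        # it adds a single kernel atom centered at x
        alphas.append(y * (rho - y * f(x)) / gauss_kernel(x, x))
        centers.append(x)

test_pts = rng.uniform(-1, 1, size=(200, 2))
acc = np.mean([np.sign(f(p)) == np.sign(p[0]) for p in test_pts])
print(acc)                            # high accuracy on the toy task
```

Samples already classified with margin ρ trigger no update, so the expansion grows only on informative samples.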


IEEE Transactions on Signal Processing | 2012

A Sparsity Promoting Adaptive Algorithm for Distributed Learning

Symeon Chouvardas; Konstantinos Slavakis; Yannis Kopsinis; Sergios Theodoridis

In this paper, a sparsity-promoting adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale. At each time instant and at each node of the network, a closed convex set, known as a property set, is constructed based on the received measurements; this defines the region in which the solution is searched for. In this paper, the property sets take the form of hyperslabs. The goal is to find a point that belongs to the intersection of these hyperslabs. To this end, sparsity-encouraging variable metric projections onto the hyperslabs have been adopted. In addition, sparsity is also imposed by employing variable metric projections onto weighted ℓ1 balls. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys monotonicity, asymptotic optimality and strong convergence to a point that lies in the consensus subspace. Finally, numerical examples verify the validity of the proposed scheme compared to other algorithms, which have been developed in the context of sparse adaptive learning.
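For a diagonal metric G = diag(g), the variable metric projection onto a hyperslab has the closed form sketched below. The weights here are invented to show the mechanism: making it "expensive" to move coordinates believed to be zero steers updates toward the active support, which is how such projections encourage sparsity:

```python
import numpy as np

def vm_project_hyperslab(x, a, b, eps, g):
    """Projection of x onto {y : |a.y - b| <= eps} in the metric
    ||v||_G = sqrt(v @ (g * v)), i.e. G = diag(g) with g > 0."""
    r = a @ x - b
    shift = np.sign(r) * max(abs(r) - eps, 0.0)
    Ginv_a = a / g
    return x - shift * Ginv_a / (a @ Ginv_a)

# Hypothetical weights: cheap to move the (believed-active) first coordinate,
# expensive to move the coordinates believed to be zero
x = np.array([0.9, 0.0, 0.0])
a = np.array([1.0, 1.0, 1.0])
g = np.array([1.0, 10.0, 10.0])
y = vm_project_hyperslab(x, a, b=2.0, eps=0.1, g=g)
print(y)  # the first coordinate absorbs most of the correction
```

With g = 1 this reduces to the ordinary (Euclidean) hyperslab projection; the metric only redistributes the same correction across coordinates.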


Numerical Functional Analysis and Optimization | 2006

The adaptive projected subgradient method over the fixed point set of strongly attracting nonexpansive mappings

Konstantinos Slavakis; Isao Yamada; Nobuhiko Ogura

This paper presents an algorithmic solution, the adaptive projected subgradient method, to the problem of asymptotically minimizing a certain sequence of non-negative continuous convex functions over the fixed point set of a strongly attracting nonexpansive mapping in a real Hilbert space. The method generalizes Polyak's subgradient algorithm for the convexly constrained minimization of a fixed nonsmooth function. By generating a strongly convergent and asymptotically optimal point sequence, the proposed method not only offers unifying principles for many projection-based adaptive filtering algorithms but also enhances adaptive filtering methods with the armory of set-theoretic estimation, by allowing a variety of a priori information on the estimandum in the form, for example, of multiple intersecting closed convex sets.
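One APSM iteration takes a subgradient-projection step for the current loss Θₖ and then maps the result back into the constraint set C. The sketch below instantiates this with a hyperslab loss and a norm-ball constraint, all invented for illustration:

```python
import numpy as np

def apsm_step(w, theta_val, grad, proj_C, lam=1.0):
    """One APSM update: relaxed subgradient projection for Theta_k, then P_C."""
    if theta_val > 0:
        w = w - lam * theta_val * grad / (grad @ grad)
    return proj_C(w)

def proj_C(w):                            # C = {w : ||w|| <= 2}, a toy constraint
    n = np.linalg.norm(w)
    return w if n <= 2 else 2 * w / n

rng = np.random.default_rng(5)
w_true = np.array([1.0, -0.5, 0.8])       # hidden target, inside C
w = np.zeros(3)
for _ in range(2000):
    a = rng.standard_normal(3)
    r = a @ w - a @ w_true                # noiseless residual for illustration
    theta = max(abs(r) - 0.01, 0.0)       # non-negative convex loss Theta_k(w)
    grad = np.sign(r) * a                 # a subgradient of Theta_k at w
    w = apsm_step(w, theta, grad, proj_C)
print(np.linalg.norm(w - w_true))         # small: losses are asymptotically minimized
```

With this particular Θₖ the step reduces to the exact hyperslab projection, which is how APSM recovers NLMS/APA-type algorithms as special cases.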


IEEE Transactions on Signal Processing | 2009

Adaptive Constrained Learning in Reproducing Kernel Hilbert Spaces: The Robust Beamforming Case

Konstantinos Slavakis; Sergios Theodoridis; Isao Yamada

This paper establishes a new paradigm for convexly constrained adaptive learning in reproducing kernel Hilbert spaces (RKHS). Although the technique is of a general nature, we present it in the context of the beamforming problem. A priori knowledge, like beampattern specifications and constraints concerning robustness against steering vector errors, takes the form of multiple closed convex sets in a high (possibly infinite) dimensional RKHS. Every robustness constraint is shown to be equivalent to a min-max optimization task formed by means of the robust statistics ε-insensitive loss function. Such a multiplicity of specifications turns out to obtain a simple expression by using the rich framework of fixed-point sets of certain mappings defined in a Hilbert space. Moreover, the cost function, which the final solution has to optimize, is expressed as an infinite sequence of convex, nondifferentiable loss functions, springing from the sequence of the incoming training data. A novel adaptive beamforming design, of linear complexity with respect to the number of unknown parameters, for such a constrained nonlinear learning problem is derived by employing a very recently developed version of the adaptive projected subgradient method (APSM). The method produces a sequence that, under mild conditions, exhibits properties like strong convergence to a beamformer that satisfies all of the imposed constraints, while asymptotically minimizing the sequence of loss functions imposed by the training data. The numerical examples demonstrate that the proposed method displays increased resolution in cases where the classical linear beamforming solutions collapse. Moreover, it leads to solutions which are in agreement with the imposed a priori knowledge, as opposed to unconstrained online kernel regression techniques.


IEEE Transactions on Signal Processing | 2007

Robust Wideband Beamforming by the Hybrid Steepest Descent Method

Konstantinos Slavakis; Isao Yamada

This paper uses the hybrid steepest descent method (HSDM) to design robust smart antennas. Several design criteria as well as robustness are mathematically described by a finite collection of closed convex sets in a real Euclidean space. Desirable beamformers are defined as points of the generalized convex feasible set which is well defined even in the case of inconsistent design criteria. A quadratic cost function is formed by the correlations of the incoming data, and the HSDM constructs a point sequence that (strongly) converges to the (unique) minimizer of the cost function over the generalized convex feasible set. Numerical examples validate the proposed design.
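The HSDM iteration alternates a nonexpansive step toward the feasible set with a vanishing gradient step for the cost. A minimal sketch, assuming a toy quadratic cost and two half-space constraints invented here (the mapping T is a composition of the two projectors, whose fixed point set is the constraint intersection):

```python
import numpy as np

def halfspace_projector(a, b):
    """Return the projector onto {x : a.x <= b}."""
    def proj(x):
        r = a @ x - b
        return x - max(r, 0.0) * a / (a @ a)
    return proj

# T = P2 o P1 is nonexpansive; Fix(T) = {x : x[0] <= 0} ∩ {x : x[1] <= 0}
P1 = halfspace_projector(np.array([1.0, 0.0]), 0.0)
P2 = halfspace_projector(np.array([0.0, 1.0]), 0.0)
d = np.array([1.0, 1.0])
grad_f = lambda x: x - d             # cost f(x) = 0.5 * ||x - d||^2

x = np.array([2.0, -3.0])
for n in range(1, 3001):
    t = P2(P1(x))                    # nonexpansive step toward Fix(T)
    x = t - (1.0 / n) * grad_f(t)    # HSDM: perturb by a vanishing gradient step
print(x)                             # approaches [0, 0], the minimizer over Fix(T)
```

The diminishing step sizes (here 1/n) are what let the iterates settle on the cost minimizer inside the fixed point set rather than merely a feasible point.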

Collaboration


Dive into Konstantinos Slavakis's collaborations.

Top Co-Authors

Sergios Theodoridis
National and Kapodistrian University of Athens

Isao Yamada
Tokyo Institute of Technology

Yannis Kopsinis
National and Kapodistrian University of Athens

Pantelis Bouboulis
National and Kapodistrian University of Athens