
Publications


Featured research published by Paul Irofti.


IEEE Signal Processing Letters | 2017

Regularized K-SVD

Bogdan Dumitrescu; Paul Irofti

The problem of dictionary learning (DL) for sparse representations can be approximately solved by several algorithms. Regularization of the optimization objective (the representation error) has been shown to be useful, since it avoids possible bottlenecks due to nearly linearly dependent atoms. We show here how the well-known K-SVD algorithm can be adapted to the regularized DL problem, despite previous claims that such an adaptation seems impossible. We also provide numerical evidence that regularized K-SVD outperforms Simultaneous Codeword Optimization, the most prominent algorithm dedicated to the regularized DL problem.
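
As a sketch of what the adaptation amounts to: the K-SVD atom update becomes a regularized rank-1 approximation, min ||F - d x^T||_F^2 + mu ||x||^2 with a unit-norm atom d, where F is the usual error matrix restricted to the signals using that atom. The NumPy snippet below is our minimal reading of this subproblem, not the authors' implementation; F and the regularization weight mu are assumed inputs.

```python
import numpy as np

def regularized_atom_update(F, mu):
    """Rank-1 subproblem min ||F - d x^T||_F^2 + mu ||x||^2 with ||d|| = 1.

    F is the representation error restricted to the signals that use the
    current atom (the usual K-SVD error matrix).
    """
    # The top singular pair of F gives the unregularized K-SVD update;
    # the regularization only shrinks the coefficient row by 1 / (1 + mu).
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    d = U[:, 0]
    x = (s[0] / (1.0 + mu)) * Vt[0, :]
    return d, x
```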


International Workshop on Machine Learning for Signal Processing | 2016

Low dimensional subspace finding via size-reducing dictionary learning

Bogdan Dumitrescu; Paul Irofti

We present a dictionary learning algorithm that aims to reduce the size of the dictionary to a parsimonious value during the learning process. The sparse coding step uses a weighted Orthogonal Matching Pursuit favoring atoms that enter more representations. The dictionary update step optimizes a regularized error, encouraging the appearance of zero rows in the representation matrix; the corresponding unused atoms are eliminated. The algorithm is extended to the case of incomplete data. Besides dictionary learning, the algorithm is also shown to be useful for finding low-dimensional subspaces. Such versatility is a feature with little precedent. Numerical examples show good convergence properties.
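
To make the elimination step concrete, here is a minimal NumPy sketch of pruning the atoms whose representation row has become (numerically) zero; the weighted OMP and the regularized update that actually create those zero rows are not reproduced, and the tolerance tol is our own choice.

```python
import numpy as np

def prune_unused_atoms(D, X, tol=1e-10):
    """Drop atoms whose representation row is (numerically) all zero.

    D: n x m dictionary, X: m x N representation matrix.
    Only the pruning step is illustrated here.
    """
    used = np.abs(X).sum(axis=1) > tol   # energy of each row of X
    return D[:, used], X[used, :]
```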


International Conference on Control Systems and Computer Science | 2015

Overcomplete Dictionary Design: The Impact of the Sparse Representation Algorithm

Paul Irofti; Bogdan Dumitrescu

The design of dictionaries for sparse representations is typically done by iterating two stages: compute sparse representations with the dictionary fixed, then update the dictionary with the representations fixed. Most of the innovation in recent work has targeted the update stage, while the representation stage was routinely done with Orthogonal Matching Pursuit (OMP), due to its low complexity. We investigate here the use of other greedy sparse representation algorithms, more computationally demanding than OMP but still of convenient complexity. These algorithms include a new proposal, the projection-based Orthogonal Least Squares. It turns out that the effect of using better representation algorithms may be more significant than improving the update stage, sometimes even leveling the performance of different update algorithms. The numerous experimental results presented here suggest the best combinations of methods and open new ways of designing and using dictionaries for sparse representations.
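
For reference, a bare-bones NumPy version of the OMP baseline used in the representation stage is sketched below (the projection-based Orthogonal Least Squares proposed in the paper is not shown); D is the dictionary, y one signal, s the sparsity target.

```python
import numpy as np

def omp(D, y, s):
    """Plain Orthogonal Matching Pursuit: represent y with at most s atoms of D."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(s):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```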


Archive | 2018

Optimizing Dictionary Size

Bogdan Dumitrescu; Paul Irofti

Until now the number of atoms was an input parameter of the DL algorithms. Its choice was left to the user, usually leading to a trial-and-error approach. We discuss here possible ways to optimize the number of atoms. The most common way to pose the DL problem is to impose a certain representation error and attempt to find the smallest dictionary that can ensure that error. The algorithms solving this problem use the sparse coding and dictionary update ideas of the standard algorithms, but add and remove atoms during the DL iterations. They start either with a small number of atoms, then try to add new atoms that are able to significantly reduce the error, or with a large number of atoms, then remove the less useful ones; the growing strategy seems more successful and is likely to have the lowest complexity. Working with a general DL structure for designing dictionaries with variable size, we present some of the algorithms with the best results, in particular Stagewise K-SVD and DLENE (DL with efficient number of elements); the first also serves as the basis for an initialization algorithm that leads to better results than the typical random initializations. We present the main ideas of a few other methods, focusing on those based on clustering, in particular on the mean shift algorithm. Finally, we discuss how OMP can be modified to reduce the number of atoms without degrading the quality of the representation too much.
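
As an illustration of the growing strategy, the sketch below adds new atoms taken from the residuals of the worst-represented signals; this is a generic heuristic for exposition only, not the Stagewise K-SVD or DLENE rules discussed in the chapter.

```python
import numpy as np

def grow_dictionary(D, Y, X, n_new=1):
    """Add n_new atoms built from the residuals of the worst-represented signals.

    D: current dictionary, Y: training signals, X: current representations.
    """
    R = Y - D @ X                           # representation residuals
    err = np.linalg.norm(R, axis=0)         # per-signal error
    worst = np.argsort(err)[-n_new:]        # signals with largest error
    new_atoms = R[:, worst] / (np.linalg.norm(R[:, worst], axis=0) + 1e-12)
    return np.hstack([D, new_atoms])
```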


Archive | 2018

Other Views on the DL Problem

Bogdan Dumitrescu; Paul Irofti

The dictionary learning problem can be posed in different ways, as we have already seen. In this chapter we first take a look at the DL problem where the sparsity level is not bounded for each training signal; instead, we bound the average sparsity level. This allows better overall representation power, due to the ability to place the nonzeros where they are most needed. The simplest way to pose the problem is to combine the error objective with an l1 penalty that encourages sparsity in the whole representation matrix X. Several algorithms can solve this problem; we present those based on coordinate descent in AK-SVD style, on majorization, and on proximal gradient. The latter approach can also be used with an l0-norm penalization. Other modifications of the objective include the addition of a regularization term (elastic net) or of a coherence penalty. Another view is given by task-driven DL, where the optimization objective is taken directly from the application and the sparse representation is only an intermediate tool. Returning to the standard DL problem, we present two new types of algorithms. One is based on selection: the atoms are chosen from a pool of candidates and so are no longer free variables. The other is online DL, where the training signals are assumed to arrive in small batches and the dictionary is updated for each batch; online DL can thus adapt the dictionary to a time-varying set of signals, following the behavior of the generating source. Two online algorithms are presented, one based on coordinate descent, the other inspired by the classic recursive least squares (RLS). Finally, we tackle the DL problem with incomplete data, where some of the signal elements are missing, and present a version of AK-SVD suited to this situation.
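
For the l1-penalized formulation, the proximal gradient (ISTA) iteration for the sparse coding step with D fixed looks as follows; this is a generic sketch of min_X ||Y - D X||_F^2 + lam * sum|X_ij|, not the chapter's specific algorithms.

```python
import numpy as np

def soft_threshold(Z, t):
    """Proximal operator of t * ||.||_1 (entrywise soft thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def l1_sparse_coding(D, Y, lam, n_iter=100):
    """ISTA for min_X ||Y - D X||_F^2 + lam * sum |X_ij|, with D fixed."""
    L = 2 * np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ X - Y)         # gradient of the smooth part
        X = soft_threshold(X - grad / L, lam / L)
    return X
```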


Archive | 2018

Kernel Dictionary Learning

Bogdan Dumitrescu; Paul Irofti

Sparse representations are linear by construction, a fact that can hinder their use in classification problems. Building feature vectors from the signals to be classified can overcome these difficulties and is automated by employing kernels, which are functions that quantify the similarity between two vectors. DL can be extended to kernel form by assuming a specific form of the dictionary. DL algorithms have the usual form, comprising sparse coding and dictionary update. We present the kernel versions of OMP and of the most common update algorithms: MOD, SGK, AK-SVD, and K-SVD. The kernel methods involve many operations with a square kernel matrix whose size equals the number of signals; hence, their complexities are significantly higher than those of the standard methods. We present two ideas for reducing the size of the problem, the most prominent being the one based on Nyström sampling. Finally, we show how kernel DL can be adapted to classification methods involving sparse representations, in particular SRC and discriminative DL.
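
A common way to obtain the kernel form, and the one assumed in the sketch below, is to parameterize the dictionary as D = Phi(Y) A, so that the atom-signal correlations needed by a kernel OMP reduce to operations with the kernel matrix K = Phi(Y)^T Phi(Y); the helper names and the Gaussian kernel are our choices for illustration, not the book's exact algorithms.

```python
import numpy as np

def rbf_kernel(Y, Z, sigma=1.0):
    """Gaussian kernel matrix K[i, j] = k(Y[:, i], Z[:, j]) for column signals."""
    sq = (np.sum(Y**2, axis=0)[:, None] + np.sum(Z**2, axis=0)[None, :]
          - 2 * Y.T @ Z)
    return np.exp(-sq / (2 * sigma**2))

def kernel_atom_correlations(A, K, i):
    """Correlations D^T phi(y_i) under D = Phi(Y) A: they reduce to A^T K[:, i],
    so only the kernel matrix K = Phi(Y)^T Phi(Y) is ever needed."""
    return A.T @ K[:, i]
```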


Archive | 2018

Dictionary Learning Problem

Bogdan Dumitrescu; Paul Irofti

Dictionary learning can be formulated as an optimization problem in several ways. We present here the basic form, where the representation error is minimized under a sparsity constraint, and discuss several views and relations with other data analysis and signal processing problems. We study some properties of the DL problem and their implications for the optimization process, and explain the two subproblems that are crucial in DL algorithms: sparse coding and dictionary update. In preparation for algorithm analysis and comparisons, we present the main test problems, dealing with representation error and dictionary recovery; we give all the details of the test procedures, using either artificial data or images. Finally, as an appetizer for the remainder of the book, we illustrate the wide use of DL algorithms in the context of sparse representations through several applications such as denoising, inpainting, compression, compressed sensing, and classification.
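
The basic form alternates the two subproblems just mentioned; a skeleton of that alternation is sketched below, with a simple least-squares (MOD-style) dictionary update chosen only to keep the example short, and with the sparse coding routine (for instance column-wise OMP) passed in as an argument.

```python
import numpy as np

def dl_alternation(Y, D0, sparse_coding, n_iter=20):
    """Skeleton of the basic DL alternation:
    minimize ||Y - D X||_F^2 with column-sparse X and unit-norm atoms.

    sparse_coding(D, Y) must return the representation matrix X.
    """
    D = D0.copy()
    X = None
    for _ in range(n_iter):
        X = sparse_coding(D, Y)                          # sparse coding stage
        D = Y @ np.linalg.pinv(X)                        # least-squares dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # re-normalize atoms
    return D, X
```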


Archive | 2018

Regularization and Incoherence

Bogdan Dumitrescu; Paul Irofti

A dictionary should be faithful to the signals it represents, in the sense that the sparse representation error in learning is small, but it must also be reliable when recovering sparse representations. A direct way to obtain good recovery guarantees is to modify the objective of the DL optimization such that the resulting dictionary is incoherent, meaning that the atoms are generally far from each other. Alternatively, we can explicitly impose mutual coherence bounds. A related modification of the objective is regularization, in the usual form encountered in least squares problems. Regularization has the additional benefit of letting the DL process avoid bottlenecks generated by ill-conditioned dictionaries. We present several algorithms for regularization and for promoting incoherence and illustrate their benefits. The K-SVD family can be adapted to such approaches, with good results in the case of regularization. Other methods for obtaining incoherence are based on the gradient of a combined objective that includes the frame potential, or on inserting a decorrelation step into the standard algorithms. Such a step alternates projections onto two sets whose properties must be shared by the Gram matrix of the dictionary, which reduce mutual coherence, with rotations of the dictionary, which restore its adequacy to the training signals. We also give a glimpse of the currently most efficient methods aiming to minimize the mutual coherence of a frame, regardless of the training signals.
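
The quantity that the incoherence-promoting methods act on is the mutual coherence of the dictionary, the largest absolute off-diagonal entry of its Gram matrix for unit-norm atoms; a small NumPy helper computing it is given below.

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence: max_{i != j} |d_i^T d_j| for unit-norm atoms d_i."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize atoms
    G = np.abs(Dn.T @ Dn)                # Gram matrix (absolute values)
    np.fill_diagonal(G, 0.0)             # ignore the unit diagonal
    return G.max()
```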


International Conference on Telecommunications | 2016

Overcomplete dictionary learning with Jacobi atom updates

Paul Irofti; Bogdan Dumitrescu

Dictionary learning for sparse representations is traditionally approached with sequential atom updates, in which an optimized atom is used immediately for the optimization of the next atoms. We propose instead a Jacobi version, in which groups of atoms are updated independently, in parallel. Extensive numerical evidence for sparse image representation shows that the parallel algorithms, especially when all atoms are updated simultaneously, give better dictionaries than their sequential counterparts.
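
A minimal sketch of the Jacobi idea is shown below: every atom update is computed from the residual of the current dictionary, so the updates are mutually independent and can be applied (or parallelized) at once; the update direction used here is the simple least-squares one, as an illustration rather than the paper's exact rule.

```python
import numpy as np

def jacobi_atom_updates(D, Y, X):
    """Jacobi-style dictionary update: all atoms recomputed from the same residual."""
    E = Y - D @ X                           # shared residual for all atoms
    D_new = D.copy()
    for j in range(D.shape[1]):
        idx = np.nonzero(X[j, :])[0]        # signals that use atom j
        if idx.size == 0:
            continue
        # error as if atom j were removed, restricted to the signals using it
        F = E[:, idx] + np.outer(D[:, j], X[j, idx])
        d = F @ X[j, idx]                   # least-squares direction for atom j
        n = np.linalg.norm(d)
        if n > 0:
            D_new[:, j] = d / n             # updates are independent of each other
    return D_new
```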


International Conference on System Theory, Control and Computing | 2015

Sparse denoising with learned composite structured dictionaries

Paul Irofti

In the sparse representation field, recent studies using composite dictionaries have shown encouraging results in noise removal. In this paper we look at dictionary composition in the particular case of dictionaries structured as a union of orthonormal bases. Our study focuses on denoising performance, providing new algorithms that outperform existing solutions, and also on speed, resulting in algorithms that execute much faster with a negligible denoising penalty.
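
To illustrate why the union-of-orthonormal-bases structure is attractive for denoising, the toy sketch below represents a signal in each orthobasis by a plain transform followed by hard thresholding and keeps the best reconstruction; this is only a generic illustration of the structure's low cost, not the algorithms proposed in the paper.

```python
import numpy as np

def denoise_union_of_bases(y, bases, threshold):
    """Toy denoising over a union of orthonormal bases.

    Each orthobasis Q gives its coefficients for free (Q.T @ y, no pursuit);
    coefficients are hard-thresholded and the cheapest reconstruction is kept.
    """
    best, best_cost = None, np.inf
    for Q in bases:                          # each Q is orthonormal: Q.T @ Q = I
        c = Q.T @ y                          # exact coefficients in this basis
        c_thr = np.where(np.abs(c) > threshold, c, 0.0)
        y_hat = Q @ c_thr
        cost = np.sum((y - y_hat) ** 2) + threshold**2 * np.count_nonzero(c_thr)
        if cost < best_cost:
            best, best_cost = y_hat, cost
    return best
```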

Collaboration


Dive into Paul Irofti's collaborations.

Top Co-Authors


Bogdan Dumitrescu

Politehnica University of Bucharest

Florin Stoican

Politehnica University of Bucharest
