Publication


Featured research published by Karin Schnass.


IEEE Transactions on Information Theory | 2008

Compressed Sensing and Redundant Dictionaries

Holger Rauhut; Karin Schnass; Pierre Vandergheynst

This paper extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix which is a composition of a random matrix of certain type and a deterministic dictionary has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via basis pursuit (BP) from a small number of random measurements. Further, thresholding is investigated as a recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared in numerical experiments.
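As a rough illustration of this setup, the sketch below (plain NumPy, with made-up dimensions and a random Gaussian dictionary standing in for a structured one) forms the composed matrix of a random measurement matrix and an overcomplete dictionary, then recovers a dictionary-sparse signal by simple thresholding; the paper's basis pursuit route would replace the support-selection step with an ℓ1 program.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, m, k = 64, 128, 32, 3          # signal dim, dictionary size, measurements, sparsity

# Overcomplete dictionary with unit-norm atoms (illustrative random choice).
D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)

# k-sparse coefficients and the signal they synthesize.
x = np.zeros(K)
supp = rng.choice(K, k, replace=False)
x[supp] = rng.standard_normal(k)
y = D @ x

# Random Gaussian measurements: the effective sensing matrix is M @ D.
M = rng.standard_normal((m, d)) / np.sqrt(m)
b = M @ y
A = M @ D

# Thresholding recovery: keep the k atoms most correlated with the
# measurements, then solve least squares on that support.  For small
# sparsity levels this typically succeeds.
est = np.argsort(-np.abs(A.T @ b))[:k]
sol, *_ = np.linalg.lstsq(A[:, est], b, rcond=None)
x_hat = np.zeros(K)
x_hat[est] = sol

print("support recovered:", set(est) == set(supp))
```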


IEEE Transactions on Signal Processing | 2008

Dictionary Preconditioning for Greedy Algorithms

Karin Schnass; Pierre Vandergheynst

This paper introduces the concept of sensing dictionaries. It presents an alteration of greedy algorithms like thresholding or (orthogonal) matching pursuit which improves their performance in finding sparse signal representations in redundant dictionaries while maintaining the same complexity. These algorithms can be split into a sensing and a reconstruction step, and the former will fail to identify correct atoms if the cumulative coherence of the dictionary is too high. We thus modify the sensing step by introducing a special sensing dictionary. The correct selection of components is then determined by the cross cumulative coherence which can be considerably lower than the cumulative coherence. We characterize the optimal sensing matrix and develop a constructive method to approximate it. Finally, we compare the performance of thresholding and OMP using the original and modified algorithms.
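A minimal sketch of the sensing-dictionary idea: atom selection uses a separate matrix Ψ while reconstruction keeps Φ. The particular choice Ψ = (ΦΦᵀ)⁻¹Φ below is only an illustrative stand-in, not the paper's optimized construction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, k = 32, 64, 2

Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)

# Simple sensing dictionary: columns of (Phi Phi^T)^{-1} Phi, rescaled so
# that <psi_i, phi_i> = 1.  (Illustrative only; the paper derives and
# approximates the optimal sensing matrix.)
Psi = np.linalg.solve(Phi @ Phi.T, Phi)
Psi /= np.einsum('ij,ij->j', Psi, Phi)

x = np.zeros(K)
supp = rng.choice(K, k, replace=False)
x[supp] = 1.0 + rng.random(k)        # well-balanced coefficients
y = Phi @ x

# Sensing step uses Psi; reconstruction would still use Phi.
sel_plain   = np.argsort(-np.abs(Phi.T @ y))[:k]
sel_sensing = np.argsort(-np.abs(Psi.T @ y))[:k]
print("plain thresholding:", sorted(sel_plain))
print("sensing dictionary:", sorted(sel_sensing), " true:", sorted(supp))
```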


IEEE Transactions on Information Theory | 2010

Dictionary Identification—Sparse Matrix-Factorization via ℓ1-Minimization

Rémi Gribonval; Karin Schnass

This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y_1 … y_N), y_n ∈ ℝ^d, of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x_1 … x_N), x_n ∈ ℝ^K, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) ℓ1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.
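The sketch below probes this ℓ1-criterion numerically under the stated model: coefficients for a candidate basis B are B⁻¹Y, and the generating basis should beat nearby column-normalized perturbations. Dimensions, sparsity level, and perturbation size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 16, 2000

# Generating basis and sparse Bernoulli-Gaussian coefficients.
B = np.linalg.qr(rng.standard_normal((d, d)))[0]   # orthonormal basis
X = rng.standard_normal((d, N)) * (rng.random((d, N)) < 0.3)
Y = B @ X

def l1_cost(basis):
    # l1 criterion: total |coefficients| when Y is represented in `basis`.
    return np.abs(np.linalg.solve(basis, Y)).sum()

# Local-minimum check: small perturbations with renormalized columns
# should only increase the cost (typically prints True five times).
base = l1_cost(B)
for _ in range(5):
    P = B + 0.05 * rng.standard_normal((d, d))
    P /= np.linalg.norm(P, axis=0)                 # keep columns unit norm
    print(l1_cost(P) > base)
```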


IEEE Signal Processing Letters | 2007

Average Performance Analysis for Thresholding

Karin Schnass; Pierre Vandergheynst

In this letter, we show that with high probability the thresholding algorithm can recover signals that are sparse in a redundant dictionary, as long as the 2-Babel function grows slowly. This implies that it can succeed for sparsity levels up to the order of the ambient dimension. The theoretical bounds are illustrated with numerical simulations. As an application of the theory, sensing dictionaries for optimal average performance are characterized, and their performance is tested numerically.
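For concreteness, here is one way to compute the 2-Babel function of a dictionary directly from its Gram matrix (a NumPy sketch; the random dictionary and sizes are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(3)
d, K = 32, 64
Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)

def babel2(Phi, kmax):
    # 2-Babel function: mu2(k) = max_i max_{|S|=k, i not in S}
    #   sqrt( sum_{j in S} <phi_i, phi_j>^2 ),
    # computed by sorting each squared Gram row and cumulating the
    # largest entries.
    G = np.abs(Phi.T @ Phi) ** 2
    np.fill_diagonal(G, 0.0)           # exclude the atom itself
    G.sort(axis=1)                     # ascending per row
    tails = np.cumsum(G[:, ::-1], axis=1)   # k largest entries summed
    return np.sqrt(tails.max(axis=0))[:kmax]

print(np.round(babel2(Phi, 8), 3))    # slow growth favours thresholding
```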


Applied and Computational Harmonic Analysis | 2014

On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD

Karin Schnass

This article gives theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied here is when a dictionary Φ ∈ ℝ^(d×K) can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of N training signals y_n = Φ x_n. A theoretical analysis of the problem leads to two types of identifiability results, assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution. First, asymptotic results show that in expectation the generating dictionary can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. Second, building on the asymptotic results, it is demonstrated that given a finite number of training samples N, with N / log N = O(K³d), except with probability O(N^(−Kd)) there is a local minimum of the K-SVD criterion within distance O(K N^(−1/4)) of the generating dictionary.
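To make the K-SVD criterion concrete, the sketch below runs one dictionary-update sweep of K-SVD (a rank-1 SVD per atom) on synthetic data, reusing the generating supports in place of a sparse-coding step; this idealized setup mirrors the analysis setting rather than the full practical algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
d, K, N, k = 16, 32, 500, 3

D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)
X = np.zeros((K, N))
for n in range(N):                     # k-sparse training coefficients
    s = rng.choice(K, k, replace=False)
    X[s, n] = rng.standard_normal(k)
Y = D @ X

# One K-SVD dictionary-update sweep (sparse-coding step omitted: the
# generating supports are reused, as in the idealized analysis).
for j in range(K):
    used = np.nonzero(X[j])[0]
    if used.size == 0:
        continue
    # Residual of the signals using atom j, with atom j's contribution removed.
    E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                  # best rank-1 fit updates the atom...
    X[j, used] = s[0] * Vt[0]          # ...and its coefficients

print("atoms still unit norm:", np.allclose(np.linalg.norm(D, axis=0), 1))
```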


International Conference on Acoustics, Speech, and Signal Processing | 2007

Average Case Analysis of Multichannel Thresholding

Rémi Gribonval; Boris Mailhé; Holger Rauhut; Karin Schnass; Pierre Vandergheynst

This paper introduces p-thresholding, an algorithm to compute simultaneous sparse approximations of multichannel signals over redundant dictionaries. We work out both worst case and average case recovery analyses of this algorithm and show that the latter results in much weaker conditions on the dictionary. Numerical simulations confirm our theoretical findings and show that p-thresholding is an interesting low complexity alternative to simultaneous greedy or convex relaxation algorithms for processing sparse multichannel signals with balanced coefficients.
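A compact sketch of the p-thresholding idea under the shared-support multichannel model assumed here: each atom is scored by the p-norm of its correlations across all channels, and the k best-scoring atoms are kept (all sizes below are arbitrary test values).

```python
import numpy as np

rng = np.random.default_rng(5)
d, K, L, k, p = 32, 64, 10, 4, 2       # L channels, sparsity k, p-norm

Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)
X = np.zeros((K, L))
supp = rng.choice(K, k, replace=False)
X[supp] = rng.standard_normal((k, L))  # channels share the same support
Y = Phi @ X

# p-thresholding: score each atom by the p-norm of its correlations
# across channels, then keep the k best-scoring atoms.
scores = np.linalg.norm(Phi.T @ Y, ord=p, axis=1)
est = np.argsort(-scores)[:k]
print("support recovered:", set(est) == set(supp))
```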


International Symposium on Communications, Control and Signal Processing | 2008

Some recovery conditions for basis learning by L1-minimization

Rémi Gribonval; Karin Schnass

Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex ℓ1-minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where ℓ1-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the ℓ1-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results.


International Conference on Acoustics, Speech, and Signal Processing | 2011

Compressed learning of high-dimensional sparse functions

Karin Schnass; Jan Vybíral

This paper presents a simple randomised algorithm for recovering high-dimensional sparse functions, i.e. functions f : [0, 1]^d → ℝ which depend effectively on only k out of d variables, meaning f(x_1, …, x_d) = g(x_{i_1}, …, x_{i_k}), where the indices 1 ≤ i_1 < i_2 < … < i_k ≤ d are unknown. It is shown that (under certain conditions on g) this algorithm recovers the k unknown coordinates with probability at least 1 − 6 exp(−L), using only O(k(L + log k)(L + log d)) samples of f.
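The sketch below recovers the active coordinates by brute-force random first differences on a made-up test function; it only illustrates the problem statement, since the paper's randomised algorithm achieves the quoted sample complexity with far fewer, more cleverly placed samples.

```python
import numpy as np

rng = np.random.default_rng(6)
d, k = 20, 3
active = rng.choice(d, k, replace=False)

def f(x):                              # hypothetical test function: depends
    return np.sin(x[active].sum())     # only on the k active coordinates

# Naive detection: resample one coordinate at a time and record how much
# the function value changes.  Inactive coordinates never move f.
trials = 5
influence = np.zeros(d)
for _ in range(trials):
    x = rng.random(d)
    fx = f(x)
    for i in range(d):
        xi = x.copy()
        xi[i] = rng.random()
        influence[i] += abs(f(xi) - fx)

est = np.argsort(-influence)[:k]
print("recovered:", sorted(est), " true:", sorted(active))
```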


International Conference on Acoustics, Speech, and Signal Processing | 2010

A union of incoherent spaces model for classification

Karin Schnass; Pierre Vandergheynst

We present a new and computationally efficient scheme for classifying signals into a fixed number of known classes. We model classes as subspaces in which the corresponding data is well represented by a dictionary of features. In order to ensure low misclassification, the subspaces should be incoherent, so that features of a given class cannot efficiently represent signals from another. We propose a simple iterative strategy to learn dictionaries which are at the same time good for approximating within a class and also discriminant. Preliminary tests on a standard face image database show competitive results.
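A minimal residual-based classifier in the spirit of this model: each class gets a small dictionary, and a signal is assigned to the class whose dictionary approximates it best (random subspaces stand in for learned, incoherent ones).

```python
import numpy as np

rng = np.random.default_rng(7)
d, atoms, n_classes = 32, 5, 3

# Hypothetical class dictionaries: each class is a low-dimensional
# subspace spanned by a few unit-norm feature atoms.
dicts = []
for _ in range(n_classes):
    D = rng.standard_normal((d, atoms))
    D /= np.linalg.norm(D, axis=0)
    dicts.append(D)

def classify(y):
    # Assign y to the class whose dictionary leaves the smallest residual
    # after a least-squares fit.
    res = [np.linalg.norm(y - D @ np.linalg.lstsq(D, y, rcond=None)[0])
           for D in dicts]
    return int(np.argmin(res))

c = 1
y = dicts[c] @ rng.standard_normal(atoms) + 0.01 * rng.standard_normal(d)
print("predicted:", classify(y), " true:", c)
```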


International Symposium on Communications, Control and Signal Processing | 2008


Karin Schnass; Pierre Vandergheynst

In this article we present a signal model for classification based on a low dimensional dictionary embedded into the high dimensional signal space. We develop an alternate projection algorithm to find the embedding and the dictionary, and finally test the classification performance of our scheme in comparison to Fisher's LDA.
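As a loose sketch of the alternating idea (not the paper's algorithm), the code below alternates between a sparse code for a fixed orthonormal embedding and a Procrustes update of the embedding, on synthetic data generated from the model; all dimensions and the sparsity level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)
d, m, N = 64, 8, 300                   # ambient dim, embedded dim, samples

# Synthetic data living on a low-dimensional model embedded in R^d.
P_true = np.linalg.qr(rng.standard_normal((d, m)))[0]
Z_true = rng.standard_normal((m, N)) * (rng.random((m, N)) < 0.3)
Y = P_true @ Z_true

# Alternating updates: sparse codes for fixed embedding, then an
# orthogonal-Procrustes update of the embedding for fixed codes.
P = np.linalg.qr(rng.standard_normal((d, m)))[0]
for _ in range(20):
    Z = P.T @ Y                                        # codes for fixed P
    Z[np.abs(Z) < np.quantile(np.abs(Z), 0.7)] = 0.0   # keep sparse codes
    U, _, Vt = np.linalg.svd(Y @ Z.T, full_matrices=False)
    P = U @ Vt                                         # Procrustes update

err = np.linalg.norm(Y - P @ (P.T @ Y)) / np.linalg.norm(Y)
print(f"relative embedding residual: {err:.3f}")
```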

Collaboration


Dive into Karin Schnass's collaborations.

Top Co-Authors

Pierre Vandergheynst
École Polytechnique Fédérale de Lausanne

Valeriya Naumova
Simula Research Laboratory

Rémi Gribonval
French Institute for Research in Computer Science and Automation

Jan Vybíral
Charles University in Prague

Pascal Frossard
École Polytechnique Fédérale de Lausanne