Sjoerd Dirksen
RWTH Aachen University
Publication
Featured research published by Sjoerd Dirksen.
Symposium on Theory of Computing | 2015
Jean Bourgain; Sjoerd Dirksen; Jelani Nelson
Let Φ ∈ R^{m×n} be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s. For a subset T of the unit sphere and ε ∈ (0, 1/2), we study settings for m, s that ensure E_Φ sup_{x∈T} |‖Φx‖_2^2 − 1| < ε, i.e., so that Φ preserves the norm of every x ∈ T simultaneously and multiplicatively up to 1+ε. We introduce a new complexity parameter, which depends on the geometry of T, and show that it suffices to choose s and m such that this parameter is small. Our result is a sparse analog of Gordon's theorem, which was concerned with a dense Φ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results on the use of the sparse Johnson-Lindenstrauss transform in randomized linear algebra, compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
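For readers who want to experiment, the following is a minimal NumPy sketch of a sparse Johnson-Lindenstrauss transform with column sparsity s (each column has s nonzero entries equal to ±1/√s in uniformly random rows), together with an empirical check of the distortion sup_{x∈T} |‖Φx‖_2^2 − 1| over a finite set T of unit vectors. The construction and the sizes below are illustrative only and are not the tuned parameter settings from the paper.

```python
import numpy as np

def sparse_jl(m, n, s, rng):
    """Sparse JL matrix: each column has s nonzeros equal to +-1/sqrt(s)."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct rows per column
        signs = rng.choice([-1.0, 1.0], size=s)
        Phi[rows, j] = signs / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8                  # illustrative sizes, not the paper's bounds
# T: a finite set of unit vectors (here: random 10-sparse directions)
T = []
for _ in range(500):
    x = np.zeros(n)
    support = rng.choice(n, size=10, replace=False)
    x[support] = rng.standard_normal(10)
    T.append(x / np.linalg.norm(x))
T = np.array(T)

Phi = sparse_jl(m, n, s, rng)
distortion = np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0)
print("empirical sup_{x in T} | ||Phi x||_2^2 - 1 | =", distortion.max())
```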
Foundations of Computational Mathematics | 2016
Sjoerd Dirksen
We present a theory for Euclidean dimensionality reduction with subgaussian matrices which unifies several restricted isometry property and Johnson–Lindenstrauss-type results obtained earlier for specific datasets. In particular, we recover and, in several cases, improve results for sets of sparse and structured sparse vectors, low-rank matrices and tensors, and smooth manifolds. In addition, we establish a new Johnson–Lindenstrauss embedding for datasets taking the form of an infinite union of subspaces of a Hilbert space.
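For reference, and stated here in standard textbook form rather than as the paper's own formulation, the two kinds of guarantees that such a unified theory covers are a Johnson-Lindenstrauss-type embedding of a given set and the restricted isometry property for sparse vectors:

```latex
% Johnson--Lindenstrauss-type embedding of a set T \subset \mathbb{R}^n:
% A \in \mathbb{R}^{m \times n} satisfies, for all x \in T,
(1-\varepsilon)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\varepsilon)\,\|x\|_2^2 .

% Restricted isometry property of order s with constant \delta_s:
% the same two-sided bound holds (with \varepsilon replaced by \delta_s)
% for every s-sparse x, i.e. every x with at most s nonzero entries.
```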
Bulletin of The London Mathematical Society | 2013
Sjoerd Dirksen; Éric Ricard
Normalized free semi-circular random variables satisfy an upper Khintchine inequality in L_∞. We show that this implies the corresponding upper Khintchine inequality in any noncommutative Banach function space. As applications, we obtain a very simple proof of a well-known interpolation result for row and column operator spaces and, moreover, answer an open question on noncommutative moment inequalities concerning a paper by Bekjan and Chen.
Transactions of the American Mathematical Society | 2014
Sjoerd Dirksen
Discrete and Computational Geometry | 2018
Sjoerd Dirksen; Alexander Stollenwerk
Journal of Complexity | 2018
Sjoerd Dirksen; Tino Ullrich
We present a new, elementary proof of Boyd's interpolation theorem. Our approach naturally yields a noncommutative version of this result and even allows for the interpolation of certain operators on ℓ^1-valued noncommutative symmetric spaces. By duality we may interpolate several well-known noncommutative maximal inequalities. In particular, we obtain a version of Doob's maximal inequality and the dual Doob inequality for noncommutative symmetric spaces. We apply our results to prove the Burkholder-Davis-Gundy and Burkholder-Rosenthal inequalities for noncommutative martingales in these spaces.
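For orientation, the classical (commutative) counterparts of two of the inequalities mentioned above read as follows; the paper establishes versions of these for noncommutative martingales in symmetric spaces. The statements below are the standard scalar-valued forms, not quotations from the paper.

```latex
% Doob's maximal inequality: for a martingale (M_n) and 1 < p < \infty,
\Big\| \sup_{n} |M_n| \Big\|_p \;\le\; \frac{p}{p-1}\, \sup_{n} \| M_n \|_p .

% Burkholder--Davis--Gundy inequality: for 1 \le p < \infty there are constants
% c_p, C_p such that, with S(M) = \big( \sum_n |M_n - M_{n-1}|^2 \big)^{1/2},
c_p\, \| S(M) \|_p \;\le\; \Big\| \sup_{n} |M_n| \Big\|_p \;\le\; C_p\, \| S(M) \|_p .
```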
International Conference on Sampling Theory and Applications | 2017
Sjoerd Dirksen; Alexander Stollenwerk
We consider the problem of encoding a finite set of vectors into a small number of bits while approximately retaining information on the angular distances between the vectors. By deriving improved variance bounds related to binary Gaussian circulant embeddings, we largely fix a gap in the proof of the best known fast binary embedding method. Our bounds also show that well-spreadness assumptions on the data vectors, which were needed in earlier work on variance bounds, are unnecessary. In addition, we propose a new binary embedding with a faster running time on sparse data.
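To make the object of study concrete, here is a small NumPy sketch of a generic binary circulant embedding: each vector is multiplied by random signs, convolved with a fixed Gaussian vector (a circulant matrix applied via the FFT), and only the signs of the first m coordinates are kept; the normalized Hamming distance between two such bit strings then approximates the angular distance between the original vectors divided by π. This illustrates the general mechanism only; it is not the exact construction or parameter regime analyzed in the paper.

```python
import numpy as np

def binary_circulant_embed(X, m, g, d_signs):
    """Map rows of X to m-bit sign patterns via a Gaussian circulant matrix.

    X       : (N, n) data matrix (rows are the vectors to embed)
    m       : number of bits kept (m <= n)
    g       : (n,) Gaussian vector generating the circulant matrix
    d_signs : (n,) i.i.d. +-1 signs (random diagonal preconditioner)
    """
    Xd = X * d_signs                       # apply the diagonal sign matrix
    # circulant(g) @ x equals the circular convolution of g and x,
    # computed here via the FFT
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(Xd, axis=1), axis=1).real
    return np.sign(conv[:, :m])            # keep the signs of the first m entries

rng = np.random.default_rng(1)
n, m, N = 256, 128, 3                       # illustrative sizes
g = rng.standard_normal(n)
d_signs = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((N, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

B = binary_circulant_embed(X, m, g, d_signs)
hamming = np.mean(B[0] != B[1])             # fraction of differing bits
angle = np.arccos(np.clip(X[0] @ X[1], -1.0, 1.0)) / np.pi
print(f"normalized Hamming distance: {hamming:.3f}, angle/pi: {angle:.3f}")
```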
International Conference on Sampling Theory and Applications | 2017
Sjoerd Dirksen; Tino Ullrich
We consider the problem of determining the asymptotic order of the Gelfand numbers of mixed-(quasi-)norm embeddings ℓ_p^b(ℓ_q^d) ↪ ℓ_r^b(ℓ_u^d) given that p ≤ r and q ≤ u, with emphasis on cases with p ≤ 1 and/or q ≤ 1. These cases turn out to be related to structured sparsity. We obtain sharp bounds in a number of interesting parameter constellations. Our new matching bounds for the Gelfand numbers of the embeddings of ℓ_1^b(ℓ_2^d) and ℓ_2^b(ℓ_1^d) into ℓ_2^b(ℓ_2^d) imply optimality assertions for the recovery of block-sparse and sparse-in-levels vectors, respectively. In addition, we apply our sharp estimates for ℓ_p^b(ℓ_q^d)-spaces to obtain new two-sided estimates for the Gelfand numbers of multivariate Besov space embeddings in regimes of small mixed smoothness. It turns out that in some particular cases these estimates show the same asymptotic behavior as in the univariate situation. In the remaining cases they differ at most by a log log factor from the univariate bound.
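As a reminder (a standard definition, stated here in one common indexing convention rather than as quoted from the paper), the m-th Gelfand number of an operator T between (quasi-)normed spaces X and Y measures the worst-case norm of T on the best possible subspace of codimension less than m; in compressed sensing it governs the minimal worst-case error of recovery from m linear measurements.

```latex
% m-th Gelfand number of T : X \to Y (one common convention):
c_m(T : X \to Y)
  \;=\; \inf_{\substack{M \subseteq X \\ \operatorname{codim} M < m}}
        \;\sup_{\substack{x \in M \\ \|x\|_X \le 1}} \|Tx\|_Y .
```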
Journal of the American Chemical Society | 2006
Anouk Dirksen; Sjoerd Dirksen; Tilman M. Hackeng; Philip E. Dawson
Electronic Journal of Probability | 2015
Sjoerd Dirksen