
Publication


Featured research published by Alexander Jung.


IEEE Transactions on Information Theory | 2013

Compressive Spectral Estimation for Nonstationary Random Processes

Alexander Jung; Georg Tauböck; Franz Hlawatsch

Estimating the spectral characteristics of a nonstationary random process is an important but challenging task, which can be facilitated by exploiting structural properties of the process. In certain applications, the observed processes are underspread, i.e., their time and frequency correlations exhibit a reasonably fast decay, and approximately time-frequency sparse, i.e., a reasonably large percentage of the spectral values are small. For this class of processes, we propose a compressive estimator of the discrete Rihaczek spectrum (RS). This estimator combines a minimum variance unbiased estimator of the RS (which is a smoothed Rihaczek distribution using an appropriately designed smoothing kernel) with a compressed sensing technique that exploits the approximate time-frequency sparsity. As a result of the compression stage, the number of measurements required for good estimation performance can be significantly reduced. The measurements are values of the ambiguity function of the observed signal at randomly chosen time and frequency lag positions. We provide bounds on the mean-square estimation error of both the minimum variance unbiased RS estimator and the compressive RS estimator, and we demonstrate the performance of the compressive estimator by means of simulation results. The proposed compressive RS estimator can also be used for estimating other time-dependent spectra (e.g., the Wigner-Ville spectrum), since for an underspread process most spectra are almost equal.
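
The measurement step described above can be illustrated in a few lines. The following is a minimal numpy sketch (not code from the paper): it evaluates the discrete ambiguity function of an observed signal and samples it at randomly chosen time-lag/frequency-lag positions. The smoothing kernel of the minimum variance unbiased estimator and the sparse-recovery stage are omitted, and the signal length and number of measurements are illustrative.

```python
import numpy as np

def ambiguity_function(x: np.ndarray) -> np.ndarray:
    """Discrete ambiguity function A[m, k] = sum_n x[n] conj(x[n - m]) e^{-j 2 pi k n / N},
    with the time index taken modulo N (a cyclic convention chosen here for simplicity)."""
    N = len(x)
    A = np.empty((N, N), dtype=complex)
    for m in range(N):                      # time-lag index
        r = x * np.conj(np.roll(x, m))      # x[n] * conj(x[n - m])
        A[m, :] = np.fft.fft(r)             # DFT over n gives all frequency lags k
    return A

rng = np.random.default_rng(0)
N = 128
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # observed signal (placeholder)

A = ambiguity_function(x)

# Compressive measurements: M << N^2 randomly chosen (time-lag, frequency-lag) positions.
M = 200
lags = rng.integers(0, N, size=(M, 2))
measurements = A[lags[:, 0], lags[:, 1]]
print(measurements.shape)   # (200,)
```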


IEEE Transactions on Information Theory | 2011

Unbiased Estimation of a Sparse Vector in White Gaussian Noise

Alexander Jung; Zvika Ben-Haim; Franz Hlawatsch; Yonina C. Eldar

The problem studied in this paper is unbiased estimation of a sparse nonrandom vector corrupted by additive white Gaussian noise. It is shown that while there are infinitely many unbiased estimators for this problem, none of them has uniformly minimum variance. Therefore, the focus is placed on locally minimum variance unbiased (LMVU) estimators. Simple closed-form lower and upper bounds on the variance of LMVU estimators or, equivalently, on the Barankin bound (BB) are derived. These bounds allow an estimation of the threshold region separating the low-signal-to-noise ratio (SNR) and high-SNR regimes, and they indicate the asymptotic behavior of the BB at high SNR. In addition, numerical lower and upper bounds are derived; these are tighter than the closed-form bounds and thus characterize the BB more accurately. Numerical studies compare the proposed characterizations of the BB with established biased estimation schemes, and demonstrate that while unbiased estimators perform poorly at low SNR, they may perform better than biased estimators at high SNR. An interesting conclusion of this analysis is that the high-SNR behavior of the BB depends solely on the value of the smallest nonzero entry of the sparse vector, and that this type of dependence is also exhibited by the performance of certain practical estimators.
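
The final observation, that the high-SNR behavior depends on the smallest nonzero entry and that practical estimators show the same dependence, can be sketched with a small Monte Carlo experiment. The setup below (dimensions, sparsity, and a universal hard threshold) is purely illustrative and does not reproduce the paper's Barankin-bound expressions; it only shows how the error of a simple threshold estimator varies with the smallest nonzero entry.

```python
import numpy as np

rng = np.random.default_rng(1)
N, S, sigma, trials = 100, 5, 1.0, 2000
thr = sigma * np.sqrt(2 * np.log(N))        # universal hard threshold (illustrative choice)

def mse_hard_threshold(xi_min: float) -> float:
    """Empirical MSE of hard thresholding for y = x + n, where x has S nonzero
    entries and its smallest nonzero entry equals xi_min."""
    total = 0.0
    for _ in range(trials):
        x = np.zeros(N)
        support = rng.choice(N, S, replace=False)
        x[support] = 10.0 * sigma            # "large" nonzero entries
        x[support[0]] = xi_min               # smallest nonzero entry
        y = x + sigma * rng.standard_normal(N)
        x_hat = np.where(np.abs(y) > thr, y, 0.0)   # hard thresholding
        total += np.sum((x_hat - x) ** 2)
    return total / trials

for xi in (1.0, 2.0, 4.0, 8.0):
    print(f"smallest nonzero entry {xi:4.1f}: MSE {mse_hard_threshold(xi):6.2f}")
```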


IEEE Signal Processing Letters | 2015

Graphical LASSO based Model Selection for Time Series

Alexander Jung; Gabor Hannak; Norbert Goertz

We propose a novel graphical model selection scheme for high-dimensional stationary time series or discrete time processes. The method is based on a natural generalization of the graphical LASSO algorithm, introduced originally for the case of i.i.d. samples, and estimates the conditional independence graph of a time series from a finite length observation. The graphical LASSO for time series is defined as the solution of an l1-regularized maximum (approximate) likelihood problem. We solve this optimization problem using the alternating direction method of multipliers. Our approach is nonparametric as we do not assume a finite dimensional parametric model, but only require the process to be sufficiently smooth in the spectral domain. For Gaussian processes, we characterize the performance of our method theoretically by deriving an upper bound on the probability that our algorithm fails. Numerical experiments demonstrate the ability of our method to recover the correct conditional independence graph from a limited amount of samples.
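
For orientation, the sketch below shows the standard (i.i.d.-sample) graphical LASSO step using scikit-learn: it estimates a sparse precision matrix whose zero pattern encodes the conditional independence graph. The paper's estimator generalizes this step to time series by operating on smoothed spectral estimates rather than a sample covariance matrix; the graph, sample size, and regularization parameter below are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision (inverse covariance) matrix: a simple chain graph.
p = 6
Theta = np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4
Sigma = np.linalg.inv(Theta)

# i.i.d. samples; the paper replaces this step by smoothed spectral estimates of a time series.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)            # l1-regularized (approximate) ML estimate
est_edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(p, dtype=bool)
true_edges = (Theta != 0) & ~np.eye(p, dtype=bool)
print("true edges:\n", true_edges.astype(int))
print("estimated edges:\n", est_edges.astype(int))
```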


International Conference on Communications | 2015

Joint channel estimation and activity detection for multiuser communication systems

Gabor Hannak; Martin Mayer; Alexander Jung; Gerald Matz; Norbert Goertz

We consider overloaded (non-orthogonal) code division multiple access multiuser wireless communication systems with many transmitting users and one central aggregation node, a typical scenario in, e.g., machine-to-machine communications. The task of the central node is to detect the set of active devices and separate their data streams; the number of active devices at any time instant is relatively small compared to the total number of devices in the system. We introduce a novel two-step detection procedure: the first step involves the simultaneous transmission of a pilot sequence used for identification of the active devices and the estimation of their respective channel coefficients. In the second step the payload is transmitted by all active devices and received synchronously at the central node. The first step reduces to a compressed sensing (CS) problem due to the relatively small number of simultaneously active devices. Using an efficient CS recovery scheme (approximate message passing), joint activity detection and channel estimation with high reliability is possible, even for extremely large-scale systems. This, in turn, reduces the data detection task to a simple overdetermined system of linear equations that is then solved by classical methods in the second step.
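
A minimal sketch of the first (pilot) step under simplifying assumptions: a real-valued single-antenna model, i.i.d. Gaussian pilots, and a basic soft-thresholding AMP iteration rather than the paper's specific AMP implementation. All sizes and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_pilots, n_active = 256, 100, 8     # illustrative sizes, not from the paper
sigma = 0.01

# Sparse "channel" vector: only the active users have nonzero coefficients.
h = np.zeros(n_users)
active = rng.choice(n_users, n_active, replace=False)
h[active] = rng.standard_normal(n_active)

# Pilot matrix with i.i.d. Gaussian entries (unit-norm columns in expectation).
A = rng.standard_normal((n_pilots, n_users)) / np.sqrt(n_pilots)
y = A @ h + sigma * rng.standard_normal(n_pilots)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Basic AMP iteration with soft thresholding (a simple variant; the paper's AMP
# recovery scheme and its denoiser may differ).
x = np.zeros(n_users)
z = y.copy()
for _ in range(50):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(n_pilots)      # heuristic threshold
    x_new = soft(A.T @ z + x, tau)
    onsager = (np.count_nonzero(x_new) / n_pilots) * z     # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new

detected = np.flatnonzero(np.abs(x) > 1e-3)   # estimated set of active users
print("true active users:    ", np.sort(active))
print("detected active users:", detected)
```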


International Workshop on Signal Processing Advances in Wireless Communications | 2016

Scalable graph signal recovery for big data over networks

Alexander Jung; Peter Berger; Gabor Hannak; Gerald Matz

We formulate the recovery of a graph signal from noisy samples taken on a subset of graph nodes as a convex optimization problem that balances the empirical error for explaining the observed values and a complexity term quantifying the smoothness of the graph signal. To solve this optimization problem, we propose to combine the alternating direction method of multipliers with a novel denoising method that minimizes total variation. Our algorithm can be efficiently implemented in a distributed manner using message passing and thus is attractive for big data problems over networks.
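
The sketch below illustrates the recovery idea under a deliberate simplification: the total-variation term (and the ADMM/message-passing solver of the paper) is replaced by a quadratic Laplacian smoothness term, which admits a closed-form solution. Graph, sampled node set, and regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small example graph: a cycle on n nodes (illustrative, not from the paper).
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian

# Smooth ground-truth graph signal and noisy samples on a node subset.
x_true = np.sin(2 * np.pi * np.arange(n) / n)
sampled = rng.choice(n, size=8, replace=False)
y = x_true[sampled] + 0.1 * rng.standard_normal(len(sampled))

# Recovery: minimize ||y - x[sampled]||^2 + lam * x^T L x.
# NOTE: this quadratic smoothness term is a stand-in for the paper's total-variation
# term, chosen because it yields a closed-form solution; the paper instead combines
# ADMM with a TV-minimizing denoiser in a distributed message-passing implementation.
S = np.zeros((n, n))
S[sampled, sampled] = 1.0                            # sampling (masking) operator
b = np.zeros(n)
b[sampled] = y
lam = 0.5
x_hat = np.linalg.solve(S + lam * L, b)

print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```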


International Conference on Acoustics, Speech, and Signal Processing | 2014

Compressive nonparametric graphical model selection for time series

Alexander Jung; Reinhard Heckel; Helmut Bölcskei; Franz Hlawatsch

We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations. Our approach does not rely on a parametric model (such as, e.g., an autoregressive model) for the vector random process; rather, it only assumes certain spectral smoothness properties. The proposed inference scheme is compressive in that it works for sample sizes that are (much) smaller than the number of scalar process components. We provide analytical conditions for our method to correctly identify the CIG with high probability.
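
The quantity underlying such nonparametric CIG inference is the inverse spectral density matrix: component pairs whose inverse-spectrum entries vanish at all frequencies are conditionally independent. The sketch below only illustrates this quantity with a Welch-type spectral estimate on a toy three-component process; it is not the paper's compressive estimator, and the process and window length are illustrative.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)

# Three scalar components: x2 is driven by (a delayed copy of) x1, x3 is independent.
T = 4096
e = rng.standard_normal((3, T))
x = np.empty((3, T))
x[0] = e[0]
x[1] = 0.8 * np.roll(e[0], 1) + e[1]
x[2] = e[2]

# Nonparametric (Welch-type) estimate of the 3x3 spectral density matrix on a frequency grid.
nper = 256
S = np.zeros((3, 3, nper // 2 + 1), dtype=complex)
for i in range(3):
    for j in range(3):
        _, S[i, j] = csd(x[i], x[j], nperseg=nper)

# Average magnitude of the inverse spectral density over frequency: (near-)zero
# off-diagonal entries indicate conditional independence between components.
strength = np.zeros((3, 3))
for k in range(S.shape[2]):
    strength += np.abs(np.linalg.inv(S[:, :, k]))
print(np.round(strength / S.shape[2], 2))
```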


IEEE Transactions on Information Theory | 2016

On the Minimax Risk of Dictionary Learning

Alexander Jung; Yonina C. Eldar; Norbert Görtz

We consider the problem of learning a dictionary matrix from a number of observed signals, which are assumed to be generated via a linear model with a common underlying dictionary. In particular, we derive lower bounds on the minimum achievable worst case mean squared error (MSE), regardless of the computational complexity of the dictionary learning (DL) schemes. By casting DL as a classical (or frequentist) estimation problem, the lower bounds on the worst case MSE are derived following an established information-theoretic approach to minimax estimation. The main contribution of this paper is the adaptation of these information-theoretic tools to the DL problem in order to derive lower bounds on the worst case MSE of any DL algorithm. We derive three different lower bounds applying to different generative models for the observed signals. The first bound only requires the existence of a covariance matrix of the (unknown) underlying coefficient vector. By specializing this bound to the case of sparse coefficient distributions and assuming the true dictionary satisfies the restricted isometry property, we obtain a second lower bound on the worst case MSE of DL methods in terms of the signal-to-noise ratio (SNR). The third bound applies to a more restrictive subclass of coefficient distributions by requiring the non-zero coefficients to be Gaussian. Although the applicability of this bound is the most limited, it is the tightest of the three bounds in the low SNR regime. A particular use of our lower bounds is the derivation of necessary conditions on the required number of observations (sample size), such that DL is feasible, i.e., accurate DL schemes might exist. By comparing these necessary conditions with sufficient conditions on the sample size such that a particular DL technique is successful, we are able to characterize the regimes where these algorithms are optimal in terms of required sample size.
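
To make the estimation problem that these bounds address concrete, the sketch below generates observations from the linear generative model (a fixed unit-norm dictionary, sparse coefficient vectors, additive noise) and fits a standard dictionary-learning algorithm from scikit-learn. The sizes, noise level, and the particular DL algorithm are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
dim, n_atoms, sparsity, n_obs = 16, 24, 3, 400      # illustrative sizes
sigma = 0.05                                        # noise level (sets the SNR)

# Unknown dictionary with unit-norm columns.
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Observations y_k = D x_k + n_k with sparse coefficient vectors x_k.
Y = np.zeros((n_obs, dim))
for k in range(n_obs):
    x = np.zeros(n_atoms)
    x[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
    Y[k] = D @ x + sigma * rng.standard_normal(dim)

# One particular DL algorithm; the paper's lower bounds apply to any such scheme.
dl = DictionaryLearning(n_components=n_atoms, alpha=0.1, max_iter=50,
                        transform_algorithm='omp',
                        transform_n_nonzero_coefs=sparsity,
                        random_state=0).fit(Y)
D_hat = dl.components_.T                            # learned dictionary, columns are atoms

# Crude error proxy: worst (over true atoms) cosine match to any learned atom;
# dictionaries are identifiable only up to permutation and sign of the atoms.
corr = np.abs(D_hat.T @ D)
print("worst atom recovery (|cos|):", corr.max(axis=0).min().round(3))
```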


IEEE Transactions on Signal Processing | 2015

Learning the Conditional Independence Structure of Stationary Time Series: A Multitask Learning Approach

Alexander Jung

We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional Gaussian vector time series (discrete-time process) from a finite-length observation. By contrast to existing approaches, we do not rely on a parametric process model (such as, e.g., an autoregressive model) for the observed random process. Instead, we only require certain smoothness properties (in the Fourier domain) of the process. The proposed inference scheme works even for sample sizes much smaller than the number of scalar process components if the true underlying CIG is sufficiently sparse. A theoretical performance analysis provides sufficient conditions on the sample size such that the new method is consistent asymptotically. Some numerical experiments validate our theoretical performance analysis and demonstrate superior performance of our scheme compared to an existing (parametric) approach in case of model mismatch.


International Conference on Acoustics, Speech, and Signal Processing | 2010

On unbiased estimation of sparse vectors corrupted by Gaussian noise

Alexander Jung; Zvika Ben-Haim; Franz Hlawatsch; Yonina C. Eldar

We consider the estimation of a sparse parameter vector from measurements corrupted by white Gaussian noise. Our focus is on unbiased estimation as a setting under which the difficulty of the problem can be quantified analytically. We show that there are infinitely many unbiased estimators but none of them has uniformly minimum mean-squared error. We then provide lower and upper bounds on the Barankin bound, which describes the performance achievable by unbiased estimators. These bounds are used to predict the threshold region of practical estimators.


Asilomar Conference on Signals, Systems and Computers | 2010

A lower bound on the estimator variance for the sparse linear model

Sebastian Schmutzhard; Alexander Jung; Franz Hlawatsch; Zvika Ben-Haim; Yonina C. Eldar

We study the performance of estimators of a sparse nonrandom vector based on an observation which is linearly transformed and corrupted by white Gaussian noise. Using the framework of reproducing kernel Hilbert spaces, we derive a new lower bound on the estimator variance for a given differentiable bias function (including the unbiased case) and an almost arbitrary transformation matrix (including the underdetermined case considered in compressed sensing theory). For the special case of a sparse vector corrupted by white Gaussian noise—i.e., without a linear transformation—and unbiased estimation, our lower bound improves on a previously proposed bound.

Collaboration


Dive into Alexander Jung's collaborations.

Top Co-Authors

Franz Hlawatsch, Vienna University of Technology

Yonina C. Eldar, Technion – Israel Institute of Technology

Norbert Goertz, Vienna University of Technology

Gabor Hannak, Vienna University of Technology

Zvika Ben-Haim, Technion – Israel Institute of Technology