Georg Kail
Vienna University of Technology
Publications
Featured research published by Georg Kail.
IEEE Transactions on Signal Processing | 2012
Georg Kail; Jean-Yves Tourneret; Franz Hlawatsch; Nicolas Dobigeon
For blind deconvolution of an unknown sparse sequence convolved with an unknown pulse, a powerful Bayesian method employs the Gibbs sampler in combination with a Bernoulli-Gaussian prior modeling sparsity. In this paper, we extend this method by introducing a minimum distance constraint for the pulses in the sequence. This is physically relevant in applications including layer detection, medical imaging, seismology, and multipath parameter estimation. We propose a Bayesian method for blind deconvolution that is based on a modified Bernoulli-Gaussian prior including a minimum distance constraint factor. The core of our method is a partially collapsed Gibbs sampler (PCGS) that tolerates and even exploits the strong local dependencies introduced by the minimum distance constraint. Simulation results demonstrate significant performance gains compared to a recently proposed PCGS. The main advantages of the minimum distance constraint are a substantial reduction of computational complexity and of the number of spurious components in the deconvolution result.
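The core ingredient, a Bernoulli-Gaussian prior multiplied by a minimum-distance constraint factor, can be illustrated with a minimal sketch. This is a hypothetical toy (helper names and the scalar amplitude variance are my assumptions), not the authors' implementation: the constraint factor is 1 for admissible configurations and 0 whenever two active pulses are closer than the minimum distance, which drives the log-prior to negative infinity there.

```python
import numpy as np

def violates_min_distance(q, d_min):
    """True if any two active entries of the binary sequence q are closer than d_min."""
    idx = np.flatnonzero(q)
    return bool(np.any(np.diff(idx) < d_min)) if idx.size > 1 else False

def bg_log_prior(q, x, p, sigma2, d_min):
    """Log of a Bernoulli-Gaussian prior with a minimum-distance constraint factor.

    q      : binary indicator sequence (pulse present / absent)
    x      : amplitude sequence, same length as q
    p      : prior activation probability of each entry
    sigma2 : prior variance of active amplitudes
    Returns -inf for configurations violating the constraint (constraint
    factor 0), and the usual Bernoulli-Gaussian log-density otherwise.
    """
    if violates_min_distance(q, d_min):
        return -np.inf
    k = int(q.sum())
    log_bern = k * np.log(p) + (q.size - k) * np.log(1.0 - p)
    active = x[q.astype(bool)]
    log_gauss = -0.5 * np.sum(active**2) / sigma2 - 0.5 * k * np.log(2 * np.pi * sigma2)
    return log_bern + log_gauss
```

A sampler built on this prior never visits constrained-out configurations, which is what reduces spurious components in the deconvolution result.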
international conference on acoustics, speech, and signal processing | 2011
Chao Lin; Georg Kail; Jean-Yves Tourneret; Corinne Mailhes; Franz Hlawatsch
The delineation of P and T waves is important for the interpretation of ECG signals. We propose a Bayesian detection-estimation algorithm for simultaneous detection, delineation, and estimation of P and T waves. A block Gibbs sampler exploits the strong local dependencies in ECG signals by imposing block constraints on the P and T wave locations. The proposed algorithm is evaluated on the annotated QT database and compared with two classical algorithms.
international conference on acoustics, speech, and signal processing | 2009
Georg Kail; Clemens Novak; Bernd Hofer; Franz Hlawatsch
We consider the parametric analysis of frequency-domain optical coherence tomography (OCT) signals. A Monte Carlo (Gibbs sampler) detection-estimation method for determining the depths and reflection coefficients of tissue interfaces (reflective sites in the tissue) is proposed. Our method is blind since it estimates the instrumentation-dependent “fringe” function along with the tissue parameters. Sparsity of the detected interfaces is enforced by an impulse detector and a modified Bernoulli-Gaussian prior with a minimum distance constraint. Numerical results using synthetic and real signals demonstrate the excellent performance and fast convergence of our method.
Signal Processing | 2014
Chao Lin; Georg Kail; Audrey Giremus; Corinne Mailhes; Jean-Yves Tourneret; Franz Hlawatsch
For ECG interpretation, the detection and delineation of P and T waves are challenging tasks. This paper proposes sequential Bayesian methods for simultaneous detection, threshold-free delineation, and waveform estimation of P and T waves on a beat-to-beat basis. In contrast to state-of-the-art methods that process multiple-beat signal blocks, the proposed Bayesian methods account for beat-to-beat waveform variations by sequentially estimating the waveforms for each beat. Our methods are based on Bayesian signal models that take into account previous beats as prior information. To estimate the unknown parameters of these Bayesian models, we first propose a block Gibbs sampler that exhibits fast convergence in spite of the strong local dependencies in the ECG signal. Then, in order to take into account all the information contained in the past rather than considering only one previous beat, a sequential Monte Carlo method is presented, with a marginalized particle filter that efficiently estimates the unknown parameters of the dynamic model. Both methods are evaluated on the annotated QT database and observed to achieve significant improvements in detection rate and delineation accuracy compared to state-of-the-art methods, thus providing promising approaches for sequential P and T wave analysis.
international conference on acoustics, speech, and signal processing | 2011
Georg Kail; Klaus Witrisal; Franz Hlawatsch
We propose a Monte Carlo method for determining the parameters of multipath components (MPCs) for ultra-wideband channels. A partially collapsed Gibbs sampler is used for jointly estimating the number, times-of-arrival, angles-of-arrival, and amplitudes of the MPCs as well as the sounding pulse from signals received by a 2D antenna array. Our system model accounts for propagation delays between the receive antennas. Temporal-angular sparsity of the detected MPCs is ensured by a 2D minimum distance constraint. Numerical results for synthetic and real signals demonstrate the excellent performance and fast convergence of our method.
international conference on acoustics, speech, and signal processing | 2010
Georg Kail; Jean-Yves Tourneret; Franz Hlawatsch; Nicolas Dobigeon
We consider Bayesian detection/classification of discrete random parameters that are strongly dependent locally due to some deterministic local constraint. Based on the recently introduced partially collapsed Gibbs sampler (PCGS) principle, we develop a Markov chain Monte Carlo method that tolerates and even exploits the challenging probabilistic structure imposed by deterministic local constraints. We study the application of our method to the practically relevant case of nonuniformly spaced binary pulses with a known minimum distance. Simulation results demonstrate significant performance gains of our method compared to a recently proposed PCGS that is not specifically designed for the local constraint.
2009 IEEE/SP 15th Workshop on Statistical Signal Processing | 2009
Georg Kail; Franz Hlawatsch; Clemens Novak
We propose a Bayesian method for detecting multiple events in signals under the practically relevant assumption that successive events may not be arbitrarily close and distant events are effectively independent. Our detector has low complexity since it involves only the (Monte Carlo approximation to the) one-dimensional marginal posteriors. However, its performance is good since the metric it minimizes depends on the entire event sequence. We also describe an efficient sequential implementation of our detector that is based on a tree representation and a recursive metric computation.
IEEE Journal of Selected Topics in Signal Processing | 2016
Georg Kail; Sundeep Prabhakar Chepuri; Geert Leus
The tasks of online data reduction and outlier rejection are both of high interest when large amounts of data are to be processed for inference. Rather than performing these tasks separately, we propose a joint approach, i.e., robust censoring. We formulate the problem as a non-convex optimization problem based on the data model for outlier-free data, without requiring prior model assumptions about the outlier perturbations. Moreover, our approach is general in that it is not restricted to any specific data model and does not rely on linearity, uncorrelated measurements, or additive Gaussian noise. For a given desired compression rate, the reduced dataset is chosen optimally in the sense that the likelihood is maximized jointly over the dataset selection and the inferred model parameters. An extension of the problem formulation allows for taking the average estimation performance into account in a hybrid optimality criterion. To solve the problem of robust censoring, we propose a Metropolis-Hastings sampler method that operates on small subsets of the data, thus limiting the computational complexity. As a practical example, the problem is specialized to the application of robust censoring for target localization. Simulation results confirm the superiority of the proposed method compared to other approaches.
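The subset-based Metropolis-Hastings idea can be sketched on a toy problem. The following is not the paper's algorithm (which is model-agnostic); it assumes a scalar Gaussian location model with a known subset size, and the function name and temperature parameter are illustrative inventions. The sampler keeps m of the n samples, proposes swapping one kept sample for one censored sample, and accepts moves that improve the likelihood maximized over the unknown mean; outliers end up censored.

```python
import numpy as np

def robust_censor_mh(y, m, n_iter=2000, temp=0.05, seed=0):
    """Toy Metropolis-Hastings subset sampler for robust censoring.

    Selects m of the n samples in y so that the Gaussian likelihood,
    jointly maximized over the unknown mean, is as large as possible.
    Samples left out of the subset are treated as outliers.
    """
    rng = np.random.default_rng(seed)
    n = y.size
    subset = rng.choice(n, size=m, replace=False)

    def neg_loglik(idx):
        r = y[idx] - y[idx].mean()          # residuals at the ML mean of the subset
        return float(np.sum(r**2))

    cost = neg_loglik(subset)
    for _ in range(n_iter):
        out = rng.integers(m)                            # slot to drop from subset
        candidates = np.setdiff1d(np.arange(n), subset)  # currently censored samples
        inc = rng.choice(candidates)                     # sample to swap in
        proposal = subset.copy()
        proposal[out] = inc
        new_cost = neg_loglik(proposal)
        if np.log(rng.random()) < (cost - new_cost) / temp:
            subset, cost = proposal, new_cost
    return np.sort(subset)
```

Each iteration touches only one candidate swap, which mirrors the paper's motivation of operating on small subsets of the data to limit computational complexity.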
international workshop on signal processing advances in wireless communications | 2015
Georg Kail; Sundeep Prabhakar Chepuri; Geert Leus
Existing methods for smart data reduction are typically sensitive to outlier data that do not follow postulated data models. We propose robust censoring as a joint approach unifying the concepts of robust learning and data censoring. We focus on linear inverse problems and formulate robust censoring through a sparse sensing operator, which is a non-convex bilinear problem. We propose two solvers, one using alternating descent and the other using Metropolis-Hastings sampling. Although the latter is based on the concept of Bayesian sampling, we avoid confining the outliers to a specific model. Numerical results show that the proposed Metropolis-Hastings sampler outperforms state-of-the-art robust estimators.
information theory and applications | 2015
Georg Kail; Geert Leus
Compressive covariance sampling (CCS) methods, which estimate the correlation function from compressive measurements, have recently achieved high compression rates. In stationary autoregressive (AR) processes, the power spectrum is fully determined by the AR parameters, and vice versa. Therefore, compressive estimation of AR parameters amounts to CCS for such signals. However, previous CCS methods typically do not fully exploit the structure of AR power spectra. On the other hand, traditional AR parameter estimation methods cannot be used when only a compressed version of the AR signal is observed. We propose a Bayesian algorithm for estimating AR parameters from compressed observations, using a Metropolis-Hastings sampler. Simulation results confirm the promising performance of the proposed method.
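The key structural fact, that the AR parameters determine the covariance of the compressed observations, can be sketched for an AR(1) process. This is a minimal illustration under my own simplifying assumptions (AR(1), known unit innovation variance, flat prior, random-walk proposals), not the paper's algorithm: with z = Phi x, the likelihood of the AR coefficient is a zero-mean Gaussian with covariance Phi C(a) Phi^T, and a Metropolis-Hastings chain explores it directly.

```python
import numpy as np

def ar1_covariance(a, sigma2, n):
    """Covariance matrix of a stationary AR(1) process x_t = a*x_{t-1} + w_t."""
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return (sigma2 / (1.0 - a**2)) * a**lags

def estimate_ar1_mh(z, Phi, sigma2=1.0, n_iter=2000, step=0.05, seed=1):
    """Metropolis-Hastings sketch for estimating the AR(1) coefficient
    from compressed observations z = Phi @ x, where x itself is unobserved."""
    rng = np.random.default_rng(seed)
    n = Phi.shape[1]

    def loglik(a):
        if not (-0.99 < a < 0.99):          # enforce stationarity
            return -np.inf
        C = Phi @ ar1_covariance(a, sigma2, n) @ Phi.T
        _, logdet = np.linalg.slogdet(C)
        return -0.5 * (logdet + z @ np.linalg.solve(C, z))

    a, ll = 0.0, loglik(0.0)
    samples = []
    for _ in range(n_iter):
        a_prop = a + step * rng.standard_normal()   # random-walk proposal
        ll_prop = loglik(a_prop)
        if np.log(rng.random()) < ll_prop - ll:
            a, ll = a_prop, ll_prop
        samples.append(a)
    return float(np.mean(samples[n_iter // 2:]))    # posterior-mean estimate after burn-in
```

Note that the chain never needs the uncompressed signal x; only the compressed measurements and the sensing matrix enter the likelihood, which is what makes compressive AR parameter estimation possible at all.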