Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Martin Sundin is active.

Publication


Featured research published by Martin Sundin.


IEEE Signal Processing Letters | 2012

Alternating Least-Squares for Low-Rank Matrix Reconstruction

Dave Zachariah; Martin Sundin; Magnus Jansson; Saikat Chatterjee

For reconstruction of low-rank matrices from undersampled measurements, we develop an iterative algorithm based on least-squares estimation. While the algorithm can be used for any low-rank matrix, it is also capable of exploiting a-priori knowledge of matrix structure. In particular, we consider linearly structured matrices, such as Hankel and Toeplitz, as well as positive semidefinite matrices. The performance of the algorithm, referred to as alternating least-squares (ALS), is evaluated by simulations and compared to the Cramér-Rao bounds.
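The alternating least-squares idea described in this abstract can be sketched for the unstructured matrix-completion case. This is a generic illustration under our own assumptions (function name, regularization term, and iteration count are ours), not the paper's exact algorithm or its structured variants:

```python
import numpy as np

def als_matrix_completion(Y, mask, rank, n_iters=50, reg=1e-6):
    """Alternating least-squares sketch for low-rank matrix completion.

    Y    : matrix with observed entries (values outside `mask` ignored)
    mask : boolean matrix, True where Y is observed
    rank : target rank r; we factor X ~ L @ R.T with L (m x r), R (n x r)
    """
    m, n = Y.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((n, rank))
    for _ in range(n_iters):
        # Fix R, solve a small regularized LS problem for each row of L
        for i in range(m):
            obs = mask[i]
            A = R[obs]
            L[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                   A.T @ Y[i, obs])
        # Fix L, solve for each row of R (i.e., each column of X)
        for j in range(n):
            obs = mask[:, j]
            A = L[obs]
            R[j] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                   A.T @ Y[obs, j])
    return L @ R.T
```

Each subproblem is an ordinary least-squares fit, which is why the scheme is cheap per iteration; the small ridge term `reg` guards against rows or columns with few observations.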


Signal Processing | 2016

Alternating strategies with internal ADMM for low-rank matrix reconstruction

Kezhi Li; Martin Sundin; Cristian R. Rojas; Saikat Chatterjee; Magnus Jansson

This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We endeavour to combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. The use of ADMM helps to improve the estimate in each iteration due to its capability of incorporating information about the direction of estimates achieved in previous iterations. We show that merging these two alternating strategies leads to better performance and lower computation time than the existing alternating least-squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and in real applications.

Highlights:
- Alternating optimization strategies are good for recovering matrices.
- The matrices in consideration are low-rank matrices with linearly parameterized structures.
- The algorithm combines an alternating least-squares based strategy with ideas from ADMM.
- Merging these two strategies leads to better performance and lower computation time.
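For background on the ADMM ingredient, a minimal sketch on a simple lasso problem (our choice of problem, names, and parameters; not the paper's combined ALS/ADMM algorithm) shows how the scaled dual variable carries direction information from previous iterations into the next update:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.01, rho=1.0, n_iters=200):
    """ADMM sketch for the lasso: min_x 0.5*||A x - y||^2 + lam*||x||_1.
    The problem is split as x = z; the scaled dual u accumulates the
    direction of previous iterates, which drives the per-step improvement."""
    n = A.shape[1]
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iters):
        x = M @ (Aty + rho * (z - u))         # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)  # l1 proximal step
        u = u + x - z                         # dual (direction) update
    return z
```

The same split-and-average pattern is what lets ADMM be embedded inside an alternating estimation loop.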


International Conference on Signal Processing | 2014

Relevance singular vector machine for low-rank matrix sensing

Martin Sundin; Saikat Chatterjee; Magnus Jansson; Cristian R. Rojas

In this paper we develop a new Bayesian inference method for low rank matrix reconstruction. We call the new method the Relevance Singular Vector Machine (RSVM) where appropriate priors are defined on the singular vectors of the underlying matrix to promote low rank. To accelerate computations, a numerically efficient approximation is developed. The proposed algorithms are applied to matrix completion and matrix reconstruction problems and their performance is studied numerically.


European Signal Processing Conference | 2015

Bayesian learning for time-varying linear prediction of speech

Adria Casamitjana; Martin Sundin; Saikat Chatterjee

We develop Bayesian learning algorithms for estimation of time-varying linear prediction (TVLP) coefficients of speech. Estimation of TVLP coefficients is a naturally underdetermined problem. We consider sparsity and subspace based approaches for dealing with the corresponding underdetermined system. Bayesian learning algorithms are developed to achieve better estimation performance. An expectation-maximization (EM) framework is employed to develop the Bayesian learning algorithms, where we use a combined prior to model a driving noise (glottal signal) that has both sparse and dense statistical properties. The efficiency of the Bayesian learning algorithms is shown using a spectral distortion measure for synthetic signals and formant tracking for real speech signals.
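The underlying TVLP model can be sketched in its simplest subspace-restricted form: the time-varying coefficients are expanded in a small polynomial basis and fit by plain least squares. This is only the linear model the abstract starts from, under our own assumptions (basis choice and function names are ours); the paper's contribution is the Bayesian/EM treatment, which is not reproduced here:

```python
import numpy as np

def tvlp_design_matrix(s, order, n_basis):
    """Regression matrix for time-varying linear prediction:
    s[n] ~ sum_k a_k(n) * s[n-k], with a_k(n) = sum_i c[k,i] * (n/N)**i.
    The unknowns are the order*n_basis basis coefficients c."""
    N = len(s)
    rows = []
    for n in range(order, N):
        t = n / N
        basis = np.array([t**i for i in range(n_basis)])
        past = s[n - order:n][::-1]            # s[n-1], ..., s[n-order]
        rows.append(np.outer(past, basis).ravel())
    return np.array(rows), s[order:]

def tvlp_fit(s, order, n_basis):
    """Least-squares fit of the basis coefficients; returns c (order x n_basis)."""
    Phi, target = tvlp_design_matrix(s, order, n_basis)
    c, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return c.reshape(order, n_basis)
```

With `n_basis=1` this reduces to ordinary (time-invariant) linear prediction; larger bases let the predictor track slowly varying vocal-tract dynamics.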


International Conference on Acoustics, Speech, and Signal Processing | 2013

Beamformers for sparse recovery

Martin Sundin; Dennis Sundman; Magnus Jansson

In sparse recovery from measurement data a common approach is to use greedy pursuit reconstruction algorithms. Most of these algorithms have a correlation filter for detecting active components in the sparse data. In this paper, we show how modifications can be made for the greedy pursuit algorithms so that they use beamformers instead of the standard correlation filter. Using these beamformers, improved performance in the algorithms is obtained. In particular, we discuss beamformers for the average and worst case scenario and give methods for constructing them.
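The correlation-filter detection step that the paper replaces with beamformers can be seen in plain orthogonal matching pursuit. This is a generic OMP sketch (ours), not the beamformer variant:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit. Active components are detected with
    the standard correlation filter A.T @ residual (the step the paper
    replaces with beamformers), then the support is refit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                  # never re-select an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Swapping `A.T @ residual` for a beamformer output changes only the detection statistic; the greedy loop and least-squares refit stay the same.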


Journal of High Energy Physics | 2011

A dynamical symmetry for supermembranes

Jonas de Woul; Jens Hoppe; Douglas Lundholm; Martin Sundin

A dynamical symmetry for supersymmetric extended objects is given.


IEEE Transactions on Signal Processing | 2016

Relevance Singular Vector Machine for Low-Rank Matrix Reconstruction

Martin Sundin; Cristian R. Rojas; Magnus Jansson; Saikat Chatterjee

We develop Bayesian learning methods for low-rank matrix reconstruction and completion from linear measurements. For underdetermined systems, the developed methods reconstruct low-rank matrices when neither the rank nor the noise power is known a priori. We derive relations between the proposed Bayesian models and low-rank promoting penalty functions. The relations justify the use of Kronecker structured covariance matrices in a Gaussian-based prior. In the methods, we use expectation maximization to learn the model parameters. The performance of the methods is evaluated through extensive numerical simulations on synthetic and real data.


European Signal Processing Conference | 2015

Bayesian learning for robust principal component analysis

Martin Sundin; Saikat Chatterjee; Magnus Jansson

We develop a Bayesian learning method for robust principal component analysis where the main task is to estimate a low-rank matrix from noisy and outlier contaminated measurements. To promote low-rank, we use a structured Gaussian prior that induces correlations among column vectors as well as row vectors of the matrix under estimation. In our method, the noise and outliers are modeled by a combined noise model. The method is evaluated and compared to other methods using synthetic data as well as data from the MovieLens 100K dataset. Comparisons show that the method empirically provides a significant performance improvement over existing methods.


European Signal Processing Conference | 2017

A connectedness constraint for learning sparse graphs

Martin Sundin; Arun Venkitaraman; Magnus Jansson; Saikat Chatterjee

Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split the graph into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in e.g. convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data.
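The analytical quantity usually associated with connectedness is the algebraic connectivity: the second-smallest eigenvalue of the graph Laplacian, which is strictly positive exactly when an undirected graph is connected. A minimal check of this property (our sketch, not the paper's convex constraint formulation):

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of the graph Laplacian L = D - W for a
    symmetric nonnegative weight matrix W. The undirected graph is
    connected iff this value is strictly positive."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)  # ascending order for symmetric L
    return eigvals[1]
```

Because this eigenvalue is a concave function of the Laplacian, lower-bounding it away from zero is the kind of condition that can be enforced as a convex constraint.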


European Signal Processing Conference | 2016

Bayesian Cramér-Rao bounds for factorized model based low rank matrix reconstruction

Martin Sundin; Saikat Chatterjee; Magnus Jansson

Low-rank matrix reconstruction (LRMR) considers estimation (or reconstruction) of an underlying low-rank matrix from linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramér-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive Bayesian Cramér-Rao bounds for different scenarios. We consider a low-rank random matrix model with hyper-parameters that are deterministic and known, deterministic and unknown, or random. Finally, we compare the bounds with existing estimation algorithms through numerical simulations.
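For reference, the generic form of the Bayesian Cramér-Rao bound that such derivations specialize is the following (notation is ours, not necessarily the paper's):

```latex
% Standard Bayesian Cramér-Rao bound (generic form; notation is ours):
% for any estimator \hat{\theta}(y) of the random parameter \theta,
\mathbb{E}\!\left[(\hat{\theta}-\theta)(\hat{\theta}-\theta)^{\top}\right]
  \succeq J_B^{-1},
\qquad
J_B = \mathbb{E}\!\left[-\nabla_{\theta}\nabla_{\theta}^{\top}\,
      \ln p(y,\theta)\right],
```

where the expectations are taken over the joint density p(y, θ); the paper's factorized-model bounds instantiate this for the factor parameters under the different hyper-parameter scenarios.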

Collaboration


Dive into Martin Sundin's collaborations.

Top Co-Authors

Magnus Jansson, Royal Institute of Technology
Saikat Chatterjee, Royal Institute of Technology
Cristian R. Rojas, Royal Institute of Technology
Adria Casamitjana, Royal Institute of Technology
Arun Venkitaraman, Royal Institute of Technology
Dave Zachariah, Royal Institute of Technology
Dennis Sundman, Royal Institute of Technology
Douglas Lundholm, Royal Institute of Technology
Jens Hoppe, Royal Institute of Technology
Jonas de Woul, Royal Institute of Technology