
Publications


Featured research published by Mehrdad Yaghoobi.


IEEE Transactions on Signal Processing | 2009

Dictionary Learning for Sparse Approximations With the Majorization Method

Mehrdad Yaghoobi; Thomas Blumensath; Mike E. Davies

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum; this holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can also be used for the dictionary learning problem. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
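The dictionary-update half of such a scheme can be sketched in a few lines. This is an illustrative sketch, not the paper's exact algorithm: the function name, iteration count, and the unit-ball column constraint are our assumptions. Majorizing the quadratic data-fit term with a constant c >= ||X X^T||_2 turns each update into a simple Landweber-type gradient step followed by a projection onto the constraint set.

```python
import numpy as np

def majorized_dict_update(D, X, Y, n_iter=20):
    """Sketch: decrease ||Y - D X||_F^2 over the dictionary D with the
    majorization method. Adding (c/2)||D - D_t||_F^2 with c >= ||X X^T||_2
    gives a surrogate whose minimizer is a plain gradient (Landweber) step;
    each step is followed by projection of the atoms onto the unit ball
    (one admissible dictionary constraint)."""
    c = np.linalg.norm(X @ X.T, 2) + 1e-12        # majorization constant
    for _ in range(n_iter):
        D = D - (D @ X - Y) @ X.T / c             # majorized gradient step
        D = D / np.maximum(np.linalg.norm(D, axis=0), 1.0)  # ball projection
    return D
```

In a full learner this update alternates with a sparse coding step (itself solvable by majorization, e.g. iterative soft thresholding).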


IEEE Transactions on Signal Processing | 2009

Parametric Dictionary Design for Sparse Coding

Mehrdad Yaghoobi; Laurent Daudet; Mike E. Davies

This paper introduces a new dictionary design method for sparse coding of a class of signals. It has been shown that one can sparsely approximate some natural signals using an overcomplete set of parametric functions. A problem in using these parametric dictionaries is how to choose the parameters; in practice, they have been chosen by an expert or through a set of experiments. In the sparse approximation context, it has been shown that an incoherent dictionary is appropriate for sparse approximation methods. In this paper, we first characterize the dictionary design problem, subject to a constraint on the dictionary. Then we briefly explain that equiangular tight frames have minimum coherence. The complexity of the problem does not allow it to be solved exactly, so we introduce a practical method to solve it approximately. Experiments show the advantages gained by using these dictionaries.
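The incoherence target can be made concrete with a few lines of code. Mutual coherence and the Welch bound (which equiangular tight frames attain with equality) are standard definitions; the helper names below are our own.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized atoms
    (columns) of the dictionary D."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize each atom
    G = np.abs(Dn.T @ Dn)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

def welch_bound(n, m):
    """Lower bound on the coherence of any n x m dictionary with m > n;
    equiangular tight frames achieve it."""
    return np.sqrt((m - n) / (n * (m - 1)))
```

For example, the three unit vectors at 120-degree angles in the plane form an equiangular tight frame, so their coherence equals the Welch bound of 0.5.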


IEEE Transactions on Signal Processing | 2013

Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

Mehrdad Yaghoobi; Sangnam Nam; Rémi Gribonval; Mike E. Davies

We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well-known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterized by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimization framework based on L1 optimization. The reason for introducing a constraint into the optimization framework is to exclude trivial solutions. Although there is no final answer on which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalized tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimization problem is not a convex program, such variational methods often find a local minimum. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
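The cosparse model itself is easy to illustrate: a signal is p-cosparse under an analysis operator when the analysis representation has p zeros. The finite-difference operator below is a standard textbook example of an analysis operator, not the one learnt in the paper.

```python
import numpy as np

def cosparsity(Omega, x, tol=1e-10):
    """Cosparsity p of x under the analysis operator Omega: the number of
    (near-)zero entries of Omega @ x. The synthesis model counts non-zeros
    of a coefficient vector; the analysis model counts zeros."""
    return int(np.sum(np.abs(Omega @ x) < tol))

def finite_difference_operator(n):
    """First-order finite differences: a classic analysis operator under
    which piecewise-constant signals are highly cosparse."""
    return np.eye(n)[1:] - np.eye(n)[:-1]
```

A piecewise-constant signal of length 6 with one jump has four zero differences, hence cosparsity 4 under this operator.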


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2007

Iterative Hard Thresholding and L0 Regularisation

Thomas Blumensath; Mehrdad Yaghoobi; Mike E. Davies

Sparse signal approximations are approximations that use only a small number of elementary waveforms to describe a signal. In this paper we prove the convergence of an iterative hard thresholding algorithm and show that the fixed points of that algorithm are local minima of the sparse approximation cost function, which measures both the reconstruction error and the number of elements in the representation. Simulation results suggest that the algorithm is comparable in performance to a commonly used alternative method.
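The algorithm is compact enough to sketch directly. The step size and iteration count below are illustrative choices; the convergence theory assumes the operator norm of A is below one.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(y, A, k, n_iter=200):
    """Iterative hard thresholding: x <- H_k(x + A^T (y - A x)),
    a fixed-point iteration for the L0-regularised least-squares
    cost, run for a fixed number of iterations."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + A.T @ (y - A @ x), k)
    return x
```

With a well-conditioned operator scaled to have norm below one, the iteration recovers a sparse vector from its measurements exactly.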


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2012

Noise aware analysis operator learning for approximately cosparse signals

Mehrdad Yaghoobi; Sangnam Nam; Rémi Gribonval; Mike E. Davies

This paper investigates analysis operator learning for the recently introduced cosparse signal model, a natural analysis complement to the more traditional sparse signal model. Previous work on such analysis operator learning has relied on access to a set of clean training samples. Here we introduce a new learning framework which can use training data that is corrupted by noise and/or is only approximately cosparse. The new model assumes that a p-cosparse signal exists in an epsilon neighborhood of each data point. The operator is assumed to be a uniformly normalized tight frame (UNTF), to exclude some trivial operators. In this setting, an alternating optimization algorithm is introduced to learn a suitable analysis operator.
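One convenient way to push an operator toward the UNTF constraint set in practice is alternating projections between tight frames and unit-norm rows. That choice is an assumption of this sketch, not necessarily the paper's exact projection, and alternating projections is a heuristic with no guarantee of reaching the nearest UNTF.

```python
import numpy as np

def toward_untf(Omega, n_iter=100):
    """Drive a p x n operator toward a uniformly normalized tight frame by
    alternating two steps: (i) replace Omega by the nearest tight frame
    satisfying Omega^T Omega = (p/n) I, via the polar factor from the SVD;
    (ii) rescale every row to unit norm."""
    p, n = Omega.shape
    for _ in range(n_iter):
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = np.sqrt(p / n) * (U @ Vt)            # tight-frame step
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)
    return Omega
```

The last step always normalizes the rows, so the output has exactly unit-norm rows while being approximately tight.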


IEEE Signal Processing Letters | 2015

Fast Non-Negative Orthogonal Matching Pursuit

Mehrdad Yaghoobi; Di Wu; Mike E. Davies

Non-negative signals form an important class of sparse signals. Many algorithms have been proposed to recover such non-negative representations, with greedy and convex relaxation algorithms among the most popular methods. The greedy techniques have been modified to incorporate the non-negativity of the representations. One such modification has been proposed for Orthogonal Matching Pursuit (OMP), which first chooses positive coefficients and uses a non-negative optimisation technique as a replacement for the orthogonal projection onto the selected support. Besides the extra computational cost of the optimisation program, it does not benefit from the fast implementation techniques of OMP, which are based on matrix factorisations. We first investigate the problem of positive representation using pursuit algorithms. We then describe a new implementation which can fully incorporate the positivity constraint of the coefficients throughout the selection stage of the algorithm. As a result, we present a novel fast implementation of Non-Negative OMP, based on the QR decomposition and an iterative coefficient update. We empirically show that such a modification can easily accelerate the implementation by a factor of ten for reasonably sized problems.
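A reference (unaccelerated) version of the non-negative greedy step is easy to write down. This sketch keeps the repeated non-negative least-squares solve for clarity; it is exactly this solve that the fast QR-based variant described above avoids.

```python
import numpy as np
from scipy.optimize import nnls

def nn_omp(y, D, k):
    """Reference non-negative OMP: at each step pick the atom with the
    largest *positive* correlation with the residual, then refit all
    selected coefficients with non-negative least squares (NNLS)."""
    support = []
    x = np.zeros(D.shape[1])
    r = y.astype(float).copy()
    for _ in range(k):
        c = D.T @ r
        j = int(np.argmax(c))
        if c[j] <= 0:                    # no positively correlated atom left
            break
        if j not in support:
            support.append(j)
        coef, _ = nnls(D[:, support], y)  # constrained refit on the support
        x[:] = 0.0
        x[support] = coef
        r = y - D @ x                     # update the residual
    return x
```

With the identity dictionary and a non-negative target, two greedy steps recover the coefficients exactly.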


2009 IEEE/SP 15th Workshop on Statistical Signal Processing | 2009

Compressible dictionary learning for fast sparse approximations

Mehrdad Yaghoobi; Mike E. Davies

By solving a linear inverse problem under a sparsity constraint, one can successfully recover the coefficients, if such a sparse approximation exists for the proposed class of signals. In this framework the dictionary can be adapted to a given set of signals using dictionary learning methods. The learned dictionary often does not have useful structures for a fast implementation, i.e. fast matrix-vector multiplication, which prevents such a dictionary from being used in real applications or large-scale problems. Structure can be induced on the dictionary throughout the learning process; examples of such structures are shift-invariance and multiscale structure, and such dictionaries can be efficiently implemented using a filter bank. In this paper a well-known structure, called compressibility, is adapted for use in the dictionary learning problem. As a result, the complexity of implementing a compressible dictionary can be reduced by wisely choosing a generative model. Simulations show that the learned dictionary provides sparser approximations, while it does not increase the computational complexity of the algorithms with respect to pre-designed fast structured dictionaries.
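The compressibility idea can be sketched as approximating each learned atom with a few coefficients in a fast orthonormal base B, so that with D ≈ B S and S sparse, a dictionary-vector product costs one sparse multiply plus one fast transform. The choice of base and the per-atom budget k are assumptions of this illustration.

```python
import numpy as np

def compress_dictionary(D, B, k):
    """Sketch of a compressible dictionary: represent each atom of D with
    at most k coefficients in an orthonormal base B, i.e. D ~ B @ S with
    S column-wise k-sparse, by keeping the k largest coefficients per atom
    (k >= 1 assumed)."""
    S = B.T @ D                                   # exact coefficients in base B
    for j in range(S.shape[1]):
        small = np.argsort(np.abs(S[:, j]))[:-k]  # all but the k largest
        S[small, j] = 0.0
    return S                                      # fast dictionary: B @ S
```

If the dictionary is exactly k-sparse in the base, the compression is lossless.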


IEEE Transactions on Aerospace and Electronic Systems | 2014

Sparsity-based autofocus for undersampled synthetic aperture radar

Shaun I. Kelly; Mehrdad Yaghoobi; Mike E. Davies

Motivated by the field of compressed sensing and sparse recovery, nonlinear algorithms have been proposed for the reconstruction of synthetic-aperture-radar images when the phase history is undersampled. These algorithms assume exact knowledge of the system acquisition model. In this paper we investigate the effects of acquisition-model phase errors when the phase history is undersampled. We show that the standard methods of autofocus, which are used as a postprocessing step on the reconstructed image, are typically not suitable. Instead of applying autofocus in postprocessing, we propose an algorithm that corrects phase errors during the image reconstruction. The performance of the algorithm is investigated quantitatively and qualitatively through numerical simulations on two practical scenarios where the phase histories contain phase errors and are undersampled.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2014

A Low-Complexity Sub-Nyquist Sampling System for Wideband Radar ESM Receivers

Mehrdad Yaghoobi; Michael A. Lexa; Fabien Millioz; Mike E. Davies

The problem of efficient sampling of wideband radar signals for Electronic Support Measures (ESM) is investigated in this paper. Wideband radio frequency sampling generally needs a sampling rate at least twice the maximum frequency of the signal, i.e. the Nyquist rate, which is generally very high. However, when the signal is highly structured, as wideband radar signals are, we can use the fact that the signals do not occupy the whole spectrum and instead exhibit a parsimonious structure in the time-frequency domain. Here, we use this fact and introduce a novel low-complexity sampling system with a recovery guarantee, assuming that received RF signals follow a particular structure. The proposed technique is inspired by the compressive sampling of sparse signals and uses a multi-coset sampling setting; however, it does not involve a computationally expensive reconstruction step. We call this the Low-Complexity Multi-Coset (LoCoMC) sampling technique. Simulation results show that the proposed sub-Nyquist sampling technique works well in simulated ES scenarios.
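The multi-coset sampling pattern itself is simple to express: partition a Nyquist-rate stream into blocks of length L and keep only p < L fixed positions (cosets) per block, for an average rate of p/L times Nyquist. The block length and coset choice below are arbitrary illustrations; the LoCoMC recovery stage is not sketched here.

```python
import numpy as np

def multicoset_sample(x, L, cosets):
    """Multi-coset sampling sketch: from a Nyquist-rate sequence x, keep
    only the samples whose index modulo L lies in `cosets`. The average
    sampling rate is len(cosets)/L of the Nyquist rate."""
    x = np.asarray(x)
    n = len(x) - len(x) % L          # trim to whole blocks
    blocks = x[:n].reshape(-1, L)    # one row per length-L block
    return blocks[:, cosets]         # p kept samples per block
```

For example, keeping cosets {0, 2} of every block of 4 halves the sampling rate.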


2014 Sensor Signal Processing for Defence (SSPD) | 2014

An efficient implementation of the low-complexity multi-coset sub-Nyquist wideband radar electronic surveillance

Mehrdad Yaghoobi; Bernard Mulgrew; Mike E. Davies

The problem of efficient sampling of wideband radar signals for Electronic Surveillance (ES) using a parallel sampling structure is investigated in this paper. Wideband radio frequency sampling, which is a necessary component of modern digital radar surveillance systems, needs a sampling rate at least twice the maximum frequency of the signals, i.e. the Nyquist rate, which is generally very high. Designing an analog-to-digital converter which works at such a high sampling rate is difficult and expensive. Standard wideband ES receivers use the rapidly swept superheterodyne technique, which selects one subband of the spectrum at a time while iterating through the whole spectrum sequentially. Such a technique does not explore the underlying structure of the input RF signals. When the signal is sparsely structured, we can use the fact that the signals do not occupy the whole spectrum; there indeed exists a parsimonious structure in the time-frequency domain in radar ES signals. We here use a recently introduced low-complexity sampling system, called LoCoMC [1], which is inspired by the compressive sampling (CS) of sparse signals and uses the multi-coset sampling structure, while not involving a computationally expensive reconstruction step. A new implementation technique is introduced which further reduces the computational cost of the reconstruction algorithm by combining two filters, while improving the accuracy by implicitly implementing an infinite-length filter. We also describe the rapidly swept superheterodyne receiver and compare it with the LoCoMC algorithm. In contrast to the former technique, LoCoMC continuously monitors the spectrum, which makes it much more robust in short-pulse detection.

Collaboration


Top co-authors of Mehrdad Yaghoobi:

Mike E. Davies (Pierre-and-Marie-Curie University)
Di Wu (University of Edinburgh)
Ahmed Alzin (University of Strathclyde)
Jamie Corr (University of Strathclyde)