
Publication


Featured research published by Mike E. Davies.


IEEE Transactions on Signal Processing | 2008

Gradient Pursuits

Thomas Blumensath; Mike E. Davies

Sparse signal approximations have become a fundamental tool in signal processing, with wide-ranging applications from source separation to signal acquisition. The ever-growing number of possible applications and, in particular, the ever-increasing problem sizes now addressed lead to new challenges in terms of computational strategies, and the development of fast and efficient algorithms has become paramount. Recently, very fast algorithms have been developed to solve convex optimization problems that are often used to approximate the sparse approximation problem; however, it has also been shown that, in certain circumstances, greedy strategies, such as orthogonal matching pursuit, can have better performance than the convex methods. In this paper, improvements to greedy strategies are proposed and algorithms are developed that approximate orthogonal matching pursuit with computational requirements more akin to those of matching pursuit. Three directional optimization schemes, based on the gradient, the conjugate gradient, and an approximation to the conjugate gradient, are discussed. It is shown that the conjugate gradient update leads to a novel implementation of orthogonal matching pursuit, while the gradient-based approach and the approximate conjugate gradient method both lead to fast approximations of orthogonal matching pursuit, with the approximate conjugate gradient method being superior to the gradient method.
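
The gradient-based variant lends itself to a compact implementation. Below is a minimal numpy sketch (not the authors' code; dictionary, sparsity, and iteration count are illustrative): atoms are selected as in matching pursuit, but all coefficients on the current support are refined by an exact line search along the gradient.

```python
import numpy as np

def gradient_pursuit(y, Phi, n_iter):
    """Greedy sparse approximation: matching-pursuit atom selection plus
    a gradient update of all coefficients on the current support."""
    x = np.zeros(Phi.shape[1])
    support = []
    r = y.copy()                               # residual y - Phi @ x
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(Phi.T @ r)))  # best-correlated atom
        if k not in support:
            support.append(k)
        S = np.array(support)
        g = Phi[:, S].T @ r                    # gradient direction on S
        Pg = Phi[:, S] @ g
        a = (r @ Pg) / (Pg @ Pg)               # exact line search
        x[S] += a * g
        r = y - Phi[:, S] @ x[S]
    return x

# toy usage: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm atoms
x0 = np.zeros(100); x0[[5, 17, 60]] = [1.0, -2.0, 0.5]
x_hat = gradient_pursuit(Phi @ x0, Phi, n_iter=10)
```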


Applied and Computational Harmonic Analysis | 2013

The cosparse analysis model and algorithms

Sangnam Nam; Mike E. Davies; Michael Elad; Rémi Gribonval

After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today remains shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments, and provide a detailed study of the model associated with the 2D finite difference analysis operator, a close cousin of the TV norm.
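
To make the model concrete, here is a minimal numpy sketch (illustrative, not from the paper) of cosparsity under a 1D finite-difference operator, the one-dimensional analogue of the 2D operator studied in the paper: a signal is cosparse when Omega @ x has many zeros, rather than when a synthesis expansion has few nonzeros.

```python
import numpy as np

n = 64
Omega = (np.eye(n, k=1) - np.eye(n))[:-1]      # (n-1) x n first differences

# a piecewise-constant signal: only 2 of the 63 differences are nonzero
x = np.concatenate([np.full(20, 1.0), np.full(24, -0.5), np.full(20, 2.0)])
z = Omega @ x                                   # analysis coefficients
cosupport = np.flatnonzero(np.abs(z) < 1e-12)   # rows of Omega orthogonal to x
print(f"cosparsity: {cosupport.size} of {Omega.shape[0]} rows")  # 61 of 63
```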


IEEE Journal of Selected Topics in Signal Processing | 2010

Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance

Thomas Blumensath; Mike E. Davies

Sparse signal models are used in many signal processing applications. Estimating the sparsest coefficient vector in these models is a combinatorial problem, so efficient, often suboptimal strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently calculate near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. While this method has strong theoretical performance guarantees whenever certain properties hold, empirical studies show that its performance degrades significantly whenever the conditions fail; what is more, in this regime the algorithm often also fails to converge. As we are interested here in applying the method to real-world problems, in which it is generally not known whether the theoretical conditions are satisfied, we suggest a simple modification that guarantees the convergence of the method even in this regime. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance. Furthermore, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm.
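
A minimal numpy sketch of the normalized step-size idea follows. It is a simplified version of the paper's scheme (the exact backtracking rule and constants differ): the gradient step is normalized on the current support, and shrunk when the support changes.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def normalized_iht(y, Phi, s, n_iter=200, kappa=0.9):
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        r = y - Phi @ x
        if np.linalg.norm(r) < 1e-10:
            break
        g = Phi.T @ r
        # support used to normalise the step: the current support, or the
        # s largest gradient entries before the first update
        S = np.flatnonzero(x) if np.any(x) else np.argsort(np.abs(g))[-s:]
        mu = (g[S] @ g[S]) / np.sum((Phi[:, S] @ g[S]) ** 2)
        x_new = hard_threshold(x + mu * g, s)
        # if the support changed, shrink the step until a stability
        # condition (simplified from the paper) holds
        while (set(np.flatnonzero(x_new)) != set(S)
               and mu * np.sum((Phi @ (x_new - x)) ** 2)
                   > kappa * np.sum((x_new - x) ** 2)):
            mu /= 2.0
            x_new = hard_threshold(x + mu * g, s)
        x = x_new
    return x

# toy usage
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)); Phi /= np.linalg.norm(Phi, axis=0)
x0 = np.zeros(100); x0[[7, 23, 71]] = [2.0, -1.0, 0.7]
print(np.flatnonzero(normalized_iht(Phi @ x0, Phi, s=3)))  # [7 23 71]
```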


IEEE Transactions on Information Theory | 2009

Sampling Theorems for Signals From the Union of Finite-Dimensional Linear Subspaces

Thomas Blumensath; Mike E. Davies

Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper, we consider a more general signal model and assume signals that live on or close to the union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon sampling theorem, which gives a necessary and sufficient condition for the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model: the existence of one-to-one maps to lower dimensional observation spaces and the smoothness of the inverse map. We show that almost all linear maps are one-to-one when the observation space is at least of the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that in order for the inverse map to have certain smoothness properties, such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the signal model. In other words, while unique linear sampling schemes require a small number of samples depending only on the dimension of the subspaces involved, stable sampling methods require a number of samples that depends logarithmically on the number of subspaces in the model. These results are then applied to two examples: the standard compressed sensing signal model, in which the signal has a sparse representation in an orthonormal basis, and a sparse signal model with additional tree structure.
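
A small numerical illustration of the stability side of these results (illustrative, not from the paper): differences of m-sparse vectors are 2m-sparse, so stable inversion on the union of m-dimensional coordinate subspaces is governed by the worst-conditioned 2m-column submatrix of the sampling matrix.

```python
import numpy as np
from itertools import combinations

# Measure the worst minimal singular value over all 2m-column submatrices
# for increasing numbers of samples: injectivity can hold with very few
# samples, but the inverse map only becomes stable as samples grow.
rng = np.random.default_rng(1)
n, m = 12, 2
for n_samples in (4, 6, 8, 10):
    A = rng.standard_normal((n_samples, n)) / np.sqrt(n_samples)
    worst = min(np.linalg.svd(A[:, list(S)], compute_uv=False)[-1]
                for S in combinations(range(n), 2 * m))
    print(f"{n_samples} samples: worst sigma_min = {worst:.3f}")
```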


IEEE Transactions on Information Theory | 2012

Rank Awareness in Joint Sparse Recovery

Mike E. Davies; Yonina C. Eldar

This paper revisits the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem is an extension of single channel sparse recovery, which lies at the heart of compressed sensing. Inspired by the links to array signal processing, a new family of MMV algorithms is considered that highlights the role of rank in determining the difficulty of the MMV recovery problem. The simplest such method is a discrete version of MUSIC, which is guaranteed to recover the sparse vectors in the full rank MMV setting under mild conditions. This idea is extended to a rank-aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case while also providing guaranteed recovery in the full rank setting. In contrast, popular MMV methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques are shown to be rank-blind in terms of worst case analysis. Numerical simulations demonstrate that the rank-aware techniques deal with multiple measurements significantly better than existing methods.
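
The discrete MUSIC step is simple to state in code. A minimal numpy sketch, assuming the full rank setting and unit-norm dictionary columns (toy sizes are illustrative): in that setting a column of A belongs to the support exactly when it lies in the column span of Y.

```python
import numpy as np

def mmv_music(Y, A, k):
    """Discrete MUSIC for the joint-sparse MMV problem in the full rank
    case (rank(Y) = k)."""
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]  # signal subspace
    # fraction of each column's energy inside the signal subspace
    scores = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    support = np.sort(np.argsort(scores)[-k:])
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
    return support, X

# toy usage: 3 jointly sparse channels with full rank coefficients
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50)); A /= np.linalg.norm(A, axis=0)
X0 = np.zeros((50, 3)); X0[[3, 11, 40]] = rng.standard_normal((3, 3))
support, _ = mmv_music(A @ X0, A, k=3)
print(support)                                  # [ 3 11 40]
```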


Signal Processing | 2007

Source separation using single channel ICA

Mike E. Davies; Christopher J. James

Many researchers have recently used independent component analysis (ICA) to generate codebooks or features for a single channel of data. We examine the nature of these codebooks and identify when such features can be used to extract independent components from a stationary scalar time series. This question is motivated by empirical work suggesting that single channel ICA can sometimes be used to separate out important components from a time series. Here we show that, as long as the sources are reasonably spectrally disjoint, we can identify and approximately separate out individual sources. However, the linear nature of the separation equations means that when the sources have substantially overlapping spectra, both identification using standard ICA and linear separation are no longer possible.
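
A rough sketch of the single-channel ICA recipe, assuming scikit-learn's FastICA (signal, embedding length, and component count are illustrative): embed the scalar series in a delay matrix and run standard ICA on the embedded coordinates.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.arange(4000)
s1 = np.sin(2 * np.pi * 0.01 * t)              # low-frequency source
s2 = np.sin(2 * np.pi * 0.20 * t)              # high-frequency source
x = s1 + s2 + 0.01 * rng.standard_normal(t.size)  # one observed channel

L = 32                                          # embedding dimension
frames = np.lib.stride_tricks.sliding_window_view(x, L)
ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(frames)
# Each sinusoid occupies a 2-D subspace of the embedding (two quadrature
# phases), so the components pair up by dominant frequency; grouping them
# by spectrum and projecting back gives the separated sources, which is
# only possible here because s1 and s2 are spectrally disjoint.
```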


IEEE Transactions on Signal Processing | 2009

Dictionary Learning for Sparse Approximations With the Majorization Method

Mehrdad Yaghoobi; Thomas Blumensath; Mike E. Davies

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can be used for the dictionary learning problem too. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are evaluated. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
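
A minimal sketch of the alternating scheme in numpy (illustrative, not the authors' code): both the sparse coding step, here with an l1 penalty, and the dictionary update minimize a majorizing surrogate, with a unit-norm constraint on the columns as one example of the constraints considered.

```python
import numpy as np

def ista_step(X, D, Y, lam):
    """One majorized sparse-coding step (ISTA): the coupling term is
    majorized so the l1-penalised update separates per coefficient."""
    c = np.linalg.norm(D, 2) ** 2 + 1e-12      # majorizer constant >= ||D||^2
    Z = X + D.T @ (Y - D @ X) / c
    return np.sign(Z) * np.maximum(np.abs(Z) - lam / c, 0.0)

def dict_step(X, D, Y):
    """One majorized dictionary update, then projection onto the
    unit-norm-column constraint set."""
    c = np.linalg.norm(X, 2) ** 2 + 1e-12
    D = D + (Y - D @ X) @ X.T / c
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

rng = np.random.default_rng(4)
Y = rng.standard_normal((16, 500))             # training samples as columns
D = rng.standard_normal((16, 32)); D /= np.linalg.norm(D, axis=0)
X = np.zeros((32, 500))
for _ in range(50):                             # alternate the two MM steps
    for _ in range(5):
        X = ista_step(X, D, Y, lam=0.1)
    D = dict_step(X, D, Y)
```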


Proceedings of the IEEE | 2010

Sparse Representations in Audio and Music: From Coding to Source Separation

Mark D. Plumbley; Thomas Blumensath; Laurent Daudet; Rémi Gribonval; Mike E. Davies

Sparse representations have proved a powerful tool in the analysis and processing of audio signals and already lie at the heart of popular coding standards such as MP3 and Dolby AAC. In this paper we give an overview of a number of current and emerging applications of sparse representations in areas from audio coding, audio enhancement and music transcription to blind source separation solutions that can solve the "cocktail party problem." In each case we will show how the prior assumption that the audio signals are approximately sparse in some time-frequency representation allows us to address the associated signal processing task.
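
The prior itself is easy to demonstrate. A minimal scipy sketch (illustrative signal and threshold): keep only the largest STFT coefficients and reconstruct.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

f, frames, Z = stft(x, fs=fs, nperseg=512)
thresh = np.quantile(np.abs(Z), 0.95)           # keep the top 5% of atoms
Z_sparse = np.where(np.abs(Z) >= thresh, Z, 0)
_, x_hat = istft(Z_sparse, fs=fs, nperseg=512)

err = np.linalg.norm(x - x_hat[: x.size]) / np.linalg.norm(x)
print(f"relative error using 5% of the coefficients: {err:.3f}")
```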


IEEE Transactions on Signal Processing | 2009

Parametric Dictionary Design for Sparse Coding

Mehrdad Yaghoobi; Laurent Daudet; Mike E. Davies

This paper introduces a new dictionary design method for sparse coding of a class of signals. It has been shown that one can sparsely approximate some natural signals using an overcomplete set of parametric functions. A problem in using these parametric dictionaries is how to choose the parameters; in practice, they have been chosen by an expert or through a set of experiments. In the sparse approximation context, it has been shown that an incoherent dictionary is appropriate for sparse approximation methods. In this paper, we first characterize the dictionary design problem, subject to a constraint on the dictionary. Then we briefly explain that equiangular tight frames have minimum coherence. The complexity of the problem does not allow it to be solved exactly, so we introduce a practical method to solve it approximately. Experiments show the advantages gained by using these dictionaries.
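
For reference, the design target can be checked numerically: equiangular tight frames attain the Welch bound on mutual coherence. A minimal numpy sketch (sizes are illustrative) comparing a random dictionary against that bound; a parametric design procedure would adjust the atom parameters to push the coherence toward it.

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 16, 40
D = rng.standard_normal((d, N))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms

G = np.abs(D.T @ D)                             # Gram matrix magnitudes
np.fill_diagonal(G, 0.0)
mu = G.max()                                    # mutual coherence
welch = np.sqrt((N - d) / (d * (N - 1)))        # coherence of an ETF
print(f"coherence = {mu:.3f}, Welch bound = {welch:.3f}")
```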


IEEE Transactions on Information Theory | 2009

Restricted isometry constants where ℓp sparse recovery can fail for 0 < p ≤ 1

Mike E. Davies; Rémi Gribonval

This paper investigates conditions under which the solution of an underdetermined linear system with minimal ℓp norm, 0 < p ≤ 1, is guaranteed to be also the sparsest one. Matrices are constructed with restricted isometry constants (RIC) δ2m arbitrarily close to 1/√2 ≈ 0.707 where sparse recovery with p = 1 fails for at least one m-sparse vector, as well as matrices with δ2m arbitrarily close to one where ℓ1 minimization succeeds for any m-sparse vector. This highlights the pessimism of sparse recovery prediction based on the RIC, and indicates that there is limited room for improving over the best known positive results of Foucart and Lai, which guarantee that ℓ1 minimization recovers all m-sparse vectors for any matrix with δ2m < 2(3 - √2)/7 ≈ 0.4531. These constructions are a by-product of tight conditions for ℓp recovery (0 ≤ p ≤ 1) with matrices of unit spectral norm, which are expressed in terms of the minimal singular values of 2m-column submatrices. Compared to ℓ1 minimization, ℓp minimization recovery failure is shown to be only slightly delayed in terms of the RIC values. Furthermore, in this case the minimization is nonconvex and it is important to consider the specific minimization algorithm being used. It is shown that when ℓp optimization is attempted using an iterative reweighted ℓ1 scheme, failure can still occur for δ2m arbitrarily close to 1/√2.
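
The RIC δ2m that drives these results is determined by the extreme singular values of 2m-column submatrices, so it can be computed by brute force for tiny matrices. A minimal numpy sketch (illustrative sizes):

```python
import numpy as np
from itertools import combinations

def ric(A, k):
    """delta_k: smallest delta with (1-delta)||x||^2 <= ||Ax||^2
    <= (1+delta)||x||^2 for all k-sparse x (brute force over supports)."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return delta

rng = np.random.default_rng(6)
m = 2
A = rng.standard_normal((12, 18)) / np.sqrt(12)
print(f"delta_2m = {ric(A, 2 * m):.3f}")
# l1 recovery of every m-sparse vector is guaranteed when
# delta_2m < 2(3 - sqrt(2))/7 ~ 0.4531 (Foucart and Lai); the paper
# builds matrices with delta_2m near 1/sqrt(2) ~ 0.707 where it fails.
```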

Collaboration


Dive into Mike E. Davies's collaborations.

Top Co-Authors

Mohammad Golbabaee

École Polytechnique Fédérale de Lausanne

Ian Marshall

University of Edinburgh

Di Wu

University of Edinburgh

Chunli Guo

University of Edinburgh