Kjersti Engan
University of Stavanger
Publication
Featured research published by Kjersti Engan.
IEEE Transactions on Signal Processing | 2005
Shane F. Cotter; Bhaskar D. Rao; Kjersti Engan; Kenneth Kreutz-Delgado
We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single-measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS), to the multiple-measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple-measurement problem are developed, and algorithms are derived based on their minimization. A simulation study conducted on a test-case dictionary shows how using more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performances are compared.
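The greedy MMV extension described above can be sketched as a simultaneous orthogonal matching pursuit: at each step, select the atom whose correlation, summed over all measurement columns, is largest, then re-fit all selected coefficients jointly. This is a minimal sketch under stated assumptions (the function name and the orthogonal re-fit are illustrative; the paper develops several MP and FOCUSS variants in detail):

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP sketch for the multiple-measurement-vector problem.
    A: (m, n) dictionary, Y: (m, L) measurements sharing one unknown support,
    k: number of atoms to select (assumed common sparsity level)."""
    m, n = A.shape
    R = Y.copy()                      # residual matrix, one column per measurement
    support = []
    for _ in range(k):
        # pick the atom with the largest total correlation across measurements
        corr = np.linalg.norm(A.T @ R, axis=1)
        corr[support] = 0.0           # never reselect an atom
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        # jointly re-fit all selected coefficients by least squares
        X_s, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support, :] = X_s
    return X, sorted(support)
```

With more measurement columns L, the summed correlations average out noise, which is the mechanism behind the performance improvement reported in the abstract.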
Neural Computation | 2003
Kenneth Kreutz-Delgado; Joseph F. Murray; Bhaskar D. Rao; Kjersti Engan; Te-Won Lee; Terrence J. Sejnowski
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial 25 words or less), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
IEEE Transactions on Signal Processing | 2003
Bhaskar D. Rao; Kjersti Engan; Shane F. Cotter; Jason A. Palmer; Kenneth Kreutz-Delgado
We develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit; sparsity criterion; L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.
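The iterative procedure described above can be sketched as a reweighted, Tikhonov-damped pseudoinverse step: each iteration re-weights the columns by the magnitude of the previous iterate, so small entries are progressively suppressed toward a sparse fixed point. A simplified rendering under stated assumptions (the fixed regularization value `lam` is illustrative; the paper selects it by quality-of-fit, sparsity, or modified L-curve criteria):

```python
import numpy as np

def regularized_focuss(A, y, lam=1e-4, n_iter=50, tol=1e-10):
    """Sketch of a regularized FOCUSS iteration.
    A: (m, n) underdetermined dictionary, y: (m,) measurement,
    lam: regularization parameter accounting for noise."""
    m, n = A.shape
    x = np.ones(n)                        # nonzero start so no entry is pruned at once
    for _ in range(n_iter):
        W = np.diag(np.abs(x))            # affine-scaling weights from previous iterate
        AW = A @ W
        # damped MAP update: x = W (AW)^T (AW AW^T + lam I)^{-1} y
        x_new = W @ AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

As the iterate concentrates onto a few entries, the weighting matrix W zeros out the rest, illustrating why the stable fixed points are sparse.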
IEEE Transactions on Signal Processing | 2010
Karl Skretting; Kjersti Engan
We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is a continuous update of the dictionary as each training vector is processed. The core of the algorithm is compact and can be implemented efficiently. The algorithm is derived very much along the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor λ can be introduced and easily implemented in the algorithm. Adjusting λ appropriately makes the algorithm less dependent on the initial dictionary and improves both the convergence properties of RLS-DLA and the representation ability of the resulting dictionary. Two sets of experiments are done to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup; the goal of the second is the reconstruction of a true underlying dictionary. The first experiment confirms the properties conjectured in the derivation, while the second demonstrates excellent performance.
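The continuous, per-vector update with forgetting factor λ can be sketched as a rank-one correction of the dictionary, in the spirit of an RLS recursion. This is a simplified single-step sketch under stated assumptions: the 1-sparse coding step stands in for a proper vector selection algorithm, and `C` denotes a running inverse coefficient-correlation matrix (initialized, e.g., to the identity):

```python
import numpy as np

def rls_dla_step(D, C, x, lam=0.99):
    """One RLS-DLA style update for a single training vector x (a sketch).
    D: (m, n) dictionary, C: (n, n) inverse coefficient-correlation matrix,
    lam: forgetting factor in (0, 1]."""
    # toy sparse coding: pick the single best atom (a real implementation
    # would use a vector selection algorithm such as OMP)
    idx = int(np.argmax(np.abs(D.T @ x)))
    w = np.zeros(D.shape[1])
    w[idx] = D[:, idx] @ x
    r = x - D @ w                          # representation error for x
    u = C @ w
    alpha = 1.0 / (lam + w @ u)
    D = D + alpha * np.outer(r, u)         # rank-one dictionary correction
    C = (C - alpha * np.outer(u, u)) / lam # RLS-style inverse-matrix update
    return D, C
```

Each step nudges the dictionary so that the current training vector is represented better; λ < 1 discounts old training vectors, which is what reduces the dependence on the initial dictionary.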
Signal Processing | 2000
Kjersti Engan; Sven Ole Aase; John Håkon Husøy
This paper consists of two parts. The first part concerns approximation capabilities in using an overcomplete dictionary, a frame, for block coding. A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. We call the technique method of optimal directions (MOD). It is iterative and requires a training set of signal vectors. Experiments demonstrate that the approximation capabilities of the optimized frames are significantly better than those obtained using frames designed by ad hoc techniques or chosen in an ad hoc fashion. Experiments show typical reduction in mean squared error (MSE) by 30–80% for speech and electrocardiogram (ECG) signals. The second part concerns a complete compression scheme using a set of optimized frames, and evaluates the use of both fixed-size and variable-size frames. A signal compression scheme using frames optimized with the MOD technique is proposed. The technique, called multi-frame compression (MFC), uses several different frames, each optimized for a fixed number of selected frame vectors in each approximation. We apply the MOD and the MFC scheme to ECG signals. The coding results are compared with results obtained when using transform-based compression schemes like the discrete cosine transform (DCT) in combination with run-length and entropy coding. The experiments demonstrate improved rate-distortion performance by 2–4 dB for the MFC scheme when compared to the DCT at low bit-rates. They also show that variable-size frames in the compression scheme perform better than fixed-size frames.
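The MOD frame update is a closed-form least-squares fit of the frame to the training set given the current sparse coefficients; one outer iteration alternates vector selection (e.g. MP) with this update. A minimal sketch of the update step (variable names are illustrative, and the renormalization convention is an assumption):

```python
import numpy as np

def mod_update(X, Gamma):
    """One MOD dictionary (frame) update.
    X: (m, N) training vectors, Gamma: (n, N) sparse coefficients from the
    vector selection step. Returns the least-squares optimal frame."""
    # D = X Gamma^T (Gamma Gamma^T)^{-1}, the frame minimizing ||X - D Gamma||_F
    D = X @ Gamma.T @ np.linalg.pinv(Gamma @ Gamma.T)
    # renormalize the frame vectors to unit length
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    return D
```

Because the update is globally optimal for the fixed coefficients, each outer iteration cannot increase the training MSE, which is what drives the 30–80% MSE reductions reported above.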
Digital Signal Processing | 2007
Kjersti Engan; Karl Skretting; John Håkon Husøy
The use of overcomplete dictionaries, or frames, for sparse signal representation has been given considerable attention in recent years. The major challenges are good algorithms for sparse approximations, i.e., vector selection algorithms, and good methods for choosing or designing dictionaries/frames. This work is concerned with the latter. We present a family of iterative least squares based dictionary learning algorithms (ILS-DLA), including algorithms for the design of signal-dependent block-based dictionaries and overlapping dictionaries, as generalizations of transforms and filter banks, respectively. In addition, different constraints can be included in the ILS-DLA, and we present several constrained design algorithms. Experiments show that ILS-DLA is capable of reconstructing (most of) the generating dictionary vectors from a sparsely generated data set, with and without noise. The dictionaries are shown to be useful in applications like signal representation and compression, where experiments demonstrate that our ILS-DLA dictionaries substantially improve compression results compared to traditional signal expansions such as transforms and filter banks/wavelets.
International Symposium on Circuits and Systems | 1999
Kjersti Engan; Sven Ole Aase; John Håkon Husøy
The method of optimal directions (MOD) is an iterative method for designing frames for sparse representation purposes using a training set. In this paper we use frames designed by MOD in a multi-frame compression (MFC) scheme. Both the MOD and the MFC need a vector selection algorithm, and orthogonal matching pursuit (OMP) is used in this paper. In the MFC scheme several different frames are used, each optimized for a fixed number of selected frame vectors in each approximation. We apply the MOD and the MFC scheme to ECG signals, and run experiments with both fixed and variable sizes for the different frames used in the MFC scheme. Compared to traditional transform-based compression, the experiments demonstrate improved rate-distortion performance by 1–4 dB, and that variable-size frames perform better than fixed-size frames.
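The vector selection step named above, orthogonal matching pursuit, greedily selects one frame vector per step and re-fits all selected coefficients by least squares before computing the new residual. A minimal single-vector sketch (not the exact implementation used in the paper):

```python
import numpy as np

def omp(F, x, s):
    """Orthogonal matching pursuit sketch.
    F: (m, n) frame, x: (m,) signal, s: number of frame vectors to select."""
    r = x.copy()
    support = []
    for _ in range(s):
        corr = np.abs(F.T @ r)          # correlation with the current residual
        corr[support] = 0.0             # never reselect a frame vector
        support.append(int(np.argmax(corr)))
        Fs = F[:, support]
        # re-fit ALL selected coefficients (the "orthogonal" part of OMP)
        w_s, *_ = np.linalg.lstsq(Fs, x, rcond=None)
        r = x - Fs @ w_s
    w = np.zeros(F.shape[1])
    w[support] = w_s
    return w, support
```

The joint least-squares re-fit keeps the residual orthogonal to every selected frame vector, which is what distinguishes OMP from plain matching pursuit.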
International Conference on Acoustics, Speech, and Signal Processing | 2011
Karl Skretting; Kjersti Engan
The recently presented recursive least squares dictionary learning algorithm (RLS-DLA) is tested in a general image compression application. Dictionaries are learned in the pixel domain and in the 9/7 wavelet domain, and then tested in a straightforward compression scheme. Results are compared with state-of-the-art compression methods. The proposed compression scheme using RLS-DLA learned dictionaries in the 9/7 wavelet domain performs better than using dictionaries learned by other methods. The compression rate is just below the JPEG-2000 rate which is promising considering the simple entropy coding used.
IEEE Transactions on Signal Processing | 2001
Sven Ole Aase; John Håkon Husøy; Karl Skretting; Kjersti Engan
Traditional signal decompositions such as transforms, filter banks, and wavelets generate signal expansions using the analysis-synthesis setting: the expansion coefficients are found by taking the inner product of the signal with the corresponding analysis vector. In this paper, we try to free ourselves from the analysis-synthesis paradigm by concentrating on the synthesis or reconstruction part of the signal expansion. Ignoring the analysis issue completely, we construct sets of synthesis vectors, which are denoted waveform dictionaries, for efficient signal representation. Within this framework, we present an algorithm for designing waveform dictionaries that allow sparse representations: the objective is to approximate a training signal using a small number of dictionary vectors. Our algorithm optimizes the dictionary vectors with respect to the average nonlinear approximation error, i.e., the error resulting when keeping a fixed number n of expansion coefficients but not necessarily the first n coefficients. Using signals from a Gaussian, autoregressive process with correlation factor 0.95, it is demonstrated that for established signal expansions like the Karhunen-Loève transform, the lapped orthogonal transform, and the biorthogonal 7/9 wavelet, it is possible to improve the approximation capabilities by up to 30% by fine-tuning the expansion vectors.
International Conference on Acoustics, Speech, and Signal Processing | 1998
Kjersti Engan; Sven Ole Aase; John Håkon Husøy
A technique for designing frames to use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. An MP algorithm chooses frame vectors to approximate each training vector. Each vector in the frame is then adjusted by using the residuals for the training vectors which used that particular frame vector in their expansion. The frame design algorithm is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean square error (MSE), of the optimized frames are significantly better than those found using frames designed by ad hoc techniques. Experiments show typical reduction in MSE by 20-50%.