

Publication


Featured research published by Ian K. Proudler.


Signal Processing | 2007

Frequency invariant beamforming for two-dimensional and three-dimensional arrays

Wei Liu; Stephan Weiss; John G. McWhirter; Ian K. Proudler

A novel method for the design of two-dimensional (2-D) and three-dimensional (3-D) arrays with frequency invariant beam patterns is proposed. By suitable substitutions, the beam pattern of a 2-D or 3-D array can be regarded as the 3-D or 4-D Fourier transform of its spatial and temporal parameters. Since frequency invariance can be easily imposed in the Fourier domain, a simple design method is derived. Design examples for the 2-D case are provided.
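The Fourier-transform view of the beam pattern can be checked numerically; below is a minimal sketch (array sizes and coefficients are hypothetical), assuming uniform element and tap indexing so that the 3-D FFT of the coefficient tensor evaluates the beam pattern on a normalised frequency grid:

```python
import numpy as np

# Hypothetical toy: a 2-D array with FIR taps has coefficients w[m, n, k]
# (spatial indices m, n; temporal tap index k), and its beam pattern is the
# 3-D Fourier transform of that tensor.
rng = np.random.default_rng(0)
M, N, K = 4, 4, 8                       # sensors per dimension, filter taps
w = rng.standard_normal((M, N, K))      # beamformer coefficients

def beam_pattern(w, w1, w2, wt):
    """Direct sum: P = sum_{m,n,k} w[m,n,k] * exp(-j(m*w1 + n*w2 + k*wt))."""
    m, n, k = np.meshgrid(np.arange(M), np.arange(N), np.arange(K),
                          indexing="ij")
    return np.sum(w * np.exp(-1j * (m * w1 + n * w2 + k * wt)))

# The 3-D FFT evaluates the same sum on the grid (2*pi*p/M, 2*pi*q/N, 2*pi*r/K),
# which is why frequency-invariance constraints can be imposed in that domain.
P = np.fft.fftn(w)
p, q, r = 1, 2, 3
direct = beam_pattern(w, 2 * np.pi * p / M, 2 * np.pi * q / N,
                      2 * np.pi * r / K)
assert np.isclose(P[p, q, r], direct)
```

The same identity extends to the 4-D case for 3-D arrays, with one more spatial index.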


Asilomar Conference on Signals, Systems and Computers | 1999

An efficient scheme for broadband adaptive beamforming

Stephan Weiss; Robert W. Stewart; Marion Schabert; Ian K. Proudler

This paper introduces an oversampled subband approach to linearly constrained minimum variance adaptive broadband beamforming. The method is motivated by the considerable reduction in computation over a fullband implementation, whose computational complexity becomes large when beamformers with high spatial and spectral resolution are required. We present the proposed subband adaptive beamformer structure, discuss its advantages and limitations, and comment on the correct projection of the constraints in the subband domain. In a simulation, the proposed subband structure is compared to a fullband adaptive beamformer, highlighting the benefit of our method.
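For reference, the underlying narrowband LCMV weight computation, w = R^{-1}C(C^H R^{-1}C)^{-1}f, can be sketched as below (a toy with hypothetical dimensions and data, not the authors' subband structure; in the subband approach such a solve runs independently in each decimated subband at a lower rate):

```python
import numpy as np

# Hypothetical toy LCMV solve for one band: minimise w^H R w subject to
# C^H w = f (here a single distortionless constraint toward broadside).
rng = np.random.default_rng(1)
M = 6                                    # number of sensors
X = rng.standard_normal((M, 500)) + 1j * rng.standard_normal((M, 500))
R = X @ X.conj().T / X.shape[1]          # sample covariance of snapshots
C = np.ones((M, 1), dtype=complex)       # constraint matrix (broadside steering)
f = np.array([[1.0 + 0j]])               # desired response

Ri_C = np.linalg.solve(R, C)             # R^{-1} C without explicit inversion
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# The distortionless constraint C^H w = f holds exactly:
assert np.allclose(C.conj().T @ w, f)
```

Running this per subband on decimated data is where the computational saving over a long fullband adaptive filter comes from.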


IEEE Transactions on Signal Processing | 2006

Exploitation of source nonstationarity in underdetermined blind source separation with advanced clustering techniques

Yuhui Luo; Wenwu Wang; Jonathon A. Chambers; Sangarapillai Lambotharan; Ian K. Proudler

The problem of blind source separation (BSS) is investigated. Following the assumption that the time-frequency (TF) distributions of the input sources do not overlap, quadratic TF representation is used to exploit the sparsity of the statistically nonstationary sources. However, separation performance is shown to be limited by the selection of a certain threshold in classifying the eigenvectors of the TF matrices drawn from the observation mixtures. Two methods are, therefore, proposed based on recently introduced advanced clustering techniques, namely Gap statistics and self-splitting competitive learning (SSCL), to mitigate the problem of eigenvector classification. The novel integration of these two approaches successfully overcomes the problem of artificial sources induced by insufficient knowledge of the threshold and enables automatic determination of the number of active sources over the observation. The separation performance is thereby greatly improved. Practical consequences of violating the TF orthogonality assumption in the current approach are also studied, which motivates the proposal of a new solution robust to violation of orthogonality. In this new method, the TF plane is partitioned into appropriate blocks and source separation is thereby carried out in a block-by-block manner. Numerical experiments with linear chirp signals and Gaussian minimum shift keying (GMSK) signals are included which support the improved performance of the proposed approaches.
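The eigenvector-classification step can be illustrated with a toy substitute (plain spherical k-means on seeded synthetic data, standing in for the Gap statistics / SSCL techniques of the paper; the mixing matrix and noise level are hypothetical): eigenvectors drawn from single-source TF points scatter around the mixing-matrix columns, so clustering them recovers the columns even in the underdetermined two-mixtures, three-sources case.

```python
import numpy as np

# Hypothetical 2x3 (underdetermined) mixing matrix, columns normalised.
rng = np.random.default_rng(2)
A = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
A /= np.linalg.norm(A, axis=0)

# Simulated principal eigenvectors: noisy copies of the mixing columns.
idx = np.tile([0, 1, 2], 100)
V = A[:, idx] + 0.02 * rng.standard_normal((2, 300))
V /= np.linalg.norm(V, axis=0)

def cluster_directions(V, k, iters=20):
    """Spherical k-means on directions, sign-ambiguous (uses |v^T c|)."""
    C = V[:, :k].copy()                  # crude init from first k points
    for _ in range(iters):
        labels = np.argmax(np.abs(C.T @ V), axis=0)
        for j in range(k):
            pts = V[:, labels == j]
            if pts.size:
                # Dominant direction of the cluster via SVD.
                C[:, j] = np.linalg.svd(pts)[0][:, 0]
    return C

C = cluster_directions(V, 3)
# Each true mixing column is matched (up to sign) by some centroid.
match = np.max(np.abs(C.T @ A), axis=0)
assert np.all(match > 0.99)
```

The paper's clustering machinery additionally estimates the number of clusters automatically, which this fixed-k toy does not attempt.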


International Conference on Digital Signal Processing | 2002

Comparing efficient broadband beamforming architectures and their performance trade-offs

Stephan Weiss; Ian K. Proudler

In this paper, we evaluate efficient implementations of a broadband beamforming structure that projects the data onto subspaces defined by the principal components of the array data. This optimum but computationally expensive approach is approximated in the frequency domain by processing in independent frequency bins. The latter is computationally optimal but suffers from spectral leakage. We show that this problem persists even if the frequency resolution is increased, and that the worst-case performance depends only on the available degrees of freedom. Further, an oversampled subband scheme is proposed, which sacrifices some computational efficiency but has a considerably improved and controllable worst-case performance.


IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 1996

Formal derivation of a systolic array for recursive least squares estimation

Ian K. Proudler; John G. McWhirter; Marc Moonen; Gerben Hekstra

A systolic array for recursive least squares estimation by inverse updates is derived by means of algorithmic engineering. The derivation of this systolic array is highly nontrivial due to the presence of data contra-flow and feedback loops in the underlying signal flow graph. This would normally prohibit pipelined processing. However, it is shown that suitable delays may be introduced into the signal flow graph by performing a simple algorithmic transformation which compensates for the interference of crossing data flows. The pipelined systolic array is then obtained by retiming the signal flow graph and applying the cut theorem.


International Conference on Acoustics, Speech, and Signal Processing | 2005

An instrumental variable method for adaptive feedback cancellation in hearing aids

Ann Spriet; Ian K. Proudler; Marc Moonen; Jan Wouters

We propose an instrumental variable method for adaptive feedback cancellation (IV-AFC) in hearing aids that is based on the autoregressive modelling of the desired signal. The IV-AFC offers better feedback suppression for spectrally colored signals than the standard continuous adaptation feedback cancellers. In contrast to a previously proposed prediction error method based feedback canceller, the IV-AFC does not suffer from stability problems when the adaptive feedback canceller is highly time-varying.
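The instrumental-variable principle behind the IV-AFC can be shown in a generic errors-in-variables toy (hypothetical signals, not a hearing-aid simulation): least squares on a noise-corrupted regressor is biased, while an instrument correlated with the clean regressor but uncorrelated with the noise is not.

```python
import numpy as np

# Hypothetical scalar identification problem y = a * x_true + v, where only
# a noisy version x = x_true + e of the regressor is observed.
rng = np.random.default_rng(3)
n = 200_000
a_true = 0.8
z = rng.standard_normal(n)               # instrument (clean driving signal)
x_true = z + 0.5 * rng.standard_normal(n)
e = rng.standard_normal(n)               # regressor noise
x = x_true + e                           # observed regressor
y = a_true * x_true + 0.1 * rng.standard_normal(n)

# Least squares is biased toward zero by var(e):
a_ls = (x @ y) / (x @ x)
# The instrumental-variable estimate is consistent, since E[z*e] = 0:
a_iv = (z @ y) / (z @ x)

assert abs(a_iv - a_true) < 0.02
assert abs(a_ls - a_true) > 0.2
```

The IV-AFC applies this idea vectorially, with instruments built from autoregressive modelling of the desired signal.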


IEEE Transactions on Signal Processing | 2003

The KaGE RLS algorithm

Ian D. Skidmore; Ian K. Proudler

A new fast recursive least squares (RLS) algorithm, the Kalman gain estimator (KaGE), is introduced. By making use of RLS interpolation as well as prediction, the algorithm generates the transversal filter weights without suffering the poor numerical attributes of the fast transversal filter (FTF) algorithm. The Kalman gain vector is generated at each time step in terms of interpolation residuals. The interpolation residuals are calculated in an order-recursive manner. For an Nth-order problem, the procedure requires O(N log2 N) operations per iteration. This is achieved via a divide-and-conquer approach. Computer simulations suggest that the new algorithm is numerically robust, running successfully for many millions of iterations.
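For contrast, conventional O(N^2) RLS with an explicitly propagated Kalman gain can be sketched as below (a hypothetical noise-free system-identification toy; KaGE instead assembles the gain from interpolation residuals at O(N log2 N) cost per iteration):

```python
import numpy as np

# Hypothetical toy: identify an unknown length-N FIR system h with RLS.
rng = np.random.default_rng(4)
N = 8
h = rng.standard_normal(N)               # unknown system
lam = 0.999                              # forgetting factor
P = 1e3 * np.eye(N)                      # inverse-correlation matrix estimate
w = np.zeros(N)                          # adaptive filter weights

x = rng.standard_normal(2000)
for n in range(N, len(x)):
    u = x[n - N:n][::-1]                 # regressor, most recent sample first
    d = h @ u                            # desired signal (noise-free here)
    k = P @ u / (lam + u @ P @ u)        # Kalman gain vector
    e = d - w @ u                        # a priori error
    w = w + k * e                        # weight update
    P = (P - np.outer(k, u @ P)) / lam   # Riccati-style covariance update

assert np.allclose(w, h, atol=1e-4)      # weights converge to the system
```

The two matrix-vector products per iteration are the O(N^2) bottleneck that fast RLS algorithms such as FTF and KaGE avoid.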


IET Signal Processing | 2012

Polynomial matrix QR decomposition for the decoding of frequency selective multiple-input multiple-output communication channels

Joanne A. Foster; John G. McWhirter; Sangarapillai Lambotharan; Ian K. Proudler; Martin R. Davies; Jonathon A. Chambers

This study proposes a new technique for communicating over multiple-input multiple-output (MIMO) frequency selective channels. This approach operates by calculating the QR decomposition of the polynomial channel matrix at the receiver on the basis of channel state information, which in this work is assumed to be perfectly known. This then enables the frequency selective MIMO system to be transformed into a set of frequency selective single-input single-output systems without altering the statistical properties of the receiver noise, which can then be individually equalised. A like-for-like comparison with the orthogonal frequency division multiplexing scheme, which is typically used to communicate over channels of this form, is provided. The polynomial matrix system is shown to achieve improved performance in terms of average bit error rate results, as a consequence of time-domain symbol decoding.
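The decoding idea is easiest to see in the frequency-flat (scalar-matrix) special case, sketched here with hypothetical dimensions and noise-free QPSK symbols; the paper's contribution is the extension to polynomial channel matrices. QR-decompose the channel, rotate the receive vector by Q^H (which leaves noise statistics unchanged, since Q is unitary), then detect symbols by back-substitution through the triangular R:

```python
import numpy as np

# Hypothetical 4x4 flat MIMO channel with perfect CSI at the receiver.
rng = np.random.default_rng(5)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
s = rng.choice([-1.0, 1.0], 4) + 1j * rng.choice([-1.0, 1.0], 4)  # QPSK
y = H @ s                                # noise-free receive vector

Q, R = np.linalg.qr(H)
z = Q.conj().T @ y                       # unitary rotation: z = R s

s_hat = np.zeros(4, dtype=complex)
for i in range(3, -1, -1):               # back-substitution with slicing
    u = (z[i] - R[i, i + 1:] @ s_hat[i + 1:]) / R[i, i]
    s_hat[i] = np.sign(u.real) + 1j * np.sign(u.imag)   # QPSK decision

assert np.allclose(s_hat, s)
```

For a frequency-selective channel, H becomes a matrix of FIR polynomials, and the paper's polynomial QR decomposition plays the role of `np.linalg.qr` above.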


IEEE Transactions on Signal Processing | 2001

A conceptual framework for consistency, conditioning, and stability issues in signal processing

James R. Bunch; R.C. Le Borne; Ian K. Proudler

The techniques employed for analyzing algorithms in numerical linear algebra have evolved significantly since the 1940s. Significant in this evolution is the partitioning of the terminology into categories in which analyses involving infinite precision effects are distinguished from analyses involving finite precision effects. Although the structure of algorithms in signal processing prevents the direct application of typical analysis techniques employed in numerical linear algebra, much can be gained in signal processing from an assimilation of the terminology found there. This paper addresses the need for a conceptual framework for discussing the computed solution from an algorithm by focusing on the distinction between a perturbation analysis of a problem or a method of solution and the stability analysis of an algorithm. A consistent approach to defining these concepts facilitates the task of assessing the numerical quality of a computed solution. This paper discusses numerical analysis techniques for signal processing algorithms and suggests terminology that is supportive of a centralized framework for distinguishing between errors propagated by the nature of the problem and errors propagated through the use of finite-precision arithmetic. By this, we mean that the numerical stability analysis of a signal processing algorithm can be simplified and the meaning of such an analysis made unequivocal.


Proceedings of SPIE | 2009

Sub-pixel super-resolution by decoding frames from a reconfigurable coded-aperture camera: theory and experimental verification

Geoffrey Derek De Villiers; Neil T. Gordon; Douglas A. Payne; Ian K. Proudler; Ian D. Skidmore; Kevin D. Ridley; Charlotte R. Bennett; Rebecca Anne Wilson; Christopher W. Slinger

In a previous paper we presented initial results for sub-detector-pixel imaging in the mid-wave infra-red (MWIR) using an imager equipped with a coded aperture based on a re-configurable MOEMS micro-shutter. It was shown in laboratory experiments that sub-pixel resolution is achievable via this route. The purpose of the current paper is to provide detail on the reconstruction method and to discuss some challenges which arise when imaging real-world scenes. The number of different mask patterns required to achieve a given degree of super-resolution is also discussed. New results are presented to support the theory.
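A 1-D linear-algebra caricature of multi-mask decoding (hypothetical sizes and a simple shifting-shutter pattern, not the actual MOEMS masks) shows how several low-resolution frames jointly determine the scene at sub-pixel scale:

```python
import numpy as np

# Hypothetical toy: 16 hi-res scene elements imaged onto 4 detector pixels.
hi, lo = 16, 4
n_sub = hi // lo                         # super-resolution factor (4x)
rng = np.random.default_rng(6)
x = rng.standard_normal(hi)              # unknown hi-res scene (1-D for clarity)

# Detector binning: each pixel averages n_sub adjacent scene elements.
D = np.kron(np.eye(lo), np.ones(n_sub)) / n_sub

rows, meas = [], []
for k in range(n_sub):                   # one shutter pattern per sub-pixel shift
    m = np.zeros(hi)
    m[k::n_sub] = 1.0                    # open every n_sub-th micro-element
    A_k = D * m                          # mask the scene, then bin onto detector
    rows.append(A_k)
    meas.append(A_k @ x)                 # one low-res frame per mask

A = np.vstack(rows)                      # stacked 16x16 linear system
y = np.concatenate(meas)
x_hat = np.linalg.solve(A, y)            # decode the frames jointly
assert np.allclose(x_hat, x)             # hi-res scene recovered exactly
```

Real reconstructions are noisy, 2-D, and ill-conditioned, which is why the number and design of the mask patterns matter; the toy only shows the counting argument.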

Collaboration


Dive into Ian K. Proudler's collaboration.

Top Co-Authors

Stephan Weiss

University of Strathclyde

James R. Bunch

University of California


Ann Spriet

Katholieke Universiteit Leuven


Marc Moonen

Katholieke Universiteit Leuven
