
Publication


Featured research published by Christine Evers.


International Workshop on Acoustic Signal Enhancement | 2014

Multiple source localisation in the spherical harmonic domain

Christine Evers; Alastair H. Moore; Patrick A. Naylor

Spherical arrays facilitate processing and analysis of sound fields with the potential for high resolution in three dimensions in the spherical harmonic domain. Using the captured sound field, robust source localisation systems are required for speech acquisition, speaker tracking and environment mapping. Source localisation becomes a challenging problem in reverberant environments and under noisy conditions, leading to potentially poor performance in cocktail party scenarios. This paper evaluates the performance of a low-complexity localisation approach using spherical harmonics in reverberant environments for multiple speakers. Eigen-beams are used to estimate pseudo-intensity vectors pointing in the direction of the sound intensity. The paper proposes a clustering approach in which the intensity vectors due to active sound sources and strong reflections are grouped, yielding estimates of the source directions in azimuth and inclination.
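As a rough illustration of the pseudo-intensity approach described above, the sketch below computes per-bin pseudo-intensity vectors from zeroth- and first-order eigenbeam signals and clusters their unit vectors to obtain azimuth and inclination estimates. It assumes the eigenbeam (B-format-like) signals are already available; the function names, the energy threshold and the use of k-means clustering are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.signal import stft
from scipy.cluster.vq import kmeans2

def pseudo_intensity_vectors(p00, p1x, p1y, p1z, fs, nperseg=512):
    """Per time-frequency-bin pseudo-intensity vectors from first-order eigenbeams."""
    W, X, Y, Z = (stft(p, fs, nperseg=nperseg)[2] for p in (p00, p1x, p1y, p1z))
    # Real part of conj(pressure) times the dipole (particle-velocity-like) channels.
    piv = 0.5 * np.real(np.conj(W)[..., None] * np.stack([X, Y, Z], axis=-1))
    piv = piv.reshape(-1, 3)
    norms = np.linalg.norm(piv, axis=1)
    piv = piv[norms > np.percentile(norms, 75)]      # keep only energetic bins
    return piv / np.linalg.norm(piv, axis=1, keepdims=True)

def localise(unit_pivs, n_sources=2):
    """Cluster unit PIVs on the sphere and return (azimuth, inclination) per cluster."""
    centroids, _ = kmeans2(unit_pivs, n_sources, minit='points')
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    azimuth = np.arctan2(centroids[:, 1], centroids[:, 0])
    inclination = np.arccos(np.clip(centroids[:, 2], -1.0, 1.0))
    return azimuth, inclination
```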


European Signal Processing Conference | 2015

Direction of arrival estimation using pseudo-intensity vectors with direct-path dominance test

Alastair H. Moore; Christine Evers; Patrick A. Naylor; David L. Alon; Boaz Rafaely

The accuracy of direction of arrival estimation tends to degrade under reverberant conditions due to the presence of reflected signal components which are correlated with the direct path. The recently proposed direct-path dominance test provides a means of identifying time-frequency regions in which a single signal path is dominant. By analysing only these regions, it was shown that the accuracy of the FS-MUSIC algorithm could be significantly improved. However, for real-time implementation a less computationally demanding localisation algorithm would be preferable. In the present contribution, we investigate the direct-path dominance test as a preprocessing step to pseudo-intensity vector-based localisation. A novel formulation of the pseudo-intensity vector is proposed which further exploits the direct-path dominance test and leads to improved localisation performance.
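A minimal sketch of the direct-path dominance test as a preselection step is given below: a time-frequency region passes if its local spatial correlation matrix is effectively rank one, judged by the ratio of its two largest singular values. The input layout, window sizes and threshold are assumptions for illustration; only the bins passing the mask would then be handed to a pseudo-intensity-based estimator such as the one sketched earlier.

```python
import numpy as np

def dpd_mask(sh_stft, freq_win=3, time_win=3, ratio_thresh=10.0):
    """Boolean mask over (freq, frame) bins whose local spatial correlation matrix
    is effectively rank one, i.e. a single propagation path dominates.
    sh_stft is assumed to have shape (n_sh_channels, n_freq, n_frames)."""
    n_ch, n_freq, n_frames = sh_stft.shape
    mask = np.zeros((n_freq, n_frames), dtype=bool)
    for f in range(n_freq - freq_win + 1):
        for t in range(n_frames - time_win + 1):
            block = sh_stft[:, f:f + freq_win, t:t + time_win].reshape(n_ch, -1)
            # Spatial correlation matrix averaged over the local TF region.
            R = block @ block.conj().T / block.shape[1]
            s = np.linalg.svd(R, compute_uv=False)
            if s[0] / max(s[1], 1e-12) > ratio_thresh:
                mask[f, t] = True
    return mask
```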


International Conference on Acoustics, Speech, and Signal Processing | 2016

Acoustic simultaneous localization and mapping (A-SLAM) of a moving microphone array and its surrounding speakers

Christine Evers; Alastair H. Moore; Patrick A. Naylor

Acoustic scene mapping creates a representation of the positions of audio sources, such as talkers, within the environment surrounding a microphone array. By allowing the array to move, the acoustic scene can be explored in order to improve the map. Furthermore, the spatial diversity of the kinematic array allows for estimation of the source-sensor distance in scenarios where only source directions of arrival are measured. As sound source localization is performed relative to the array position, mapping of acoustic sources requires knowledge of the absolute position of the microphone array in the room. If the array is moving, its absolute position is unknown in practice. Hence, Simultaneous Localization and Mapping (SLAM) is required in order to localize the microphone array and map the surrounding sound sources. In realistic environments, microphone arrays receive a convolutive mixture of direct-path speech signals, noise and reflections due to reverberation. A key challenge of acoustic SLAM (A-SLAM) is robustness against reverberant clutter measurements and missing source detections. This paper proposes a novel bearing-only A-SLAM approach using a Single-Cluster Probability Hypothesis Density filter. Results demonstrate convergence to accurate estimates of the array trajectory and source positions.
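The Single-Cluster PHD filter itself is too involved for a short sketch, but the bearing-only geometry that makes the mapping possible is compact: a DOA measured in the array's local frame, together with a hypothesised array pose, constrains the source to a ray in world coordinates, and two such rays from different poses resolve the range. The planar pose parameterisation and function names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def bearing_to_world_ray(array_pose, local_azimuth):
    """World-frame origin and unit direction of the half-line on which a source
    must lie, given a local-frame azimuth measurement and a planar pose (x, y, heading)."""
    x, y, heading = array_pose
    world_azimuth = heading + local_azimuth
    direction = np.array([np.cos(world_azimuth), np.sin(world_azimuth)])
    return np.array([x, y]), direction

def triangulate(pose_a, az_a, pose_b, az_b):
    """Least-squares intersection of two bearing rays from different array poses;
    the spatial diversity of the moving array is what resolves the range."""
    o1, d1 = bearing_to_world_ray(pose_a, az_a)
    o2, d2 = bearing_to_world_ray(pose_b, az_b)
    A = np.stack([d1, -d2], axis=1)
    t, _, _, _ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    return o1 + t[0] * d1
```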


IEEE Transactions on Audio, Speech, and Language Processing | 2017

Direction of Arrival Estimation in the Spherical Harmonic Domain Using Subspace Pseudointensity Vectors

Alastair H. Moore; Christine Evers; Patrick A. Naylor

Direction of arrival (DOA) estimation is a fundamental problem in acoustic signal processing. It is used in a diverse range of applications, including spatial filtering, speech dereverberation, source separation and diarization. Intensity vector-based DOA estimation is attractive, especially for spherical sensor arrays, because it is computationally efficient. Two such methods are presented that operate on a spherical harmonic decomposition of a sound field observed using a spherical microphone array. The first uses pseudointensity vectors (PIVs) and works well in acoustic environments where only one sound source is active at any time. The second uses subspace pseudointensity vectors (SSPIVs) and is targeted at environments where multiple simultaneous sources and significant levels of reverberation make the problem more challenging. Analytical models are used to quantify the effects of an interfering source, diffuse noise, and sensor noise on PIVs and SSPIVs. The accuracy of DOA estimation using PIVs and SSPIVs is compared against the state of the art in simulations including realistic reverberation and noise for single and multiple, stationary and moving sources. Finally, robust performance of the proposed methods is demonstrated by using speech recordings in a real acoustic environment.
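The sketch below contrasts, for a single time-frequency region, a plain pseudointensity vector with a subspace variant that forms the intensity vector from the principal eigenvector of the 4x4 spatial correlation matrix of the zeroth- and first-order coefficients. It is a minimal illustration of the subspace idea under assumed input shapes and names, not the paper's reference implementation.

```python
import numpy as np

def piv(sh_block):
    """Average pseudointensity vector over a TF region; sh_block has shape (4, n_bins)."""
    W, X, Y, Z = sh_block
    return 0.5 * np.real(np.conj(W)[:, None] * np.stack([X, Y, Z], axis=1)).mean(axis=0)

def sspiv(sh_block):
    """Subspace PIV: form the intensity vector from the principal eigenvector of the
    4x4 spatial correlation matrix, which suppresses diffuse and interfering energy."""
    R = sh_block @ sh_block.conj().T / sh_block.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)
    u = eigvecs[:, -1]                       # dominant (signal-subspace) direction
    return 0.5 * np.real(np.conj(u[0]) * u[1:4])

def doa(vec):
    """Convert an intensity vector to (azimuth, inclination)."""
    vec = vec / np.linalg.norm(vec)
    return np.arctan2(vec[1], vec[0]), np.arccos(np.clip(vec[2], -1.0, 1.0))
```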


International Conference on Digital Signal Processing | 2015

Bearing-only acoustic tracking of moving speakers for robot audition

Christine Evers; Alastair H. Moore; Patrick A. Naylor; Jonathan Sheaffer; Boaz Rafaely

This paper focuses on speaker tracking in robot audition for human-robot interaction. Using only acoustic signals, speaker tracking in enclosed spaces is subject to missing detections and spurious clutter measurements due to speech inactivity, reverberation and interference. Furthermore, many acoustic localization approaches estimate speaker direction, hence providing bearing-only measurements without range information. This paper presents a probability hypothesis density (PHD) tracker that augments the bearing-only speaker directions of arrival with a cloud of range hypotheses at speaker initiation and propagates the random variates through time. Furthermore, due to their formulation, PHD filters explicitly model, and hence provide robustness against, clutter and missing detections. The approach is verified using experimental results.
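Below is a small sketch of the initiation and prediction fragment described above: a bearing-only detection is augmented with a cloud of range hypotheses, and the resulting particles are propagated with a near-constant-velocity model. The full PHD update, clutter model and weight bookkeeping are omitted; the range prior, noise levels and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_particles(azimuth, n=500, r_min=0.5, r_max=5.0):
    """Particles [x, y, vx, vy] spread along the measured bearing; range is unobserved."""
    ranges = rng.uniform(r_min, r_max, n)                        # range hypotheses
    bearings = azimuth + rng.normal(0.0, np.radians(2.0), n)     # bearing noise
    xy = np.stack([ranges * np.cos(bearings), ranges * np.sin(bearings)], axis=1)
    vel = rng.normal(0.0, 0.2, (n, 2))                           # unknown initial velocity
    return np.hstack([xy, vel])

def predict(particles, dt=0.1, accel_std=0.5):
    """Near-constant-velocity propagation of the random variates through time."""
    particles = particles.copy()
    particles[:, :2] += dt * particles[:, 2:]
    particles[:, 2:] += rng.normal(0.0, accel_std * dt, particles[:, 2:].shape)
    return particles
```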


2007 IEEE/SP 14th Workshop on Statistical Signal Processing | 2007

Block-Based TVAR Models for Single-Channel Blind Dereverberation of Speech from a Moving Speaker

James R. Hopgood; Christine Evers

In reverberant environments, a moving speaker yields a dynamically changing source-sensor geometry, giving rise to a spatially varying acoustic impulse response (AIR) between the source and sensor. This leads to a time-varying convolutional relationship between the source signal and the observations, and thus to spectral colouration of the received signal. It is therefore desirable to reduce the effect of reverberation. In this paper, a model-based approach is proposed for single-channel blind dereverberation of speech from a moving speaker acquired in an acoustic environment. The sound source is modelled by a block-based time-varying AR (TVAR) process, and the channel by a linear time-varying all-pole filter. In each case, the AR parameters are represented as a linear combination of known basis functions with unknown weightings. The speech model captures local nonstationarity while accounting for the global nonstationary characteristics inherent in long segments of speech. As an initial step towards single-channel blind dereverberation of real speech signals, this paper presents simulation results on synthetic data to demonstrate the developed algorithm.
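A minimal sketch of the block-based TVAR source model is shown below: each AR coefficient is a linear combination of known basis functions (here, Legendre polynomials over the block) with unknown weights, and one realisation of the process is simulated. The basis choice, model order and weight values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def tvar_coefficients(weights, n_samples):
    """weights: (order, n_basis) -> AR coefficients a_p[n] of shape (order, n_samples)."""
    tau = np.linspace(-1.0, 1.0, n_samples)                      # normalised block time
    basis = np.stack([legendre.Legendre.basis(k)(tau)
                      for k in range(weights.shape[1])])         # (n_basis, n_samples)
    return weights @ basis

def simulate_tvar(weights, n_samples, noise_std=1.0, seed=0):
    """Generate x[n] = sum_p a_p[n] * x[n-p] + e[n] over one block."""
    rng = np.random.default_rng(seed)
    order = weights.shape[0]
    a = tvar_coefficients(weights, n_samples)
    x = np.zeros(n_samples)
    for n in range(order, n_samples):
        x[n] = a[:, n] @ x[n - order:n][::-1] + rng.normal(0.0, noise_std)
    return x

# e.g. a second-order TVAR whose poles drift slowly across the block
w = np.array([[1.4, -0.1, 0.0],
              [-0.7, 0.05, 0.0]])
x = simulate_tvar(w, 4000)
```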


Signal Processing Systems | 2011

Multichannel Online Blind Speech Dereverberation with Marginalization of Static Observation Parameters in a Rao-Blackwellized Particle Filter

Christine Evers; James R. Hopgood

Room reverberation leads to reduced intelligibility and spectral coloration of audio signals. Enhancement of acoustic signals is thus crucial for high-quality audio and scene analysis applications. Multiple sensors can be used to exploit statistical evidence from multiple observations of the same event to improve enhancement. Whilst traditional beamforming techniques suffer from reverberant reflections interfering with the beam path, other approaches to dereverberation often require at least partial knowledge of the room impulse response, which is not available in practice, or rely on inverse filtering of a channel estimate to obtain a clean speech estimate, resulting in difficulties with non-minimum-phase acoustic impulse responses. This paper proposes a multi-sensor approach to blind dereverberation in which both the source signal and the acoustic channel are estimated directly from the distorted observations using their optimal estimators. The remaining model parameters are sampled from hypothesis distributions using a particle filter, thus facilitating real-time dereverberation. This approach was previously applied successfully to single-sensor blind dereverberation. In this paper, the single-channel approach is extended to multiple sensors. Performance improvements due to the use of multiple sensors are demonstrated on synthetic and baseband speech examples.
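The sketch below illustrates the Rao-Blackwellized structure on a deliberately tiny toy model: a per-particle Kalman filter marginalises the conditionally linear-Gaussian state, while particles and their marginal-likelihood weights cover the remaining static parameter. It is far simpler than the paper's speech and channel model; the scalar AR(1)-plus-noise system, parameter values and names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbpf_static_param(y, n_particles=300, q=0.1, r=0.05):
    """Track x_t = phi*x_{t-1} + w_t, y_t = x_t + v_t with unknown static phi."""
    phi = rng.uniform(0.0, 1.0, n_particles)   # particles over the static parameter
    m = np.zeros(n_particles)                  # per-particle Kalman mean of x_t
    P = np.ones(n_particles)                   # per-particle Kalman variance of x_t
    logw = np.zeros(n_particles)
    for y_t in y:
        # Kalman prediction and update, conditioned on each particle's phi.
        m_pred = phi * m
        P_pred = phi ** 2 * P + q
        S = P_pred + r                         # innovation variance
        K = P_pred / S
        innov = y_t - m_pred
        m = m_pred + K * innov
        P = (1.0 - K) * P_pred
        # Weight by the marginal likelihood p(y_t | phi, y_{1:t-1}).
        logw += -0.5 * (np.log(2.0 * np.pi * S) + innov ** 2 / S)
        # Resample when the effective sample size collapses.
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            phi, m, P = phi[idx], m[idx], P[idx]
            logw = np.zeros(n_particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w * phi), np.sum(w * m)      # posterior means of phi and x_T

# Synthetic data with true phi = 0.8; the estimate should land near that value.
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.normal(0.0, np.sqrt(0.1))
y = x + rng.normal(0.0, np.sqrt(0.05), 500)
print(rbpf_static_param(y))
```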


International Symposium on Circuits and Systems | 2008

Blind speech dereverberation using batch and sequential Monte Carlo methods

Christine Evers; James R. Hopgood; Judith Bell

Reverberation and noise cause significant deterioration of the quality and intelligibility of signals recorded in acoustic environments. Bayesian dereverberation infers knowledge about the system by exploiting the statistical properties of speech and the acoustic channel. In Bayesian frameworks, the signal can be processed either sequentially using online methods or as a batch using offline methods. This paper compares the two approaches for blind speech dereverberation by means of a previously proposed batch approach and a novel sequential approach. Results show that, while each method has its own advantages, online processing leads to a more flexible solution.
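A toy sketch of the batch-versus-sequential distinction is given below for a single unknown gain: the batch pass weights prior samples with the whole-record likelihood at once, while the sequential pass folds in one observation at a time and could therefore run online. Resampling and degeneracy handling, where sequential methods differ in practice, are deliberately omitted; the model and all values are illustrative assumptions, not the paper's dereverberation model.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.normal(0.0, 1.0, 200)                 # known dry excitation (toy stand-in)
y = 0.7 * s + rng.normal(0.0, 0.3, 200)       # observations with unknown gain 0.7
r = 0.09                                      # observation noise variance

g = rng.uniform(0.0, 2.0, 2000)               # samples from a prior over the gain

# Batch (offline): weight every sample with the full-record likelihood at once.
logw_batch = -0.5 * np.sum((y[None, :] - g[:, None] * s[None, :]) ** 2, axis=1) / r
w_batch = np.exp(logw_batch - logw_batch.max())
w_batch /= w_batch.sum()

# Sequential (online): fold in one observation at a time; the running weights
# after the last observation target the same posterior as the batch pass.
logw_seq = np.zeros_like(g)
for y_t, s_t in zip(y, s):
    logw_seq += -0.5 * (y_t - g * s_t) ** 2 / r
w_seq = np.exp(logw_seq - logw_seq.max())
w_seq /= w_seq.sum()

print(np.sum(w_batch * g), np.sum(w_seq * g))  # both posterior means close to 0.7
```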


International Conference on Acoustics, Speech, and Signal Processing | 2008

Acoustic models for online blind source dereverberation using sequential Monte Carlo methods

Christine Evers; James R. Hopgood; Judith Bell

Reverberation and noise cause significant deterioration of the quality and intelligibility of signals recorded in acoustic environments. Noise is usually modeled as a common signal observed in the room, independent of the room acoustics. However, this simplistic model cannot necessarily capture the effects of separate noise sources at different locations in the room. This paper proposes a noise model that considers distinct noise sources whose individual acoustic impulse responses are separated into source-sensor-specific and common acoustical resonances. In addition to noise, the signal is distorted by reverberation. Using parametric models of the system, recursive expressions for the filtering distribution can be derived. Based on these results, a sequential Monte Carlo approach for online dereverberation and enhancement is proposed. Simulation results for speech are presented to verify the effectiveness of the model and method.
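A small sketch of the kind of noise model described above is given below: each noise source is filtered by its own source-sensor-specific response, and the resulting mixture then passes through a common all-pole resonance shared by all sources. The filter orders and coefficient values are illustrative assumptions, not those of the paper.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
n_samples = 16000

# Two distinct noise sources at different positions in the room.
noise_sources = [rng.normal(0.0, 1.0, n_samples) for _ in range(2)]
specific_firs = [np.array([1.0, 0.4, 0.1]),        # source-sensor-specific parts
                 np.array([0.8, -0.3, 0.05])]
common_ar = np.array([1.0, -1.2, 0.5])             # shared room resonance (all-pole)

# Source-specific filtering, then the common resonance applied to the mixture.
mixture = sum(lfilter(b, [1.0], n) for b, n in zip(specific_firs, noise_sources))
noise_at_sensor = lfilter([1.0], common_ar, mixture)
```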


Archive | 2010

Bayesian Single Channel Blind Dereverberation of Speech from a Moving Talker

James R. Hopgood; Christine Evers; Steven Fortune

This chapter discusses a model-based framework for single-channel blind dereverberation of speech, in which parametric models are used to represent both the unknown source and the unknown acoustic channel. The parameters of the entire model are estimated using the Bayesian paradigm, and an estimate of the source signal is found either by inverse filtering of the observed signal with the estimated channel coefficients, or directly within a sequential framework. Model-based approaches fundamentally rely on the availability of realistic and tractable models that reflect the underlying speech process and acoustic systems. The choice of these models is extremely important and is discussed in detail, with a focus on spatially varying room impulse responses. The mathematical framework and methodology for parameter estimation and dereverberation are also discussed. Examples of the proposed approaches are presented together with results.
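As a minimal illustration of the inverse-filtering option mentioned above: when the acoustic channel is modelled as an all-pole filter, an estimate of its denominator coefficients yields an FIR inverse, so the source estimate is obtained by a simple moving-average filtering of the observation. The toy signal and coefficients below are assumptions, and a perfect channel estimate is assumed so the recovery is exact.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
source = rng.normal(0.0, 1.0, 8000)                # stand-in for the dry source

a_channel = np.array([1.0, -0.9, 0.4])             # "true" all-pole channel
observed = lfilter([1.0], a_channel, source)       # reverberant observation

a_hat = np.array([1.0, -0.9, 0.4])                 # assume a perfect channel estimate
source_hat = lfilter(a_hat, [1.0], observed)       # FIR inverse filter

print(np.max(np.abs(source_hat - source)))         # ~0 with a perfect estimate
```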

Collaboration


Dive into Christine Evers's collaborations.

Top Co-Authors

Boaz Rafaely (Ben-Gurion University of the Negev)
Judith Bell (Heriot-Watt University)
Harald Haas (University of Edinburgh)
David L. Alon (Ben-Gurion University of the Negev)