Paul Kendrick
University of Salford
Publication
Featured research published by Paul Kendrick.
Journal of the Acoustical Society of America | 1998
Wing Law; Carl W. Hennige; F. J. Fry; Narendra T. Sanghvi; Fred Miller; Paul Kendrick; Stan DeMarta
A High Intensity Focused Ultrasound system is provided for treatment of focal disease. The preferred embodiment includes an intracavity probe having two active ultrasound radiating surfaces with different focal geometries. Selectively energizing the first surface focuses therapeutic energy at a first distance from the housing, while energizing the second surface focuses it nearer the housing. Preferably, the probe includes a thin, flexible, inelastic membrane which is rigidized with pressure to allow blunt manipulation of tissue. Methods of use of the probe are also provided, particularly for treatment of diseases of the prostate.
IEEE Signal Processing Letters | 2007
Yonggang Zhang; Jonathon A. Chambers; Saeid Sanei; Paul Kendrick; Trevor J. Cox
This letter proposes a new variable tap-length least-mean-square (LMS) algorithm for applications in which the unknown filter impulse response sequence has an exponential decay envelope. The algorithm is designed to minimize the mean-square deviation (MSD) between the optimal and adaptive filter weight vectors at each iteration. Simulation results show that the proposed algorithm has a faster convergence rate than the fixed tap-length LMS algorithm and is robust to the initial tap-length choice.
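The letter's exact MSD-minimising tap-length rule is not reproduced here; the sketch below illustrates the general fractional tap-length LMS idea it builds on, with illustrative constants and names (assumptions, not the published algorithm):

```python
import numpy as np

def ft_lms(x, d, n_init=8, n_max=64, delta=4, mu=0.01,
           alpha=0.002, leak=1e-4):
    """Fractional tap-length LMS (sketch, illustrative constants).

    The fractional tap length n_f grows when the newest `delta` taps
    reduce the squared error, and leaks down slowly otherwise, so the
    filter settles at a length matching a decaying impulse response.
    """
    w = np.zeros(n_max)                      # weights, zero-padded to n_max
    n_f = float(n_init)
    for k in range(n_max, len(x)):
        n = int(n_f)
        u = x[k - n:k][::-1]                 # most recent n input samples
        e_full = d[k] - w[:n] @ u            # error with all n taps
        e_short = d[k] - w[:n - delta] @ u[:n - delta]   # delta fewer taps
        w[:n] += mu * e_full * u             # standard LMS weight update
        n_f += alpha * (e_short**2 - e_full**2) - leak   # adapt tap length
        n_f = min(max(n_f, delta + 1), n_max)
    return w[:int(n_f)]
```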
Journal of the Acoustical Society of America | 2008
Paul Kendrick; Trevor J. Cox; Francis F. Li; Yonggang Zhang; Jonathon A. Chambers
This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.
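As a concrete illustration of the second method, here is a minimal sketch, under simplifying assumptions (a zero-mean Gaussian decay model and a grid search over candidate decay times; not the paper's exact estimator), of fitting an exponential decay to a free-decay segment by maximum likelihood:

```python
import numpy as np

def ml_decay_time(segment, fs, taus=np.linspace(0.05, 2.0, 200)):
    """ML fit of an exponential decay to a free-decay segment.

    Model: x[n] ~ N(0, sigma^2 * exp(-2 n / (tau fs))). For each candidate
    tau the ML sigma^2 has a closed form, so a 1-D grid search suffices.
    """
    x = np.asarray(segment, dtype=float)
    n = np.arange(len(x))
    best_tau, best_ll = taus[0], -np.inf
    for tau in taus:
        g = np.exp(-2.0 * n / (tau * fs))     # per-sample variance profile
        sigma2 = np.mean(x**2 / g)            # closed-form ML noise variance
        ll = -0.5 * np.sum(np.log(2 * np.pi * sigma2 * g) + x**2 / (sigma2 * g))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    rt60 = 60.0 * best_tau / 8.686            # energy decays 8.686/tau dB/s
    return best_tau, rt60
```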
international conference on acoustics, speech, and signal processing | 2007
Yonggang Zhang; Jonathon A. Chambers; Wenwu Wang; Paul Kendrick; Trevor J. Cox
A new variable step-size least-mean-square (VSSLMS) algorithm is presented in this paper for applications in which the desired response contains nonstationary noise with high variance. The step size of the proposed VSSLMS algorithm is controlled by the normalized square Euclidean norm of the averaged gradient vector, and the algorithm is henceforth referred to as the NSVSSLMS algorithm. As shown by the analysis and simulation results, the proposed algorithm has both a fast convergence rate and robustness to high-variance noise signals, and performs better than Greenberg's sum method, a robust algorithm for applications with nonstationary noise.
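A minimal sketch of a variable step-size LMS of this kind, assuming one plausible normalisation (the paper's exact rule may differ): averaging the gradient lets zero-mean, high-variance noise in the desired signal cancel out, so the step size stays small near convergence rather than being inflated by the noise.

```python
import numpy as np

def vss_lms(x, d, n_taps=32, rho=1.0, beta=0.99,
            mu_min=1e-4, mu_max=0.05, eps=1e-8):
    """Variable step-size LMS driven by the normalised squared norm of an
    averaged gradient vector (sketch; constants are illustrative).
    """
    w = np.zeros(n_taps)
    g_avg = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]                       # regressor vector
        e[k] = d[k] - w @ u
        g_avg = beta * g_avg + (1 - beta) * e[k] * u    # averaged gradient
        mu = rho * (g_avg @ g_avg) / (u @ u + eps)      # normalised sq. norm
        mu = np.clip(mu, mu_min, mu_max)
        w += mu * e[k] * u                              # LMS weight update
    return w, e
```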
international conference on acoustics, speech, and signal processing | 2006
Paul Kendrick; Trevor J. Cox; Yonggang Zhang; Jonathon A. Chambers; Francis F. Li
A new method, employing machine learning techniques and a modified low-frequency envelope spectrum estimator, for estimating important room acoustic parameters including reverberation time (RT) and early decay time (EDT) from received music signals has been developed. It overcomes drawbacks found in applying music signals directly to the envelope spectrum detector developed for the estimation of RT from speech signals. The octave-band music signal is first separated into sub-bands corresponding to notes on the equal temperament scale, and the level of each note is normalised before applying an envelope spectrum detector. An artificial neural network is then trained to map these envelope spectra onto RT or EDT. Significant improvements in estimation accuracy were found, and further investigations confirmed that the non-stationary nature of music envelopes is a major technical challenge hindering accurate parameter extraction from music; the proposed method to some extent circumvents this difficulty.
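A minimal sketch of the front end described above, assuming quarter-tone band edges, a fourth-order Butterworth filter, and a 64-bin envelope spectrum (all illustrative choices; the neural network stage is omitted):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def note_envelope_spectra(x, fs, midi_lo=48, midi_hi=72, n_bins=64):
    """Split a signal into equal-temperament note bands, normalise each
    note's Hilbert envelope, and take its low-frequency spectrum."""
    feats = []
    for m in range(midi_lo, midi_hi + 1):
        f0 = 440.0 * 2.0 ** ((m - 69) / 12.0)             # note centre freq.
        lo, hi = f0 * 2 ** (-1 / 24), f0 * 2 ** (1 / 24)  # quarter-tone edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                       # amplitude envelope
        env /= env.max() + 1e-12                          # normalise note level
        spec = np.abs(np.fft.rfft(env - env.mean()))
        feats.append(spec[:n_bins])                       # low-frequency bins
    return np.stack(feats)            # (n_notes, n_bins) features for an ANN
```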
quality of multimedia experience | 2016
Bruno Fazenda; Paul Kendrick; Trevor J. Cox; Francis F. Li; Iain Jackson
Technology to record sound, available in personal devices such as smartphones or video recording devices, is now ubiquitous. However, the production quality of the sound on this user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. Our interest lies in the causes of the poor recording, especially what happens between the sound source and the electronic signal emerging from the microphone, and in finding an automated method to warn the user of such problems. Typical problems, such as distortion, wind noise, microphone handling noise and frequency response, were tested. A perceptual model has been developed from subjective tests on the perceived quality of such errors and from data measured on a training dataset composed of various audio files. It is shown that perceived quality is associated with distortion and frequency response, with wind and handling noise being only slightly less important. In addition, the contextual content of the audio sample was found to modulate perceived quality at levels similar to degradations such as wind noise, rendering those introduced by handling noise negligible.
Journal of the Acoustical Society of America | 2014
Iain Jackson; Paul Kendrick; Trevor J. Cox; Bruno Fazenda; Francis F. Li
Wind can induce noise on microphones, causing problems for users of hearing aids and for those making recordings outdoors. Perceptual tests in the laboratory and via the Internet were carried out to understand which features of wind noise are important to the perceived audio quality of speech recordings. The average A-weighted sound pressure level of the wind noise was found to dominate the perceived degradation of quality, while gustiness was mostly unimportant. Large degradations in quality were observed when the signal-to-noise ratio was lower than about 15 dB. A model to allow an estimation of wind noise level was developed using an ensemble of decision trees. The model was designed to work with a single microphone in the presence of a variety of foreground sounds. The model assigned each sample to one of four classes of wind noise: none, low, medium, or high. Wind-free examples were accurately identified in 79% of cases. For the three classes with noise present, on average 93% of samples were correctly assigned. A second ensemble of decision trees was used to estimate the signal-to-noise ratio and thereby infer the perceived degradation caused by wind noise.
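The two-stage structure can be sketched with off-the-shelf scikit-learn ensembles; the feature set, hyperparameters, and variable names below are assumptions for illustration, not the paper's:

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Stage 1: an ensemble of decision trees maps single-microphone acoustic
# features to one of four wind-noise classes.
wind_class = RandomForestClassifier(n_estimators=200, random_state=0)
# Stage 2: a second ensemble regresses the signal-to-noise ratio, from
# which the perceived degradation is inferred.
snr_model = RandomForestRegressor(n_estimators=200, random_state=0)

# X: (n_samples, n_features) per-recording features; labels and SNR values
# would come from annotated training data.
# wind_class.fit(X, wind_labels)          # {"none", "low", "medium", "high"}
# snr_model.fit(X[wind_labels != "none"], snr_db)
```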
Journal of the Acoustical Society of America | 2014
Jonathan A. Hargreaves; Paul Kendrick; Sabine von Hünerbein
This paper describes a numerical method for simulating far-field scattering from small regions of inhomogeneous temperature fluctuations. Such scattering is of interest since it is the mechanism by which acoustic wind velocity profiling devices (Doppler SODAR) receive backscatter. The method may therefore be used to better understand the scattering mechanisms in operation and may eventually provide a numerical test-bed for developing improved SODAR signals and post-processing algorithms. The method combines an analytical incident sound model with a k-space model of the scattered sound close to the inhomogeneous region and a near-to-far-field transform to obtain far-field scattering patterns. Results from two test case atmospheres are presented: one with periodic temperature fluctuations with height and one with stochastic temperature fluctuations given by the Kolmogorov spectrum. Good agreement is seen with theoretically predicted far-field scattering and the implications for multi-frequency SODAR design are discussed.
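As an illustration of the stochastic test case, here is a minimal sketch of generating a 2-D temperature-fluctuation slice with a Kolmogorov-type spectrum by spectral shaping of white noise; the grid, the fluctuation strength, and the use of the 3-D exponent on a 2-D slice are simplifying assumptions, not the paper's k-space propagation model:

```python
import numpy as np

def kolmogorov_field(n=256, dx=0.1, ct2=1e-3, seed=0):
    """2-D random temperature-fluctuation field (sketch).

    White Gaussian noise is shaped in the wavenumber domain by k^(-11/6),
    so the squared amplitude follows the Kolmogorov k^(-11/3) form.
    """
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n, dx) * 2 * np.pi
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    k = np.hypot(KX, KY)
    k[0, 0] = np.inf                            # zero out the mean component
    amp = np.sqrt(ct2) * k ** (-11.0 / 6.0)     # Kolmogorov amplitude shaping
    noise = np.fft.fft2(rng.normal(size=(n, n)))
    return np.real(np.fft.ifft2(amp * noise))
```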
international conference on multimedia and expo | 2013
Paul Kendrick; Trevor J. Cox; Francis F. Li; Bruno Fazenda; Iain Jackson
Wind-induced microphone noise is one of the most common problems leading to poor audio quality in recordings. A wind-noise detector could alert the operator of a recording device to the presence of wind noise so that appropriate action can be taken. This paper presents a single-channel algorithm which, in the presence of other sounds, detects wind noise and classifies it according to level. A large training database is formed from a wind-noise simulator which generates an audio stream based on time histories of real wind velocities. A Support Vector Machine detects and classifies wind noise level in 25 ms frames which may contain other sounds. Statistical and temporal data from the detector over a sequence of frames are then used to provide estimates of the average wind noise level. The detector is successfully demonstrated on a number of devices with non-simulated data.
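A minimal end-to-end sketch of the framing-plus-SVM idea on synthetic stand-in data; the features, the random-walk wind surrogate, and all constants are assumptions for illustration, not the paper's simulator or feature set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FS, FRAME = 16000, 400                 # 25 ms frames at 16 kHz

def features(frame):
    """Placeholder per-frame features: wind energy concentrates at
    low frequencies, so low-band energy ratios are a natural start."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
    f = np.fft.rfftfreq(FRAME, 1.0 / FS)
    total = spec.sum() + 1e-12
    return [spec[f < 50].sum() / total,
            spec[f < 200].sum() / total,
            np.log(total)]

rng = np.random.default_rng(1)
X, y = [], []
for label, gain in [(0, 0.0), (1, 0.05), (2, 0.3), (3, 1.0)]:  # noise level
    for _ in range(300):
        fg = rng.normal(size=FRAME)                      # stand-in foreground
        wind = gain * np.cumsum(rng.normal(size=FRAME))  # low-frequency noise
        X.append(features(fg + wind))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Aggregate per-frame decisions over a sequence of frames, as described.
frames = [features(rng.normal(size=FRAME)) for _ in range(40)]
print(np.bincount(clf.predict(frames), minlength=4))     # class histogram
```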
Journal of the Acoustical Society of America | 2017
Paul Kendrick; Michael D. Wood; Luciana Barçante
This paper presents a method for the automated acoustic assessment of bird vocalization activity using a machine learning approach. Acoustic biodiversity assessment methods use statistics from the vocalizations of various species to infer information about biodiversity. Manual annotations are accurate but time-consuming and therefore expensive, so automated assessment is desirable. Acoustic diversity indices are sometimes used; these are computed directly from the audio, and comparison between environments can provide insight about the ecologies. However, the abstract nature of the indices means that solid conclusions are difficult to reach, and the method suffers from sensitivity to confounding factors such as noise. Machine-learning-based methods are potentially more powerful because they can be trained to detect and identify species directly from audio. However, these algorithms require large quantities of accurately labeled training data, which is, as already mentioned, non-trivial to acquire. In this wor...
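For context, acoustic diversity indices of the kind mentioned above can be computed in a few lines. Here is a sketch, in simplified form, of one common index, the acoustic entropy H = Ht * Hf of Sueur et al.; values near 1 suggest energy spread evenly over time and frequency, as in a species-rich soundscape, while values near 0 indicate a single dominant tone:

```python
import numpy as np
from scipy.signal import hilbert

def acoustic_entropy(x, n_fft=512):
    """Acoustic entropy H = Ht * Hf (simplified sketch)."""
    x = np.asarray(x, dtype=float)
    env = np.abs(hilbert(x))                   # amplitude envelope
    p_t = env / env.sum()                      # temporal probability mass
    h_t = -np.sum(p_t * np.log(p_t + 1e-12)) / np.log(len(p_t))
    spec = np.abs(np.fft.rfft(x, n_fft))       # magnitude spectrum
    p_f = spec / spec.sum()                    # spectral probability mass
    h_f = -np.sum(p_f * np.log(p_f + 1e-12)) / np.log(len(p_f))
    return h_t * h_f
```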