
Publication


Featured research published by Anthony I. Tew.


Journal of the Acoustical Society of America | 1998

Analyzing head-related transfer function measurements using surface spherical harmonics

Michael J. Evans; James A. S. Angus; Anthony I. Tew

A continuous, functional representation of a large set of head-related transfer function (HRTF) measurements is developed. The HRTFs are represented as a weighted sum of surface spherical harmonics (SSHs) up to degree 17. A Gaussian quadrature method is used to pick out a set of experimentally efficient measurement directions. Anechoic impulse responses are measured for these directions between a source loudspeaker and the entrance to the ear canal of a head-and-torso simulator (HATS). Three separate SSH analyses are carried out: The first forms an SSH representation from the time responses, with the variable onset delay caused by interaural differences intact, by applying the analysis to each time sample in turn. The second SSH model is formed in exactly the same way, except using impulse responses in which the variable onset delays have been equalized. The final SSH analysis is carried out in the frequency domain by applying the technique on a frequency bin by frequency bin basis to the magnitude and un...
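The core of the method is a least-squares projection of the measured responses onto a spherical-harmonic basis. The sketch below is a minimal illustration of that idea for a single frequency bin, using synthetic directions and magnitudes rather than the paper's measurement grid; the degree-17 basis size matches the abstract, everything else is an assumption.

    # Minimal sketch: fit surface spherical harmonics (SSHs) up to degree 17 to
    # magnitude values measured over a set of directions, by least squares.
    import numpy as np
    from scipy.special import sph_harm

    def ssh_matrix(azimuth, colatitude, max_degree):
        """Complex spherical harmonics up to max_degree, one column per (n, m)."""
        cols = [sph_harm(m, n, azimuth, colatitude)
                for n in range(max_degree + 1) for m in range(-n, n + 1)]
        return np.stack(cols, axis=1)            # (n_directions, (max_degree+1)**2)

    # Hypothetical measurement directions and magnitudes for one frequency bin.
    rng = np.random.default_rng(0)
    az = rng.uniform(0.0, 2 * np.pi, 400)            # azimuth, radians
    col = np.arccos(rng.uniform(-1.0, 1.0, 400))     # colatitude, radians
    mags = rng.standard_normal(400)                  # stand-in for measured |HRTF| values

    Y = ssh_matrix(az, col, max_degree=17)
    weights, *_ = np.linalg.lstsq(Y, mags.astype(complex), rcond=None)

    # The SSH weights give a continuous representation: evaluate any direction.
    estimate = ssh_matrix(np.array([0.3]), np.array([1.0]), 17) @ weights

Repeating this fit per time sample, or per frequency bin, mirrors the three analyses described in the abstract.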


international conference on acoustics, speech, and signal processing | 2014

Large Deformation Diffeomorphic Metric Mapping and Fast-Multipole Boundary Element Method provide new insights for Binaural acoustics

Reza Zolfaghari; Nicolas Epain; Craig Jin; Joan Alexis Glaunès; Anthony I. Tew

This paper describes how Large Deformation Diffeomorphic Metric Mapping (LDDMM) can be coupled with a Fast Multipole (FM) Boundary Element Method (BEM) to investigate the relationship between morphological changes in the head, torso, and outer ears and their acoustic filtering (described by Head Related Transfer Functions, HRTFs). The LDDMM technique provides the ability to study and implement morphological changes in ear, head and torso shapes. The FM-BEM technique provides numerical simulations of the acoustic properties of an individual's head, torso, and outer ears. This paper describes the first application of LDDMM to the study of the relationship between a listener's morphology and a listener's HRTFs. To demonstrate some of the new capabilities provided by the coupling of these powerful tools, we morph the shape of a listener's ear, while keeping the torso and head shape essentially constant, and show changes in the acoustics. We validate the methodological framework by mapping the complete morphology of one listener to a target listener and obtaining the target listener's HRTFs. This work utilizes the data provided by the Sydney York Morphological and Acoustic Recordings of Ears (SYMARE) database.
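For reference, the surface-matching problem at the heart of LDDMM, in its standard formulation (the notation here is generic, not taken from this paper), finds a time-varying velocity field v whose flow carries a source surface S onto a target T by minimising a regularised matching energy:

    \[
    E(v) = \frac{1}{2}\int_0^1 \lVert v_t \rVert_V^2 \, dt
         + \frac{1}{\sigma^2}\, D\bigl(\varphi_1^v \cdot S,\; T\bigr),
    \qquad
    \frac{\partial \varphi_t^v}{\partial t} = v_t(\varphi_t^v), \quad \varphi_0^v = \mathrm{id},
    \]

where D is a surface-mismatch term (for example a currents or varifold distance) and the V-norm penalises non-smooth deformations. The resulting diffeomorphisms are what allow ear, head and torso shapes to be morphed smoothly before each FM-BEM acoustic simulation.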


international conference on signal processing and communication systems | 2014

A multiscale LDDMM template algorithm for studying ear shape variations

Reza Zolfaghari; Nicolas Epain; Craig Jin; Anthony I. Tew; Joan Alexis Glaunès

This paper describes a method to establish an average human ear shape across a population of ears by sequentially applying the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework at successively smaller physical scales. Determining such a population average ear shape, also referred to here as a template ear, is an essential step in studying the statistics of ear shapes because it allows the variations in ears to be studied relative to a common template shape. Our interest in the statistics of ear shapes stems from our desire to understand the relationship between ear morphology and the head-related impulse response (HRIR) filters that are essential for rendering 3D audio over headphones. The shape of the ear varies among listeners and is as individualized as a fingerprint. Because the acoustic filtering properties of the ears depend on their shape, the HRIR filters required for rendering 3D audio are also individualized. The contribution of this work is the demonstration of a sequential multiscale approach to creating a population template ear shape using the LDDMM framework. In particular we apply our sequential multiscale algorithm to a small population of synthetic ears in order to analyse its performance given a known reference ear shape.


international conference on acoustics, speech, and signal processing | 2003

Three-dimensional elliptic Fourier methods for the parameterization of human pinna shape

Carl Hetherington; Anthony I. Tew; Yufei Tao

The paper describes an extension to existing work on three-dimensional elliptic Fourier descriptors (Park, K.S. and Lee, N.S., Computers and Biomedical Research, vol.20, p.125-40, 1987) which enables the efficient parameterization of the human pinna shape. The theory and implementation of the new method are discussed and examples of pinna shape parameters given. We describe an application of the method to the estimation of the acoustic pressure response of human pinnae and discuss ongoing work into the parameterization of full head shapes.
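A simplified way to see the parameterization is as three coordinate-wise Fourier series of a closed contour, truncated to a few harmonics; the full elliptic Fourier descriptor formulation of Park and Lee adds further normalisation not shown here. The sketch below is only that simplified stand-in, run on a hypothetical contour:

    # Simplified stand-in: describe a closed 3D contour by low-order Fourier
    # coefficients of its x(t), y(t), z(t) coordinates, then reconstruct it.
    import numpy as np

    def contour_coeffs(points, n_harmonics):
        """Truncated Fourier coefficients (per coordinate) of a closed contour."""
        spectrum = np.fft.rfft(points, axis=0) / len(points)
        return spectrum[:n_harmonics + 1]

    def contour_from_coeffs(coeffs, n_points):
        spectrum = np.zeros((n_points // 2 + 1, 3), dtype=complex)
        spectrum[:coeffs.shape[0]] = coeffs
        return np.fft.irfft(spectrum * n_points, n=n_points, axis=0)

    # Hypothetical closed contour: a tilted circle sampled at 256 points.
    t = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
    contour = np.stack([np.cos(t), np.sin(t), 0.2 * np.sin(2 * t)], axis=1)

    approx = contour_from_coeffs(contour_coeffs(contour, n_harmonics=8), 256)

Keeping only the first few harmonics is what makes the description a compact set of shape parameters.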


international conference on acoustics, speech, and signal processing | 2015

The segregation of spatialised speech in interference by optimal mapping of diverse cues

Jingbo Gao; Anthony I. Tew

We describe optimal cue mapping (OCM), a potentially real-time binaural signal processing method for segregating a sound source in the presence of multiple interfering 3D sound sources. Spatial cues are extracted from a multisource binaural mixture and used to train artificial neural networks (ANNs) to estimate the spectral energy fraction of a wanted speech source in the mixture. Once trained, the ANN outputs form a spectral ratio mask which is applied frame-by-frame to the mixture to approximate the magnitude spectrum of the wanted speech. The speech intelligibility performance of the OCM algorithm for anechoic sound sources is evaluated on previously unseen speech mixtures using the STOI automated measures, and compared with an established reference method. The optimized integration of multiple cues offers clear performance benefits and the ability to quantify the relative importance of each cue will facilitate computationally efficient implementations.
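The final step of OCM, applying a spectral ratio mask to the mixture frame by frame, can be sketched as follows. The mask here is a random placeholder standing in for the ANN's per-bin estimate of the wanted-speech energy fraction; the window length and sample rate are assumptions.

    # Sketch: apply a frame-by-frame spectral ratio mask to one channel of a
    # binaural mixture (in OCM the mask would come from the trained ANNs).
    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000
    mixture = np.random.randn(fs)                 # 1 s of stand-in mixture signal

    f, t, X = stft(mixture, fs=fs, nperseg=512)   # time-frequency representation
    mask = np.random.rand(*X.shape)               # placeholder ratio mask in [0, 1]
    _, wanted_estimate = istft(mask * X, fs=fs, nperseg=512)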


international conference on acoustics, speech, and signal processing | 2016

Generating a morphable model of ears

Reza Zolfaghari; Nicolas Epain; Craig Jin; Joan Alexis Glaunès; Anthony I. Tew

This paper describes the generation of a morphable model for external ear shapes. The aim for the morphable model is to characterize an ear shape using only a few parameters in order to assist the study of morphoacoustics. The model is derived from a statistical analysis of a population of 58 ears from the SYMARE database. It is based upon the framework of large deformation diffeomorphic metric mapping (LDDMM) and the vector space that is constructed over the space of initial momentums describing the diffeomorphic transformations. To develop a morphable model using the LDDMM framework, the initial momentums are analyzed using a kernel based principal component analysis. In this paper, we examine the ability of our morphable model to construct test ear shapes not included in the principal component analysis.
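The kernel PCA step can be illustrated with off-the-shelf tooling by treating each ear's flattened initial-momentum field as one sample; the data, kernel choice and dimensions below are stand-ins, not those of the paper (which uses a kernel induced by the LDDMM shape metric).

    # Sketch: kernel PCA over initial-momentum vectors, with reconstruction of a
    # momentum field from its low-dimensional scores.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)
    # 58 ears, each a hypothetical flattened momentum field (1000 vertices x 3).
    momentums = rng.standard_normal((58, 3 * 1000))

    kpca = KernelPCA(n_components=10, kernel='rbf', gamma=1e-3,
                     fit_inverse_transform=True)
    scores = kpca.fit_transform(momentums)        # few-parameter shape description
    rebuilt = kpca.inverse_transform(scores)      # momentums rebuilt from the model

In the paper's setting, the rebuilt momentums would then be integrated through the LDDMM flow to produce a reconstructed ear shape.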


Neural Computing and Applications | 1997

Simulation results for an innovative point-of-regard sensor using neural networks

Anthony I. Tew

A major obstacle in point-of-regard monitoring for human-computer interaction has been the contaminating effect of head movement. A novel solution to this problem has been simulated. A multilayer perceptron converts the distorted pattern of four infrared sources, reflected in the cornea of the user, into an estimate for the point-of-regard. Using an idealised model of the eye the simulation combined vertical, horizontal, pitch and yaw head movements with eye excursions typical of individuals seated in front of a computer display. Results show that the method is capable of dramatically reducing the effects of small head movements. Under these viewing conditions the error in the point-of-regard estimate exhibited a standard deviation of under 0.5 mm. It is concluded that such a scheme could form an attractive solution to the long-standing problem of head movement artefact.
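The mapping the abstract describes (glint pattern in, point-of-regard out) is a small regression problem; the sketch below trains a multilayer perceptron on synthetic data purely as an illustration, with the feature layout and the linear synthetic mapping being assumptions.

    # Sketch: MLP regression from the image positions of four corneal reflections
    # (8 inputs) to a 2D point-of-regard, on synthetic data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    glints = rng.standard_normal((5000, 8))                  # 4 glints x (x, y)
    gaze = glints @ rng.standard_normal((8, 2)) + 0.01 * rng.standard_normal((5000, 2))

    mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    mlp.fit(glints, gaze)
    predicted = mlp.predict(glints[:5])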


international conference on acoustics, speech, and signal processing | 2017

Kernel principal component analysis of the ear morphology

Reza Zolfaghari; Nicolas Epain; Craig Jin; Joan Alexis Glaunès; Anthony I. Tew

This paper describes features in the ear shape that change across a population of ears and explores the corresponding changes in ear acoustics. The statistical analysis conducted over the space of ear shapes uses a kernel principal component analysis (KPCA). Further, it utilizes the framework of large deformation diffeomorphic metric mapping and the vector space that is constructed over the space of initial momentums, which describes the diffeomorphic transformations from the reference template ear shape. The population of ear shapes examined by the KPCA are 124 left and right ear shapes from the SYMARE database that were rigidly aligned to the template (population average) ear. In the work presented here we show the morphological variations captured by the first two kernel principal components, and also show the acoustic transfer functions of the ears which are computed using fast multipole boundary element method simulations.
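The rigid alignment to the template that precedes the KPCA can be performed with a Kabsch-style fit (rotation plus translation, no scaling); the sketch below is a generic version of that step on stand-in point sets, not the SYMARE pipeline itself.

    # Sketch: rigidly align one point set onto another (rotation + translation).
    import numpy as np

    def rigid_align(source, target):
        src_c = source - source.mean(axis=0)
        tgt_c = target - target.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)        # 3x3 covariance SVD
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return src_c @ R.T + target.mean(axis=0)

    # Stand-in check: a rotated, shifted copy of a point cloud maps back onto it.
    rng = np.random.default_rng(3)
    template = rng.standard_normal((1000, 3))
    theta = 0.4
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    ear = template @ Rz.T + np.array([5.0, 0.0, 0.0])
    aligned = rigid_align(ear, template)                 # approximately equals template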


workshop on applications of signal processing to audio and acoustics | 2015

Investigating head-related transfer function smoothing using a sagittal-plane localization model

Laurence J. Hobden; Anthony I. Tew

A new head-related transfer function (HRTF) smoothing algorithm is presented. HRTF magnitude responses are expressed on an equivalent rectangular bandwidth frequency scale and smoothing is increased by progressively discarding the higher frequency Fourier coefficients. A sagittal plane localization model was used to assess the degree of spectral smoothing that can be applied without significant increase in localization error. The results of the localization model simulation were compared with results from a previous perceptual investigation using an algorithm that discards coefficients on a linear frequency scale. Our findings suggest that using a perceptually motivated frequency scale yields similar localization performance using fewer than half the number of coefficients.
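The smoothing pipeline as described (warp the magnitude response onto an ERB-rate scale, then discard higher-order Fourier coefficients) can be sketched as follows; the response, grid sizes and number of retained coefficients are illustrative assumptions.

    # Sketch: resample a magnitude response onto an ERB-rate grid, then smooth by
    # keeping only the lowest-order Fourier coefficients of the warped response.
    import numpy as np

    def erb_rate(f_hz):
        """Glasberg & Moore ERB-rate (in ERBs) for frequency in Hz."""
        return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

    f_lin = np.linspace(20.0, 20000.0, 512)             # linear frequency grid
    mag_db = np.cumsum(np.random.randn(512)) * 0.1      # stand-in 20*log10|HRTF|

    # Uniformly spaced ERB-rate grid covering the same frequency range.
    erb_grid = np.linspace(erb_rate(f_lin[0]), erb_rate(f_lin[-1]), 512)
    mag_erb = np.interp(erb_grid, erb_rate(f_lin), mag_db)

    coeffs = np.fft.rfft(mag_erb)
    coeffs[32:] = 0.0                                    # discard higher coefficients
    mag_smoothed = np.fft.irfft(coeffs, n=512)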


Journal of the Acoustical Society of America | 2006

Binaural transformation coding with simulated head tracking

Seiichiro Shoji; Anthony I. Tew

The binaural transformation codec synthesizes generic binaural audio signals and generates accompanying side information. In the decoder the side information is used to personalize the generic signal. The performance of the basic codec is described in another paper; this paper describes the estimated effect on quality of incorporating limited head tracking. Generic HRTFs were used to spatialize two concurrent sound sources at ±30 or ±80 deg. The generic binaural signal was personalized in the BX decoder and at the same time the individual sound sources were rotated. These induced rotations are equivalent to compensating for head yaw rotations of −80, −40, −20, −10, and −5 deg, and for head pitch rotations of 22.5, 45, and 90 deg. Listening tests based on Recommendation ITU-R BS.1116-1 were used to evaluate the processed sounds. The tests were conducted using speech, vocals, guitars, and percussion source materials. It was found that the quality of the processed sound tended to degrade as the h...
