Publication


Featured research published by Bruno Fazenda.


Measurement Science and Technology | 2008

Accurate sound source localization in a reverberant environment using multiple acoustic sensors

Hidajat Atmoko; D C Tan; Gui Yun Tian; Bruno Fazenda

This paper introduces a new method for the estimation of sound source distance and direction using at least three microphone sensors in indoor environments. Unlike other methods, which normally use approximations to obtain the time difference between sensors, this method exploits the existing geometrical relationships among the sensors to form an exact solution for estimating the source position. To overcome reverberation, an enhancement pre-processing step has been applied to different sound sources with different spectra, e.g. single frequency, multiple frequencies and different noise shapes. Source direction and distance are estimated from the travel time of the sound wave and the distances between the acoustic sensors. Using the method described in this paper, an accuracy of 1° is obtained. Several experimental tests have been undertaken that verify the results. Conclusions and future work are also described.
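The paper's exact geometric solution is not reproduced in the abstract. As a minimal sketch of the first step such localization methods rely on, the following estimates the time difference of arrival (TDOA) between two microphones from the peak of their cross-correlation; function names and signal parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (seconds) of sig_a
    relative to sig_b from the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # peak index -> lag in samples
    return lag / fs

# Illustrative check with broadband noise delayed by 20 samples
fs = 16000
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
delayed = np.concatenate([np.zeros(20), sig[:-20]])
print(estimate_tdoa(delayed, sig, fs))  # ~= 20 / 16000 s
```

Given TDOAs for three or more microphone pairs, the source position follows from the known sensor geometry, which is where the paper's exact solution departs from the usual approximations.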


Journal of the Acoustical Society of America | 2014

Physical and numerical constraints in source modeling for finite difference simulation of room acoustics

Jonathan Sheaffer; Maarten van Walstijn; Bruno Fazenda

In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
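As a hedged sketch of the hard-versus-soft distinction that the paper generalizes, the 1-D FDTD scheme below injects a Gaussian pulse either as a soft source (added to the grid value, so incoming waves pass through unscattered) or a hard source (overwriting the node, which reflects incoming waves). Grid sizes and pulse parameters are arbitrary choices for illustration, not the paper's parametric source model.

```python
import numpy as np

def fdtd_1d(n_steps=200, n_cells=100, src_pos=50, hard=False):
    """1-D wave-equation FDTD at the stability limit (Courant number 1)
    with rigid (pressure = 0) boundaries. The source injection style
    is selected by the `hard` flag."""
    p_prev = np.zeros(n_cells)
    p = np.zeros(n_cells)
    for n in range(n_steps):
        p_next = np.zeros(n_cells)
        # standard second-order update for the interior nodes
        p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                        + p[2:] - 2 * p[1:-1] + p[:-2])
        pulse = np.exp(-((n - 30) / 8.0) ** 2)  # Gaussian driving pulse
        if hard:
            p_next[src_pos] = pulse   # hard source: overwrite the node
        else:
            p_next[src_pos] += pulse  # soft source: add to the node
        p_prev, p = p, p_next
    return p

field = fdtd_1d()
```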


Journal of the Acoustical Society of America | 2015

Perceptual thresholds for the effects of room modes as a function of modal decay

Bruno Fazenda; Matthew Stephenson; Andrew Goldberg

Room modes cause audible artifacts in listening environments. Modal control approaches have emerged in scientific literature over the years and, often, their performance is measured by criteria that may be perceptually unfounded. Previous research has shown modal decay as a key perceptual factor in detecting modal effects. In this work, perceptual thresholds for the effects of modes as a function of modal decay have been measured in the region between 32 and 250 Hz. A test methodology has been developed to include modal interaction and temporal masking from musical events, which are important aspects in recreating an ecologically valid test regime. This method has been deployed in addition to artificial test stimuli traditionally used in psychometric studies, which provide unmasked, absolute thresholds. For artificial stimuli, thresholds decrease monotonically from 0.9 s at 32 Hz to 0.17 s at 200 Hz, with a knee at 63 Hz. For music stimuli, thresholds decrease monotonically from 0.51 s at 63 Hz to 0.12 s at 250 Hz. Perceptual thresholds are shown to be dependent on frequency and to a much lesser extent on level. The results presented here define absolute and practical thresholds, which are useful as perceptually relevant optimization targets for modal control methods.


Journal of the Acoustical Society of America | 2016

A metric for predicting binaural speech intelligibility in stationary noise and competing speech maskers

Yan Tang; Martin Cooke; Bruno Fazenda; Trevor J. Cox

One criterion in the design of binaural sound scenes in audio production is the extent to which the intended speech message is correctly understood. Object-based audio broadcasting systems have permitted sound editors to gain more access to the metadata (e.g., intensity and location) of each sound source, providing better control over speech intelligibility. The current study describes and evaluates a binaural distortion-weighted glimpse proportion metric (BiDWGP), which is motivated by better-ear glimpsing and binaural masking level differences. BiDWGP predicts intelligibility from two alternative input forms: either binaural recordings or monophonic recordings from each sound source along with their locations. Two listening experiments were performed with stationary noise and competing speech, one in the presence of a single masker, the other with multiple maskers, for a variety of spatial configurations. Overall, BiDWGP with both input forms predicts listener keyword scores with correlations of 0.95 and 0.91 for single- and multi-masker conditions, respectively. When considering masker type separately, correlations rise to 0.95 and above for both types of maskers. Predictions using the two input forms are very similar, suggesting that BiDWGP can be applied to the design of sound scenes where only individual sound sources and their locations are available.
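The full BiDWGP computation is not given in the abstract. The sketch below shows only the underlying "glimpse proportion" idea the metric builds on: the fraction of spectro-temporal units where the speech level exceeds the masker by a local SNR criterion. The 3 dB criterion and the omission of distortion weighting and binaural terms are simplifying assumptions.

```python
import numpy as np

def glimpse_proportion(speech_db, masker_db, local_snr_criterion=3.0):
    """Fraction of time-frequency units where speech exceeds the masker
    by the local SNR criterion ('glimpses'). BiDWGP additionally weights
    glimpses by a distortion factor and models better-ear listening and
    binaural unmasking, none of which is reproduced here."""
    glimpses = speech_db > masker_db + local_snr_criterion
    return float(glimpses.mean())

# Toy spectro-temporal levels (rows = time frames, cols = frequency bands)
speech = np.array([[10.0, 0.0], [5.0, 20.0]])
masker = np.array([[0.0, 10.0], [0.0, 0.0]])
print(glimpse_proportion(speech, masker))  # 0.75: 3 of 4 units glimpsed
```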


Acta Acustica United With Acustica | 2013

Recreating the sound of Stonehenge

Bruno Fazenda; Ian Drumm

Stonehenge is the largest and most complex ancient stone circle known to mankind. In its original form, the concentric shape of stone rings would have surrounded an individual, both visually and aurally. It is an outdoor space and most archaeological evidence suggests it did not have a roof. However, its large, semi-enclosed structure, with many reflecting surfaces, would have reflected and diffracted sound within the space, creating an unusual acoustic field for Neolithic people. The work presented here reports the reconstruction of the acoustic sound field of Stonehenge based on measurements taken at a full-size replica in Maryhill, USA. Acoustic measurements were carried out using state-of-the-art techniques and the response collected in both mono and B-Format at various source-receiver positions within the space. A brief overview of Energy Time Curves and Reverberation Time together with a comparison to a recent measurement at the current Stonehenge site is provided. The auralisation process presented uses a hybrid Ambisonic and Wave Field Synthesis (WFS) system. In the electro-acoustic rendering system, sound sources are created as focussed sources using Wave Field Synthesis whilst their reverberant counterpart is rendered using Ambisonic principles. Using this novel approach, a realistic acoustic sound field, as it is believed to have existed in the original Stonehenge monument, can be experienced by listeners. The approach presented not only provides valuable insight into the acoustic response of an important archaeological site, but also demonstrates the development of a useful tool in the archaeological interpretation of important buildings and heritage sites.


Quality of Multimedia Experience | 2016

Perception and automated assessment of audio quality in user generated content: An improved model

Bruno Fazenda; Paul Kendrick; Trevor J. Cox; Francis F. Li; Iain Jackson

Technology to record sound, available in personal devices such as smartphones or video recording devices, is now ubiquitous. However, the production quality of the sound on this user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. Our interest lies in the causes of the poor recording, especially what happens between the sound source and the electronic signal emerging from the microphone, and in finding an automated method to warn the user of such problems. Typical problems, such as distortion, wind noise, microphone handling noise and frequency response, were tested. A perceptual model has been developed from subjective tests on the perceived quality of such errors and data measured from a training dataset composed of various audio files. It is shown that perceived quality is associated with distortion and frequency response, with wind and handling noise being just slightly less important. In addition, the contextual content of the audio sample was found to modulate perceived quality at levels similar to degradations such as wind noise, rendering those introduced by handling noise negligible.


Journal of the Acoustical Society of America | 2014

Perception and automatic detection of wind-induced microphone noise

Iain Jackson; Paul Kendrick; Trevor J. Cox; Bruno Fazenda; Francis F. Li

Wind can induce noise on microphones, causing problems for users of hearing aids and for those making recordings outdoors. Perceptual tests in the laboratory and via the Internet were carried out to understand what features of wind noise are important to the perceived audio quality of speech recordings. The average A-weighted sound pressure level of the wind noise was found to dominate the perceived degradation of quality, while gustiness was mostly unimportant. Large degradations in quality were observed when the signal to noise ratio was lower than about 15 dB. A model to allow an estimation of wind noise level was developed using an ensemble of decision trees. The model was designed to work with a single microphone in the presence of a variety of foreground sounds. The model outputted four classes of wind noise: none, low, medium, and high. Wind free examples were accurately identified in 79% of cases. For the three classes with noise present, on average 93% of samples were correctly assigned. A second ensemble of decision trees was used to estimate the signal to noise ratio and thereby infer the perceived degradation caused by wind noise.
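The paper's decision-tree ensemble is not reproduced here. As an illustration of the quantities it estimates, the sketch below computes a broadband SNR and bands it into the four reported output classes. The band edges are invented for demonstration, except that the 15 dB point echoes the abstract's finding that quality degrades markedly below roughly that SNR.

```python
import numpy as np

def snr_db(speech, wind_noise):
    """Broadband signal-to-noise ratio in dB. (A-weighting is omitted
    for brevity; the paper uses the A-weighted wind-noise level.)"""
    return 10 * np.log10(np.sum(speech ** 2) / np.sum(wind_noise ** 2))

def wind_noise_class(snr):
    """Illustrative banding into the paper's four output classes
    (none/low/medium/high). These thresholds are assumptions, not
    those learned by the trained model."""
    if snr > 30:
        return "none"
    if snr > 15:
        return "low"
    if snr > 5:
        return "medium"
    return "high"

print(snr_db(10 * np.ones(100), np.ones(100)))  # 20.0 dB
print(wind_noise_class(12))                     # "medium"
```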


Speech Communication | 2016

Evaluating a distortion-weighted glimpsing metric for predicting binaural speech intelligibility in rooms

Yan Tang; Richard J. Hughes; Bruno Fazenda; Trevor Cox

A distortion-weighted glimpse proportion metric (BiDWGP) for predicting binaural speech intelligibility was evaluated in simulated anechoic and reverberant conditions, with and without a noise masker. The predictive performance of BiDWGP was compared to four reference binaural intelligibility metrics, which were extended from the Speech Intelligibility Index (SII) and the Speech Transmission Index (STI). In the anechoic sound field, BiDWGP demonstrated high accuracy in predicting binaural intelligibility for individual maskers (ρ ≥ 0.95) and across maskers (ρ ≥ 0.94). The reference metrics, however, performed less well in across-masker prediction (0.54 ≤ ρ ≤ 0.86) despite reasonable accuracy for individual maskers. In reverberant rooms, BiDWGP was more stable in all test conditions (ρ ≥ 0.87) than the reference metrics, which showed different predictive patterns: the binaural STIs were more robust for the stationary than for the fluctuating noise masker, whilst the binaural SII displayed the opposite behaviour. The study shows that the new BiDWGP metric can provide similar or even more robust predictive power than the current standard metrics.


International Conference on Multimedia and Expo | 2013

Wind-induced microphone noise detection - automatically monitoring the audio quality of field recordings

Paul Kendrick; Trevor J. Cox; Francis F. Li; Bruno Fazenda; Iain Jackson

Wind-induced microphone noise is one of the most common problems leading to poor audio quality in recordings. A wind-noise detector could alert the operator of a recording device to the presence of wind noise so that appropriate action can be taken. This paper presents a single-channel algorithm which, in the presence of other sounds, detects and classifies wind noise according to its level. A large training database is formed from a wind noise simulator which generates an audio stream based on time histories of real wind velocities. A Support Vector Machine detects wind noise and classifies it by level in 25 ms frames that may contain other sounds. Statistical and temporal data from the detector over a sequence of frames are then used to provide estimates of the average wind noise level. The detector is successfully demonstrated on a number of devices with non-simulated data.
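The SVM itself and the paper's feature set are not described in the abstract. The sketch below shows only the 25 ms framing step and two plausible per-frame features (RMS level and the low-frequency energy ratio, wind noise being predominantly low-frequency); both feature choices are assumptions for illustration.

```python
import numpy as np

def frame_features(x, fs, frame_ms=25):
    """Split a mono signal into non-overlapping 25 ms frames and compute
    per-frame features: RMS level and the fraction of spectral energy
    below 500 Hz. In the paper, per-frame features feed a Support
    Vector Machine that classifies the wind-noise level."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    low_ratio = spec[:, freqs < 500].sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    return np.column_stack([rms, low_ratio])

# A 50 Hz rumble: every frame is dominated by low-frequency energy
fs = 16000
t = np.arange(fs) / fs
feats = frame_features(np.sin(2 * np.pi * 50 * t), fs)
print(feats.shape)  # (40, 2): one second yields 40 frames of 25 ms
```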


Journal of the Acoustical Society of America | 2012

Acoustic condition monitoring of wind turbines: Tip faults

Daniel J. Comboni; Bruno Fazenda

With wind turbines a significant and growing source of the world’s energy, their reliability is becoming a major concern. At least two fault detection techniques for condition monitoring of wind turbine blades have been reported in earlier literature: acoustic emissions and optical strain sensors. These require off-site measurement. The work presented here offers an alternative, non-contact fault detection method based on the noise emitted by the turbine during operation. An investigation has been carried out on a micro wind turbine under laboratory conditions. Four severity levels of a fault were planted in the form of added weight at the tip of one blade to simulate inhomogeneous debris or ice build-up. Acoustic data are obtained from a single microphone placed in front of the rotor. Two prediction methods have been developed and tested on real data: one based on a single feature, the rotational frequency spectral magnitude; and another based on fuzzy logic inference using two inputs, spectral peak and r...
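The abstract's first prediction method uses a single feature, the spectral magnitude at the rotational frequency. The sketch below computes that feature with a plain FFT; the sampling parameters and test tone are illustrative assumptions.

```python
import numpy as np

def rotational_magnitude(x, fs, rot_hz):
    """Normalized spectral magnitude at the rotor's rotational
    frequency. A mass imbalance at one blade tip modulates the emitted
    noise once per revolution, raising this spectral line."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - rot_hz))]

# A pure once-per-revolution component at 10 Hz, one-second record
fs = 1000
t = np.arange(fs) / fs
print(rotational_magnitude(np.sin(2 * np.pi * 10 * t), fs, 10.0))  # ~0.5
```

Comparing this magnitude across the fault severity levels is what the single-feature predictor would threshold on.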

Collaboration


Dive into Bruno Fazenda's collaborations.

Top co-authors:

Matthew Wankling, University of Huddersfield
Andrew Ball, University of Huddersfield
Iain Jackson, University of Manchester
Fengshou Gu, University of Huddersfield