Publications
Featured research published by Thomas Burns.
Journal of the Acoustical Society of America | 1994
Thomas Burns; William Forde Thompson; Courtney B. Burroughs; Kent Eschenburg
Knowledge of the potential and kinetic energy densities and of the complex intensity vector is needed to characterize an acoustic field. Methods are described to obtain and visualize these quantities. Two independent techniques were combined to determine these acoustic field variables, namely, an acoustic intensity measurement technique and near‐field acoustic holography (NAH). In the past, conventional acoustic intensity techniques have been used to estimate radiated power and, more recently, to localize and characterize sources. In this study, an acoustic intensity technique was used to obtain, indirectly, the complex pressure on a hologram plane in the near field of a source from a single broadband intensity measurement. From knowledge of the complex pressure on the hologram plane, the free‐field was reconstructed using NAH [Loyau et al., J. Acoust. Soc. Am. 84, 1744–1750 (1988)]. After a brief review of these measurement techniques, various methods of visualizing both the steady‐state and time‐varying a...
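To illustrate the NAH reconstruction step described above, the following is a minimal sketch of planar back-propagation of a hologram pressure field via the angular spectrum, assuming a uniform rectangular measurement grid; the function name, parameters, and the crude k-space cutoff filter are illustrative choices, not the procedure of Loyau et al.

```python
import numpy as np

def nah_backpropagate(p_holo, dx, freq, dz, c=343.0, kc_scale=1.5):
    """Back-propagate a measured hologram pressure field toward the source plane.

    p_holo   : 2-D complex array of pressure on the hologram plane
    dx       : grid spacing in meters (same in x and y)
    freq     : analysis frequency in Hz
    dz       : distance from the hologram plane back to the reconstruction plane (m)
    kc_scale : k-space cutoff, as a multiple of the acoustic wavenumber, that
               limits amplification of evanescent components
    """
    ny, nx = p_holo.shape
    k = 2 * np.pi * freq / c                                 # acoustic wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))     # imaginary for evanescent waves

    P = np.fft.fft2(p_holo)                                  # angular spectrum of the hologram
    G = np.exp(-1j * kz * dz)                                # inverse propagator toward the source
    G[np.sqrt(KX**2 + KY**2) > kc_scale * k] = 0             # hard cutoff for numerical stability
    return np.fft.ifft2(P * G)
```

In practice the amplified evanescent components require more careful regularization than the hard cutoff used here.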
Journal of the Acoustical Society of America | 2017
Thomas Burns
Assistive listening applications can be grouped into two categories: public announcements and live reinforcement. Each has unique needs. For public announcements, system temporal latencies of hundreds of milliseconds are irrelevant. For live reinforcement, temporal latencies are much more critical, especially in large venues where the assistive signal must be synchronized with the visual presentation and acoustic performance. If the assistive signal is recorded with the intent to preserve stereo imaging of the performance, binaural latencies are also critical; one millisecond can produce severe comb filtering, while tens of milliseconds can produce echo. Digital audio streams take time to encode, transmit, receive, decode, and present to the user. To review the technical requirements for digital audio streaming in this application and to demonstrate the effect of latency offsets, a G.722 wideband stream will be broadcast within the room to a handful of binaural devices available to the earliest attendees.
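As a rough illustration of why binaural latency matters, the sketch below computes the comb-filter magnitude that results when the direct acoustic path is summed with a delayed streamed copy; the function and parameter names are illustrative. With a 1 ms offset the first notch falls near 500 Hz, while offsets of tens of milliseconds are heard as a discrete echo rather than spectral coloration.

```python
import numpy as np

def comb_filter_response(delay_ms, freqs_hz, mix_db=0.0):
    """Magnitude response (dB) of a direct signal summed with a delayed copy.

    delay_ms : latency offset between the acoustic and streamed paths
    mix_db   : level of the delayed path relative to the direct path
    """
    tau = delay_ms * 1e-3
    g = 10 ** (mix_db / 20)
    h = 1 + g * np.exp(-2j * np.pi * freqs_hz * tau)
    return 20 * np.log10(np.abs(h))

freqs = np.linspace(20, 8000, 2000)
resp = comb_filter_response(1.0, freqs)          # 1 ms offset: notches at 500 Hz, 1.5 kHz, ...
print("first notch near", 1 / (2 * 1e-3), "Hz")
```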
Journal of the Acoustical Society of America | 2015
Gabe Murray; Thomas Burns
This presentation describes the redesign of an existing multi-purpose practice room used by high school concert bands, drumlines, and choral groups. Resource and cost restrictions limited the redesign and evaluation process to surface finishes only. The existing space had minimal early reflections and reverberation due to copious amounts of thick acoustic wall panels, resulting in a dead sound for choral and band use. The biggest challenge was to reposition the wall panels by distributing them in a checkerboard pattern, thereby balancing them with the reflective surfaces of the painted cinderblock underneath. In addition, one third of the absorptive suspended ceiling tile was replaced with 5/8” gypsum panels, and one third of the ceiling tile was removed altogether in order to couple the interstitial space above the suspended ceiling and increase the effective volume of the space. Odeon was used to create auralizations of various configurations by convolving anechoic recordings of a snare drum with the c...
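Auralization by convolution, as performed here in Odeon, can be sketched in a few lines: an anechoic recording is convolved with a simulated room impulse response. The file names below are hypothetical placeholders, and a mono recording and impulse response at the same sample rate are assumed.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; any mono anechoic recording and simulated room
# impulse response at the same sample rate will do.
fs_dry, dry = wavfile.read("snare_anechoic.wav")
fs_ir, ir = wavfile.read("practice_room_ir.wav")
assert fs_dry == fs_ir, "sample rates must match"

wet = fftconvolve(dry.astype(float), ir.astype(float))   # apply the room to the dry recording
wet /= np.max(np.abs(wet))                                # normalize to avoid clipping
wavfile.write("snare_auralized.wav", fs_dry, (wet * 32767).astype(np.int16))
```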
Journal of the Acoustical Society of America | 2015
Thomas Burns
For hearing aids, the directivity index is a benchmark defined under two acoustical conditions that a hearing-aid user won’t encounter, namely, the ratio of the signal power arriving from the on-axis target under anechoic conditions to that of isotropic spherical noise. It would be useful to benchmark the speech signal-to-noise ratio encountered in a typical environment. The purpose of this study is to map the instantaneous acoustic intensity in a room using a head-and-torso voice simulator as the source and a regular tetrahedron microphone array in the field. Four impulse responses from a small tetrahedron were measured in a reverberant conference room of gypsum walls, carpet, and absorptive ceiling tile. Welch’s method was used to compute one-third octave estimates of the auto- and cross-spectra from the impulse responses, and these spectra were used to estimate the steady-state, 3D instantaneous acoustic intensity vector via the time-averaged active intensity and the maximum amplitude of the reactive in...
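As a simplified counterpart to the tetrahedral formulation, the sketch below estimates the active intensity component along the axis joining one pair of microphones using the two-microphone (p-p) method, with the cross-spectrum obtained by Welch’s method (scipy.signal.csd); the sign convention, spacing, and parameter names are assumptions for illustration.

```python
import numpy as np
from scipy.signal import csd

def pp_active_intensity(p1, p2, fs, spacing, rho=1.21, nperseg=4096):
    """Two-microphone (p-p) estimate of the active intensity component
    along the axis joining the microphones.

    Welch's method (scipy.signal.csd) gives the cross-spectrum G12; the
    active intensity is then proportional to Im{G12} / (rho * omega * spacing),
    with the sign depending on the cross-spectrum convention.
    """
    f, G12 = csd(p1, p2, fs=fs, nperseg=nperseg)
    omega = 2 * np.pi * f
    I = np.zeros_like(f)
    I[1:] = -np.imag(G12[1:]) / (rho * omega[1:] * spacing)  # skip the DC bin
    return f, I
```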
Journal of the Acoustical Society of America | 2013
Thomas Burns
The three critical factors for providing stable directional performance for typical microphone arrays used in hearing aids are the relative sensitivity and phase between the microphones and the placement of the hearing instrument behind the user’s ear. A directional system is robust if these factors can operate over a wide range of levels without degrading the directional performance. In this study, dual dipole microphones were arranged symmetrically around an omnidirectional microphone such that all inlets were collinear. Compared to an endfire array, whether it be a delay-and-sum or Blumlein configuration, this dual-dipole-omni array is remarkably more robust, yielding very little degradation in the Directivity Index for the aforementioned critical factors varying as much as +/− 3 dB, +/− 30 ms, and the directional axis of the hearing instrument varying +/− 20 degrees on the ear.
Journal of the Acoustical Society of America | 2012
Thomas Burns
An endfire microphone array uses two omnidirectional microphones in a delay-and-sum configuration. A Blumlein array mixes one omnidirectional and one (bi)directional microphone. Each can be engineered to provide any first-order directional pattern. The three critical factors for providing good directionality are the relative sensitivity and phase between the microphones and the placement of the hearing instrument on the user’s head. In this context, a directional system is robust if its factors can operate over a wide range of levels without degrading the directional performance. In this study, each array was engineered to have the same aperture spacing and tuned to the same free-field polar pattern; this tuning provided the nominal operating levels. Both arrays were placed in-situ on a measurement manikin, and 614 impulse responses were acquired at ten-degree resolution for all four microphones for different in-situ positions. The data for each array were combined as described above, and the aforement...
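For reference, the free-field first-order pattern of a two-omni endfire pair can be computed as below, shown here as a delay-and-subtract (differential) realization, which is how first-order patterns are typically obtained from two omnidirectional inlets; the function name and defaults are illustrative.

```python
import numpy as np

def endfire_pattern(freq, spacing, internal_delay, c=343.0, n_angles=361):
    """Free-field polar pattern of a two-omni delay-and-subtract endfire array.

    The rear microphone is delayed by `internal_delay` and subtracted from the
    front; internal_delay = spacing / c yields a cardioid, zero yields a dipole.
    """
    theta = np.linspace(0, 2 * np.pi, n_angles)
    omega = 2 * np.pi * freq
    external_delay = spacing * np.cos(theta) / c             # propagation between the inlets
    h = 1 - np.exp(-1j * omega * (internal_delay + external_delay))
    mag = np.abs(h)
    return theta, mag / mag.max()                            # normalized polar pattern

# Example: a 10 mm aperture tuned to a cardioid, evaluated at 1 kHz
theta, pattern = endfire_pattern(1000.0, 0.010, 0.010 / 343.0)
```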
Journal of the Acoustical Society of America | 2011
Thomas Burns
The optimal operating parameters for a directional microphone array worn in situ are not necessarily equivalent to the optimal parameters while operating in the absence of head- and torso-related scattering. These parameters include the relative magnitude and phase of the microphones and their positional placement on the head, characterized as factors operating over a range of levels, which are characterized by their production spread and susceptibility to drift. The goal is to understand how these factors, operating over their levels, contribute to the in-situ directional responses on a measurement manikin, characterized by the directivity index and the unidirectional index. Using 614 impulse responses acquired at ten-degree resolution on the manikin, a simple central composite design of experiments was conducted to fit a quadratic polynomial and generate a response surface for the aforementioned directional indices, thereby leading to the critical first-order and two-factor interactions of the system. The interactions,...
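A central composite design and quadratic response-surface fit of the kind described can be sketched with plain least squares; the coded factor levels, axial distance, and placeholder response values below are illustrative only, not the measured directivity data of the study.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear, squared, and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Illustrative central composite design in coded units for three factors
# (sensitivity mismatch, phase mismatch, placement angle): a 2^3 factorial,
# axial points at +/- alpha, and replicated center points.
alpha = 1.682
factorial = np.array([[i, j, l] for i in (-1, 1) for j in (-1, 1) for l in (-1, 1)], float)
axial = np.vstack([v * alpha * np.eye(3)[i] for i in range(3) for v in (-1, 1)])
center = np.zeros((4, 3))
X = np.vstack([factorial, axial, center])

y = np.random.default_rng(0).normal(size=len(X))   # placeholder for measured DI values
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```

The fitted coefficients `beta` expose the first-order effects and two-factor interactions that the response surface is built from.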
Journal of the Acoustical Society of America | 2010
Matthew Green; Thomas Burns
Partial or complete occlusion of the ear canal by an assistive listening device results in an unnatural quality in the sound of a person’s own voice, known as the occlusion effect. When the occlusion effect is present, and if the sound presented to the eardrum is dominated by the amplified output of the assistive listening device, the frequency response of the device can be adjusted to minimize the perceived effects of occlusion. Such an alteration of the frequency response must be applied during self‐vocalization and at no other time. A robust and accurate method for detecting self‐vocalization will be presented, consisting of a MEMS accelerometer and a cross‐correlation-based signature detection algorithm. The requirements for signature capture and selection will be discussed, as well as detection performance by gender. Results for detection accuracy, immunity to tasks other than self‐vocalization, and binaural agreement will be presented.
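A heavily simplified sketch of cross-correlation-based signature detection is given below; the threshold, frame handling, and global normalization are assumptions for illustration and not the algorithm presented in the talk.

```python
import numpy as np
from scipy.signal import correlate

def detect_self_vocalization(frame, signature, threshold=0.6):
    """Flag a frame of accelerometer samples as self-vocalization when its
    cross-correlation with a stored signature exceeds a threshold.

    Assumes the frame is at least as long as the signature; the normalization
    by the product of the full-frame norms is a simplification of a true
    per-lag normalized cross-correlation.
    """
    frame = frame - np.mean(frame)
    signature = signature - np.mean(signature)
    denom = np.linalg.norm(frame) * np.linalg.norm(signature)
    if denom == 0:
        return False, 0.0
    score = np.max(np.abs(correlate(frame, signature, mode="valid"))) / denom
    return score > threshold, score
```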
Journal of the Acoustical Society of America | 2010
Thomas Burns; Dave Tourtelotte
Electromagnetic balanced armature receivers [Hunt, Electroacoustics, Chap. 7] are used exclusively to generate acoustic output in hearing aids. These transducers are much more efficient than electrodynamic transducers and are capable of delivering upward of 140 dB of sound pressure to a person. In an effort to maximize system gain in a hearing aid, vibroacoustical feedback paths originating from the receiver are modeled using finite elements. Given an electrical excitation, the electromagnetic‐mechanical force on the armature is solved as a function of frequency. The force on this armature vibrates an internal diaphragm, which generates acoustic output while vibrating the entire hearing aid. Assuming that there are no acoustical leaks in the design, vibroacoustical coupling limits the usable gain of the aid. Using commercially available software, the fluid is modeled with full Navier–Stokes elements and is coupled to all structural boundaries. The armature is “kicked” with the aforementioned force, and spec...
Journal of the Acoustical Society of America | 2010
Thomas Burns
A dual-microphone endfire array is a directional system commonly used in hearing aids. The directional performance of such systems is sensitive to sensor mismatch and drift. In this study, a pair of matched omnidirectional microphones in a delay‐and‐sum configuration is mounted in a hearing aid, and the directional response is measured in‐situ on KEMAR at 10° resolution in azimuth and elevation. The resulting 3‐D polar balloons, directivity indices, and unidirectional indices are computed as a function of frequency. The measured transfer functions are then perturbed with sensor mismatch responses acquired empirically from typical lots of hearing‐aid microphones. The resulting polar benchmarks are evaluated and compared to the original benchmarks. A detailed analysis, both visual and numerical, will be presented.
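The directivity index computed from a measured polar balloon amounts to comparing the on-axis response with the solid-angle-weighted mean response over the sphere; a minimal sketch, assuming the balloon is sampled on a regular elevation-azimuth grid, follows, with illustrative function and argument names.

```python
import numpy as np

def directivity_index(balloon_db, on_axis_db, theta_deg):
    """Directivity index from a sampled polar balloon.

    balloon_db : 2-D array of magnitude response in dB, indexed [elevation, azimuth]
    on_axis_db : response in dB toward the target direction
    theta_deg  : 1-D array of polar angles (degrees from the pole), one per row
    """
    p2 = 10 ** (balloon_db / 10)                       # squared magnitude
    w = np.sin(np.radians(theta_deg))[:, None]         # solid-angle weighting per row
    mean_p2 = np.sum(p2 * w) / np.sum(w * np.ones_like(p2))
    return on_axis_db - 10 * np.log10(mean_p2)
```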