
Publication


Featured research published by James Traer.


Journal of Geophysical Research | 2014

A unified theory of microseisms and hum

James Traer; Peter Gerstoft

Interacting ocean surface waves force water column pressure fluctuations with spectral peaks at the same frequencies as primary microseisms (PM), double-frequency microseisms (DF), and seismic hum. Prior treatment of nonlinear ocean wave interactions has focused on the DF pressure fluctuations which, in the presence of opposing waves, do not decay with depth and hence are dominant in deep waters. For an arbitrary 2-D surface wave spectrum we integrate over all pairings of wave vectors, directions, and frequencies to obtain a full-spectrum perturbation expansion including the first- and second-order pressure waves. First-order pressure waves generate a peak at PM frequencies, and second-order pressure waves generated by obliquely interacting surface waves generate pressure fluctuations at DF and hum frequencies. These pressure fluctuations decay with depth but interact with the seabed in shallow water. As their generation does not require precise wave states, they are likely ubiquitous in shallow water.


Journal of the Acoustical Society of America | 2014

Compressive geoacoustic inversion using ambient noise

Caglar Yardim; Peter Gerstoft; William S. Hodgkiss; James Traer

Surface generated ambient noise can be used to infer sediment properties. Here, a passive geoacoustic inversion method that uses noise recorded by a drifting vertical array is adopted. The array is steered using beamforming to compute the noise arriving at the array from various directions. This information is used in two different ways: Coherently (cross-correlation of upward/downward propagating noise using a minimum variance distortionless response fathometer), and incoherently (bottom loss vs frequency and angle using a conventional beamformer) to obtain the bottom properties. Compressive sensing is used to invert for the number of sediment layer interfaces and their depths using coherent passive fathometry. Then the incoherent bottom loss estimate is used to refine the sediment thickness, sound speed, density, and attenuation values. Compressive sensing fathometry enables automatic determination of the number of interfaces. It also tightens the sediment thickness priors for the incoherent bottom loss inversion which reduces the search space. The method is demonstrated on drifting array data collected during the Boundary 2003 experiment.
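The paper's inversion machinery is far more involved, but the core compressive-sensing step, recovering a small number of layer-interface reflections as a sparse vector from linear measurements, can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) sketch. The measurement matrix, sparsity pattern, and parameter values below are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """ISTA for min (1/2)||Ax - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - (A.T @ (A @ x - y)) / L          # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Three sparse "interfaces" observed through a random linear operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[20, 95, 150]] = [1.0, -0.8, 0.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.05, n_iter=2000)
```

With noiseless data and well-separated coefficients, the three nonzero entries dominate the recovered vector, which is the sense in which compressive sensing can report the number of interfaces automatically.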


Journal of the Acoustical Society of America | 2010

Ocean bottom profiling with ambient noise: A model for the passive fathometer

James Traer; Peter Gerstoft; William S. Hodgkiss

A model is presented for the complete passive fathometer response to ocean surface noise, interfering discrete noise sources, and locally uncorrelated noise in an ideal waveguide. The leading order term of the ocean surface noise contribution produces the cross-correlation of vertical multipaths and yields the depth of sub-bottom reflectors. Discrete noise incident on the array via multipaths gives multiple peaks in the fathometer response. These peaks may obscure the sub-bottom reflections but can be attenuated with use of minimum variance distortionless response (MVDR) steering vectors. The seabed critical angle introduces discontinuities in the spatial distribution of distant surface noise and may introduce spurious peaks in the passive fathometer response. These peaks can be attenuated by beamforming within a bandwidth limited by the array geometry and critical angle.
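The MVDR steering vectors mentioned above minimize output power subject to unit gain in the look direction. As a sketch of that standard weight formula only (the array geometry, wavelength, and noise covariance below are invented for illustration, not the paper's processing chain):

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a) for covariance R, steering vector a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Hypothetical 8-element vertical array, plane wave from a chosen vertical angle.
n, wavelength, d = 8, 1.5, 0.75            # elements, wavelength (m), spacing (m)
theta = np.deg2rad(20.0)                   # arrival angle from broadside
k = 2 * np.pi / wavelength
a = np.exp(1j * k * d * np.sin(theta) * np.arange(n))

# Placeholder noise covariance (identity = spatially white noise).
R = np.eye(n, dtype=complex)
w = mvdr_weights(R, a)
gain = abs(w.conj() @ a)                   # ≈ 1.0: the distortionless constraint
```

In practice R is estimated from noise snapshots and usually diagonally loaded; with R proportional to the identity, MVDR reduces to conventional beamforming.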


Proceedings of the National Academy of Sciences of the United States of America | 2016

Statistics of natural reverberation enable perceptual separation of sound and space

James Traer; Joshua H. McDermott

Significance: Sounds produced in the world reflect off surrounding surfaces on their way to our ears. Known as reverberation, these reflections distort sound but provide information about the world around us. We asked whether reverberation exhibits statistical regularities that listeners use to separate its effects from those of a sound’s source. We conducted a large-scale statistical analysis of real-world acoustics, revealing strong regularities of reverberation in natural scenes. We found that human listeners can estimate the contributions of the source and the environment from reverberant sound, but that they depend critically on whether environmental acoustics conform to the observed statistical regularities. The results suggest a separation process constrained by knowledge of environmental acoustics that is internalized over development or evolution.

In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
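A toy version of the reported decay structure, exponential envelopes whose RT60 (time to fall by 60 dB) varies with frequency band, can be synthesized in a few lines. The band frequencies and RT60 values below are illustrative placeholders, not measurements from the paper:

```python
import numpy as np

def synth_ir(fs=16000, dur=1.0, rt60_by_band=((500, 0.9), (4000, 0.3))):
    """Toy reverberant IR: per-band carriers with exponential decay.

    Each (f0, rt60) pair contributes a sinusoidal carrier whose envelope
    drops by 60 dB at t = rt60, mimicking frequency-dependent decay.
    """
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(1)
    ir = np.zeros_like(t)
    for f0, rt60 in rt60_by_band:
        carrier = np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
        envelope = 10 ** (-3 * t / rt60)   # -60 dB when t reaches rt60
        ir += carrier * envelope
    return ir
```

Real measured IRs are noise-like rather than tonal, but the envelope law is the point: the mid-frequency band here rings roughly three times longer than the high band, matching the qualitative regularity described above.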


Journal of the Acoustical Society of America | 2011

Coherent averaging of the passive fathometer response using short correlation time

James Traer; Peter Gerstoft

The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004. The passive fathometer response was computed with correlation times from 0.34 to 90 s and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees with a 1D model of SNR of the passive fathometer response in an ideal waveguide. In the 2004 experiment, the fathometer response showed the array depth varied periodically with an amplitude of 1 m and a period of 7 s consistent with wave driven motion of the array. This introduced a destructive interference, which prevents the SNR growing with increasing correlation time. A peak-tracking algorithm applied to the fathometer response of experimental data was used to remove this motion allowing the coherent passive fathometer response to be averaged over several minutes without destructive interference. Multirate adaptive beamforming, using 90 s correlation time to form adaptive steer vectors which were applied to 0.34 s data snapshots, increases the SNR of the passive fathometer response.


Attention Perception & Psychophysics | 2017

Headphone screening to facilitate web-based auditory experiments

Kevin Woods; Max H. Siegel; James Traer; Joshua H. McDermott

Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants but sacrifice control over sound presentation and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining whether online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase-cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
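The screening stimulus itself is simple to sketch. Assuming a single pure tone and a fixed attenuation for the "quietest" interval (the frequency, duration, and level values here are placeholders, not the published ones), the three stereo signals might be generated as:

```python
import numpy as np

def screening_tones(fs=44100, f=200.0, dur=0.5, quiet_db=-6.0):
    """Three stereo pure tones: one attenuated (the correct answer) and one
    presented 180 degrees out of phase across the channels."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * f * t)
    quiet = 10 ** (quiet_db / 20) * tone
    normal = np.stack([tone, tone], axis=1)       # in-phase stereo
    antiphase = np.stack([tone, -tone], axis=1)   # 180° out of phase
    soft = np.stack([quiet, quiet], axis=1)       # attenuated target
    return normal, antiphase, soft
```

Summed to mono, as happens acoustically at a single loudspeaker, the antiphase channels cancel, making that tone sound quietest and the task fail; over headphones each ear receives a full-level tone, so the attenuated interval is easy to pick.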


eLife | 2016

Avoiding a lost generation of scientists

Justin Q. Taylor; Peter Kovacik; James Traer; Philip Javier Zakahi; Christine Oslowski; Alik S. Widge; Christin A. Glorioso

By sharing their experiences, early-career scientists can help to make the case for increased government funding for researchers.


Journal of the Acoustical Society of America | 2018

Human recognition of environmental sounds is not always robust to reverberation

James Traer; Josh H. McDermott

Reverberation is ubiquitous in natural environments, but its effect on the recognition of non-speech sounds is poorly documented. To evaluate human robustness to reverberation, we measured its effect on the recognizability of everyday sounds. Listeners identified a diverse set of recorded environmental sounds (footsteps, animal vocalizations, vehicles moving, hammering, etc.) in an open set recognition task. For each participant, half of the sounds (randomly assigned) were presented in reverberation. We found the effect of reverberation to depend on the typical listening conditions for a sound. Sounds that are typically loud and heard in indoor environments, and which thus should often be accompanied by reverberation, were recognized robustly, with only a small impairment for reverberant conditions. In contrast, sounds that are either typically quiet or typically heard outdoors, for which reverberation should be less pronounced, produced a large recognition decrement in reverberation. These results demons...


Journal of the Acoustical Society of America | 2018

Human inference of force from impact sounds: Perceptual evidence for inverse physics

James Traer; Josh H. McDermott

An impact sound is determined both by material properties of the objects involved (e.g., mass, density, shape, and rigidity) and by the force of the collision. Human listeners can typically estimate the force of an impact as well as the material which has been struck. To investigate the underlying auditory mechanisms we played listeners audio recordings of two boards being struck and measured their ability to identify the board struck with more force. Listeners significantly outperformed models based on simple acoustic features (e.g., signal power or spectral centroid). We repeated the experiment with synthetic sounds generated from simulated object resonant modes and simulated contact forces derived from a spring model. Listeners could not distinguish synthetic from real recordings and successfully estimated simulated impact force. When the synthetic modes were altered (e.g., to simulate a harder material) listeners altered their judgments of both material and impact force, consistent with the physical i...
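One of the acoustic baselines mentioned, spectral centroid, has a precise definition: the power-weighted mean frequency of the signal. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def spectral_centroid(x, fs):
    """Power-weighted mean frequency of a real signal, in Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(x), 1 / fs)       # bin center frequencies
    return np.sum(freqs * spec) / np.sum(spec)
```

For a pure tone the centroid sits at the tone frequency; for impact sounds it crudely summarizes brightness, which is why it serves as a simple baseline rather than a model of force inference.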


Journal of the Acoustical Society of America | 2017

Investigating audition with a generative model of impact sounds

James Traer; Josh H. McDermott

When objects collide they vibrate and emit sound. Physical laws govern these collisions and subsequent vibrations. As a result, sound contains information about objects (density/hardness/size/shape), and the manner in which they collide (bouncing/rolling/scraping). Everyday experience suggests that human listeners have some ability to discern material and kinematics from impact sounds. However, the accuracy of these perceptual inferences remains unclear, and the underlying mechanisms are uncharacterized. Listeners could rely on stored templates for particular familiar objects. Alternatively, they could infer generative parameters for a sound via probabilistic inference in an internal model of the generative process. To explore these possibilities we constructed a generative model of impact sounds, modeling sounds as the convolution of a time-varying impact force with the impulse responses (IRs) of two objects. The force was modeled as a function of mass, hardness and impact velocity. IRs were measured from ...
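The generative structure described, a contact force convolved with object impulse responses built from decaying resonant modes, can be sketched compactly. The mode frequencies, decay times, and half-sine force pulse below are illustrative stand-ins, not the paper's measured IRs or its mass/hardness/velocity force model:

```python
import numpy as np

def modal_ir(freqs, decays, amps, fs=16000, dur=0.5):
    """Object IR as a sum of exponentially decaying resonant modes."""
    t = np.arange(int(fs * dur)) / fs
    return sum(a * np.exp(-t / d) * np.sin(2 * np.pi * f * t)
               for f, d, a in zip(freqs, decays, amps))

def impact_force(fs=16000, contact_ms=2.0, peak=1.0):
    """Toy contact-force pulse: a half-sine over the contact time."""
    n = int(fs * contact_ms / 1000)
    return peak * np.sin(np.pi * np.arange(n) / n)

fs = 16000
ir = modal_ir(freqs=[400, 930, 1650], decays=[0.12, 0.08, 0.05],
              amps=[1.0, 0.6, 0.3], fs=fs)
sound = np.convolve(impact_force(fs=fs), ir)   # impact sound = force * IR
```

In this framing, harder materials correspond to shorter contact times (a brighter force spectrum) and different mode decays, which is the kind of parameter manipulation the experiments above describe.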

Collaboration


Dive into James Traer's collaborations.

Top Co-Authors

Peter Gerstoft (University of California)
Josh H. McDermott (Massachusetts Institute of Technology)
Joshua H. McDermott (Massachusetts Institute of Technology)
Caglar Yardim (University of California)
Christin A. Glorioso (Massachusetts Institute of Technology)
Christine Oslowski (Bridgewater State University)
David P. Knobles (University of Texas at Austin)
Jiajun Wu (Massachusetts Institute of Technology)