Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michelle C. Vigeant is active.

Publication


Featured research published by Michelle C. Vigeant.


Journal of the Acoustical Society of America | 2008

Investigations of Incorporating Source Directivity Into Room Acoustics Computer Models to Improve Auralizations

Michelle C. Vigeant

Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine, first, its effect in room acoustics computer models and, second, how to better incorporate the directional source characteristics to improve auralizations. The room acoustics computer model was initially validated in terms of accurately incorporating the input source directivity. The next study demonstrated that using directional sources over an omnidirectional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. A recently proposed technique for creating auralizations using multichannel anechoic recordings has been examined with numerous subjective studies, applied to both solo instruments and an orchestra. Through these studies, this process was shown to be effective in terms o...
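
As a hedged illustration of the core idea, the sketch below weights each modeled sound path by a direction-dependent source gain, which is the basic mechanism by which room acoustics computer models depart from an omnidirectional source. All names, the table format, and the toy directivity pattern are hypothetical; this is not the dissertation's implementation.

```python
import numpy as np

def directivity_gain_db(azimuth_deg, directivity_table):
    """Nearest-neighbour lookup in a measured directivity balloon.

    directivity_table: dict mapping azimuth (deg, on a 10-degree grid) to
    gain in dB relative to the on-axis direction. Hypothetical format.
    """
    key = int(round(azimuth_deg / 10.0) * 10) % 360
    return directivity_table[key]

# Toy example: a trumpet-like pattern, strongest on axis (0 deg).
table = {a: -0.05 * min(a, 360 - a) for a in range(0, 360, 10)}

# An omnidirectional path level of 70 dB leaving the source at
# 120 deg off axis is attenuated by the source's directivity:
level_omni = 70.0
level_directional = level_omni + directivity_gain_db(120, table)
print(f"{level_directional:.1f} dB")  # 64.0 dB
```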


Cochlear Implants International | 2015

Reverberation negatively impacts musical sound quality for cochlear implant users

Alexis T. Roy; Michelle C. Vigeant; Tina Munjal; Courtney Carver; Patpong Jiradejvong; Charles J. Limb

Objective: Satisfactory musical sound quality remains a challenge for many cochlear implant (CI) users. In particular, questionnaires completed by CI users suggest that reverberation due to room acoustics can negatively impact their music listening experience. The objective of this study was to more specifically characterize the effect of reverberation on musical sound quality in CI users, normal hearing (NH) non-musicians, and NH musicians using a previously designed assessment method, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). Methods: In this method, listeners were randomly presented with an anechoic musical segment and five versions of this segment in which increasing amounts of reverberation were artificially added. Participants listened to the six reverberation versions and provided sound quality ratings between 0 (very poor) and 100 (excellent). Results: Results demonstrated that, on average, CI users and NH non-musicians preferred the sound quality of anechoic versions to more reverberant versions. In comparison, NH musicians could be delineated into those who preferred the sound quality of anechoic pieces and those who preferred pieces with some reverberation. Discussion/Conclusion: This is the first study, to our knowledge, to objectively compare the effects of reverberation on musical sound quality ratings in CI users. These results suggest that musical sound quality for CI users can be improved by non-reverberant listening conditions and musical stimuli in which reverberation is removed.
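
The stimulus-construction idea, adding increasing amounts of artificial reverberation to an anechoic segment, can be sketched as follows. This is not the CI-MUSHRA toolchain itself; it is a minimal Python illustration assuming a synthetic exponentially decaying impulse response.

```python
import numpy as np

def synthetic_rir(rt60_s, fs=44100):
    """Exponentially decaying Gaussian noise reaching -60 dB at rt60_s."""
    t = np.arange(int(rt60_s * fs)) / fs
    decay = 10.0 ** (-3.0 * t / rt60_s)          # -60 dB at t = rt60
    return np.random.default_rng(0).standard_normal(t.size) * decay

def add_reverb(anechoic, rt60_s, fs=44100):
    """Convolve the dry signal with a synthetic RIR and normalize level."""
    wet = np.convolve(anechoic, synthetic_rir(rt60_s, fs))[: len(anechoic)]
    return wet / np.max(np.abs(wet))

# Five versions with increasing reverberation, as in the rating task
# (the RT values here are illustrative, not the study's):
anechoic = np.random.default_rng(1).standard_normal(44100)  # placeholder signal
versions = [add_reverb(anechoic, rt) for rt in (0.2, 0.5, 1.0, 2.0, 4.0)]
```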


Journal of the Acoustical Society of America | 2009

Investigation of the just noticeable difference of the clarity index for music, C80.

Meghan J. Ahearn; Matthew J. Schaeffler; Robert D. Celmer; Michelle C. Vigeant

The just noticeable difference (JND) of the clarity index for music, C80, has been reported to be approximately 1 dB, but there is limited research to support this value. A subjective study was conducted to verify this JND using a total of 51 musically trained subjects. Test signals were created using digital delays, equalizers, and reverberation units, and sent out to eight loudspeakers distributed throughout an anechoic chamber. Three motifs and two C80 base-cases were tested: base-case (1) had a C80 of −1 dB (1 kHz) with a 2.1-s reverberation time (RT), while base-case (2) had a C80 of +3 dB (1 kHz) with a 1.6-s RT. Signals were presented in pairs, with the first signal being the base-case and the second having a positive difference ranging between 0.5 and 3.0 dB. Control cases with no C80 differences were also presented. Results from all 51 subjects did not reveal a clear relationship between the percentage of subjects who heard a difference and the difference in decibels. However, when the data were filtered to include 17 of the...
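
For reference, C80 compares the energy arriving in the first 80 ms of a room impulse response to the energy arriving afterward. A minimal Python implementation of this standard ISO 3382 definition, with a synthetic sanity check against the study's first base-case:

```python
import numpy as np

def clarity_c80(rir, fs):
    """Clarity index C80 (dB): ratio of early (0-80 ms) to late (>80 ms)
    energy in a room impulse response, per the standard ISO 3382 definition."""
    n80 = int(0.080 * fs)
    energy = np.asarray(rir, dtype=float) ** 2
    return 10.0 * np.log10(energy[:n80].sum() / energy[n80:].sum())

# Synthetic check: a pure exponential decay with RT = 2.1 s yields a C80
# of roughly -1.6 dB, in the neighborhood of the -1 dB base-case above.
fs = 44100
t = np.arange(int(2.5 * fs)) / fs
rir = 10.0 ** (-3.0 * t / 2.1) * np.random.default_rng(0).standard_normal(t.size)
print(f"C80 = {clarity_c80(rir, fs):.1f} dB")
```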


Journal of the Acoustical Society of America | 2004

Differences in directional sound source behavior and perception between assorted computer room models

Michelle C. Vigeant; Lily M. Wang; Jens Holger Rindel

Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels is increased [J. H. Rindel, F. Otondo, and C. L. Christensen, Proceedings of the International Symposium on Room Acoustics: Design and Science 2004, Paper V01 (2004)]. Further studies evaluating the quality of auralizations using one-channel, four-channel, and 13-channel anechoic recordings have been pursued. The effect of changing the room's material properties was studied in relation to turning the source around 180 deg and on the range of acoustic parameters obtained from the four- and 13-channel versions. As the room becomes increasingly diffuse, the importance of the modeled directivity decreases when considering reverberation time. However, for the three other parameters evaluated (sound-pressure level, clarity index, and lateral fra...


Journal of the Acoustical Society of America | 2015

The effects of different test methods on the just noticeable difference of clarity index for music

Michelle C. Vigeant; Robert D. Celmer; Chris M. Jasinski; Meghan J. Ahearn; Matthew J. Schaeffler; Clothilde Giacomoni; Adam P. Wells; Caitlin I. Ormsbee

The just noticeable differences (JNDs) of room acoustics metrics are necessary for research and design of performing arts venues. The goal of this work was to evaluate the effects of different testing methods on the measured JND of clarity index for music (C80). An initial study was conducted to verify the findings of other published works that the C80 JND is approximately 1 dB, as currently listed in ISO 3382:2009 (International Organization for Standardization, Switzerland, 2009); however, the results suggested a higher value. In the second study, the effects of using two variations of the method of constant stimuli were examined, where one variation required the subjects to evaluate the pair of signals by listening to each of them in their entirety, while the second approach allowed the participants to switch back and forth in real-time. More consistent results were obtained with the latter variation, and the results indicated a C80 JND greater than 1 dB. In the final study, an extensive training period using the first variation was required, based on the second study, and the data were collected using the second variation. The analysis revealed that for the conditions used in this study (concert hall and chamber music hall) the C80 JND is approximately 3 dB.
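
One common way to extract a JND from constant-stimuli data like these is to fit a psychometric function to the proportion of "different" responses and read off a threshold point. The sketch below is illustrative only: the response rates are invented, the 75% criterion is one conventional choice among several, and the paper's actual analysis may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Two-parameter logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

delta_db = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])      # C80 differences tested
p_detect = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.90])  # hypothetical rates

(x0, k), _ = curve_fit(logistic, delta_db, p_detect, p0=[1.5, 2.0])

# The JND taken as the 75% point of the fitted curve:
jnd = x0 + np.log(1 / 0.75 - 1) / -k
print(f"JND approx {jnd:.1f} dB")
```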


Journal of the Acoustical Society of America | 2014

Gaps in the literature on the effects of aircraft noise on children’s cognitive performance

Matthew Kamrath; Michelle C. Vigeant

In the past two decades, several major studies have indicated that chronic aircraft noise exposure negatively impacts children’s cognitive performance. For example, the longitudinal Munich airport study (Hygge, Am. Psychol. Soc., 2002) demonstrated that noise adversely affects reading ability, memory, attention, and speech perception. Moreover, the cross-sectional RANCH study (Stansfeld, Lancet, 2005) found a linear correlation between extended noise exposure and reduced reading comprehension and recognition memory. This presentation summarizes these and other recent studies and discusses four key areas in need of further research (ENNAH Final Report Project No. 226442, 2013). First, future studies should account for all of the following confounding factors: socioeconomic variables, daytime and nighttime aircraft, road, and train noise, and air pollution. Second, multiple noise metrics should be evaluated to determine if the character of the noise alters the relationship between noise and cognition. Third...


Journal of the Acoustical Society of America | 2011

Study of the effects of different endpin materials on cello sound characteristics

Clinton R. Fleming; Cassey R. Stypowany; Robert D. Celmer; Michelle C. Vigeant

The purpose of this study was to investigate the effects of changing a cello's endpin material and boundary conditions on the instrument's sound and vibration characteristics. It was hypothesized that an endpin made of a denser material than stainless steel, which is traditionally used, would improve the tone quality of the cello. In terms of endpin boundary conditions, it was hypothesized that using a shorter endpin with fixed end conditions might also improve the vibration characteristics and sound radiation efficiency of the cello. Objective and subjective tests were conducted to examine the effects of the different endpin materials. Sound power level output and vibration measurements of a cellist playing on different endpins were obtained following ISO 3741. In general, sound power levels and measured vibrations were consistent across all endpins for all notes tested. For the subjective study, volunteer cellists played selected excerpts with the different endpins, not knowing which endpin they were using. ...
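
The reverberation-room measurement follows ISO 3741; its core diffuse-field relation can be sketched as below. The standard itself adds correction terms (meteorological conditions, room qualification, and so on) that are omitted here, so treat this as a simplified Sabine-based estimate rather than the standard's full procedure.

```python
import numpy as np

def sound_power_level(lp_mean_db, volume_m3, rt60_s):
    """Estimate sound power level Lw (dB re 1 pW) from the spatially
    averaged sound pressure level in a reverberation room.

    Uses the Sabine absorption area A = 0.161 V / T and the diffuse-field
    relation Lw = Lp + 10 lg(A) - 6 dB (with rho*c taken as 400 rayl).
    """
    A = 0.161 * volume_m3 / rt60_s        # equivalent absorption area, m^2
    return lp_mean_db + 10.0 * np.log10(A) - 6.0

# e.g. 78 dB average SPL in a 200 m^3 room with RT = 2 s:
print(f"Lw approx {sound_power_level(78.0, 200.0, 2.0):.1f} dB")
```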


Journal of the Acoustical Society of America | 2010

Investigation of the subjective impression of listener envelopment with both binaural recordings and auralizations.

Michelle C. Vigeant; Robert D. Celmer; Madison D. Ford; Carl K. Vogel

Listener envelopment (LEV) is the sense of being fully immersed in a sound field and can be used to compare the listening experience in different concert halls. LEV has been shown to correlate with the objective parameter late lateral sound level (GLL) through the use of simulated sound fields generated with delays and reverberators. The primary purpose of this study was to investigate this correlation using both binaural recordings made in a 900‐seat hall and auralizations made in an ODEON v9.20 model with both measured and predicted GLL values. In addition, the ratings of the actual recordings and simulations were compared to determine equivalency. A subjective study was carried out using 35 musically trained test participants who rated 24 stimuli, which varied as a function of both receiver position and hall setting. The ratings of the binaural recordings were found to have a linear correlation with both the measured and simulated GLL values, while the ratings of the auralizations were not found to hav...
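
For context, GLL (the late lateral sound level, written LJ in ISO 3382-1) relates late-arriving lateral energy to a free-field reference. A minimal sketch, assuming the figure-of-eight and 10-m free-field impulse responses are already measured; the study's own GLL values came from hall measurements and the ODEON model, not from this code.

```python
import numpy as np

def late_lateral_level(rir_fig8, rir_ref_10m, fs):
    """Late lateral sound level (dB): energy arriving after 80 ms at a
    laterally aimed figure-of-eight microphone, referenced to the
    free-field energy of the same source measured at 10 m."""
    n80 = int(0.080 * fs)
    late_lateral = np.sum(np.asarray(rir_fig8, dtype=float)[n80:] ** 2)
    reference = np.sum(np.asarray(rir_ref_10m, dtype=float) ** 2)
    return 10.0 * np.log10(late_lateral / reference)
```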


Journal of the Acoustical Society of America | 2018

The room impulse response in time, frequency, and space: Mapping spatial energy using spherical array beamforming techniques

Matthew T. Neal; Michelle C. Vigeant

The auditory perception of rooms is a multi-dimensional problem. Our hearing system interprets time, frequency, and spatial information from arriving room reflections, but traditionally, only the time and frequency domains are considered in room acoustic metrics and objective sound field analyses. This work aims to develop spatial visualizations of the energy in a room impulse response (RIR). With a spherical microphone array, a room’s energy can be mapped in full three dimensions. First, beamforming techniques are used to generate a set of directional RIRs from the spherical microphone array measurement. This set of directional RIRs is analogous to using a microphone with a directional beam pattern response, oriented individually at all points around a sphere. Then, these directional or beam RIRs are time windowed and band-pass filtered to create spatial energy maps of the room. Comparisons between a plane-wave beam pattern and a Dolph-Chebyshev beam pattern will be demonstrated in the context of RIR beamforming. As well, different strategies for normalizing peak energy amplitudes to either the direct sound or a spherical spreading condition will be compared. With these considerations, final results of these spatial energy visualizations and directional RIR animations will be demonstrated. [Work supported by NSF Award 1302741.]
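
A hedged sketch of the first step described in the abstract, reduced to the simplest delay-and-sum beamformer; the paper itself compares plane-wave and Dolph-Chebyshev designs, which are more involved than this.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def directional_rir(mic_rirs, mic_xyz, steer_unit_vec, fs):
    """Delay-and-sum beam RIR for one steering direction.

    mic_rirs: (n_mics, n_samples) array of RIRs measured at the array;
    mic_xyz: (n_mics, 3) microphone positions relative to the array center;
    steer_unit_vec: unit vector of the look direction.
    """
    n_mics, n_samp = mic_rirs.shape
    out = np.zeros(n_samp)
    for rir, pos in zip(mic_rirs, mic_xyz):
        # Compensate each mic's time advance along the steering direction
        # (np.roll's wrap-around is ignorable for these few-sample delays).
        delay = int(round(np.dot(pos, steer_unit_vec) / C * fs))
        out += np.roll(rir, delay)
    return out / n_mics

# Repeating this for many directions on a sphere, then time-windowing and
# band-pass filtering each beam RIR, yields the spatial energy maps.
```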


Journal of the Acoustical Society of America | 2018

Creation and characterization of an emotional speech database

Peter M. Moriarty; Michelle C. Vigeant; Rachel Wolf; Rick O. Gilmore; Pamela M. Cole

Paralinguistic features of speech communicate emotion in the human voice. In addition to semantic content, speakers imbue their messages with prosodic features, acoustic variations that listeners decode to extract meaning. Psychological science refers to these acoustic variations as affective prosody. Most studies of affective prosody obscure semantic content, although the resulting stimuli are less representative of naturally occurring emotional speech. The presented work details the creation of a naturalistic emotional speech database on which both acoustical analysis and a listening study were conducted. To this end, 55 adults were recorded speaking the same semantic content in happy, angry, and sad voices. Based on prior acoustic analyses of affective prosody, classic parameters were extracted, including pitch, loudness, and timing, as well as other low-level descriptors, and the acoustic features of each emotion were compared with published evidence and theory. Preliminary results indicate that these naturalistic speech samples yielded acoustic features that are congruent with prior experimental stimuli for anger and happiness, but less consistent for sadness. The results of the listening study indicated that listeners discriminated the intended emotions with 92% accuracy. The effort therefore yielded a database of emotionally salient acoustical information for further analyses. [Work supported by NIH-R21-104547.]
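
As a rough illustration of the kind of low-level descriptors mentioned (pitch, loudness, timing), the following sketch computes frame-wise RMS energy and an autocorrelation-based pitch estimate. The study's actual toolchain and feature set are not specified here, so every name and parameter below is an assumption.

```python
import numpy as np

def frame_features(signal, fs, frame_len=2048, hop=512, fmin=75.0, fmax=400.0):
    """Per-frame (rms, f0_hz) pairs: RMS as a loudness proxy and a
    simple autocorrelation pitch estimate over plausible speech lags."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        # Autocorrelation at non-negative lags only.
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((rms, fs / lag))
    return feats
```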

Collaboration


Dive into Michelle C. Vigeant's collaborations.

Top Co-Authors

David A. Dick, Pennsylvania State University
Matthew T. Neal, Pennsylvania State University
Jens Holger Rindel, Technical University of Denmark
Martin S. Lawless, Pennsylvania State University
Pamela M. Cole, Pennsylvania State University
Peter M. Moriarty, Pennsylvania State University
Rick O. Gilmore, Pennsylvania State University