Publication


Featured research published by Bom Jun Kwon.


Journal of the Acoustical Society of America | 2006

Effect of electrode configuration on psychophysical forward masking in cochlear implant listeners

Bom Jun Kwon; Chris van den Honert

Bipolar stimulation has been thought to be more beneficial than monopolar stimulation for speech coding in cochlear implants, on the basis of its more restricted current flow. The present study examined whether bipolar stimulation would indeed lead to reduced channel interaction in a behavioral forward-masking experiment with four Nucleus 24 users. The masker was fixed on one channel, and three masker levels, balanced for loudness between the configurations, were chosen. As expected, masking was maximal when the masker and probe channels were spatially close and decreased as they were separated. However, overall masking patterns did not consistently demonstrate sharper tuning with bipolar stimulation than with monopolar stimulation. This implies that the spatial extent of a bipolar current field is not consistently narrower than that of an equally loud monopolar stimulus; therefore, it should not be assumed that bipolar stimulation leads to reduced channel interaction. Notably, bipolar masking patterns appeared to vary more across channels than monopolar patterns, possibly because they are more strongly influenced by anatomical and neural irregularities near the electrode contacts. The present psychophysical results provide a theoretical basis for the widespread use (and success) of monopolar configurations by implant users.
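
As a rough illustration of how such a masking pattern can be summarized (with hypothetical threshold values, not data from the study), a masked-minus-unmasked threshold curve and a simple half-max width metric might look like this in Python:

```python
# Minimal sketch of summarizing a forward-masking spatial pattern.
# The threshold values below are hypothetical, not data from the study.
import numpy as np

probe_electrodes = np.arange(1, 11)          # probe positions along the array
unmasked = np.full(10, 40.0)                 # unmasked probe thresholds (dB, hypothetical)
masked = 40 + 12 * np.exp(-0.5 * ((probe_electrodes - 5) / 1.5) ** 2)  # masker on electrode 5

masking = masked - unmasked                  # amount of masking per probe channel
# A simple "tuning" summary: spatial spread where masking exceeds half its peak
half_max = masking.max() / 2
width = np.sum(masking >= half_max)          # number of electrodes above half-max
print(f"peak masking: {masking.max():.1f} dB, half-max width: {width} electrodes")
```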


Journal of the Acoustical Society of America | 2001

Consonant identification under maskers with sinusoidal modulation: masking release or modulation interference?

Bom Jun Kwon; Christopher W. Turner

The present study investigated the effect of envelope modulations in a background masker on consonant recognition by normal-hearing listeners. It is well known that listeners understand speech better under a temporally modulated masker than under a steady masker at the same level, due to masking release. The possibility of an opposite phenomenon, modulation interference, whereby speech recognition could be degraded by a modulated masker due to interference with auditory processing of the speech envelope, was hypothesized and tested under various speech and masker conditions. It was of interest whether modulation interference for speech perception, if it were observed, could be predicted by modulation masking, as found in psychoacoustic studies using nonspeech stimuli. Results revealed that masking release measurably occurred under a variety of conditions, especially when the speech signal maintained a high degree of redundancy across several frequency bands. Modulation interference was also clearly observed under several circumstances when the speech signal did not contain a high redundancy. However, the effect of modulation interference did not follow the expected pattern from psychoacoustic modulation masking results. In conclusion, (1) both factors, modulation interference and masking release, should be accounted for whenever a background masker contains temporal fluctuations, and (2) caution needs to be taken when psychoacoustic theory on modulation masking is applied to speech recognition.
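
For readers unfamiliar with the stimuli, the following Python sketch shows one common way to construct a steady masker and a sinusoidally amplitude-modulated masker equated in long-term RMS level; the parameter values are illustrative, not those of the study:

```python
# Sketch: a steady noise masker vs. a sinusoidally amplitude-modulated one at
# (approximately) equal long-term RMS. Parameters are illustrative only.
import numpy as np

fs = 22050                      # sample rate (Hz)
dur, fm, depth = 1.0, 8.0, 1.0  # duration (s), modulation rate (Hz), depth (0..1)
t = np.arange(int(fs * dur)) / fs

noise = np.random.randn(t.size)
envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)   # sinusoidal envelope
modulated = noise * envelope

# Scale the modulated masker so both maskers have the same RMS level
modulated *= np.sqrt(np.mean(noise**2) / np.mean(modulated**2))
print(np.sqrt(np.mean(noise**2)), np.sqrt(np.mean(modulated**2)))  # ~equal
```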


Journal of the Acoustical Society of America | 1998

Frequency-weighting functions for broadband speech as estimated by a correlational method

Christopher W. Turner; Bom Jun Kwon; Chiemi Tanaka; Jennifer Knapp; Jodi L. Hubbartt; Karen A. Doherty

The relative contributions of various regions of the frequency spectrum to speech recognition were assessed with a correlational method [K. A. Doherty and C. W. Turner, J. Acoust. Soc. Am. 100, 3769–3773 (1996)]. The speech materials employed were the 258-item set of the Nonsense Syllable Test. The speech was filtered into four frequency bands and a random level of noise was added to each band on each trial. A point-biserial correlation was computed between the signal-to-noise ratio in each band on the trials and the listeners' responses, and these correlations were then taken as estimates of the relative weights for each frequency band. When the four bands were presented separately, the correlations for each band were approximately equal; however, when the four bands were presented in combination, the correlations were quite different from one another, implying that in the broadband case listeners relied much more on some bands than on others. It is hypothesized that these differences reflect the way in which listeners combine and attend to speech information across various frequency regions. The frequency-weighting functions as determined by this method were highly similar across all subjects, suggesting that normal-hearing listeners use similar frequency-weighting strategies in recognizing speech.
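
A minimal simulation of the correlational method, assuming a toy psychometric model and simulated trial data rather than the study's, could look like this:

```python
# Sketch of the correlational weighting idea: correlate each band's trial-by-trial
# SNR with correct/incorrect responses. Data below are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 258, 4
snr = rng.uniform(-10, 10, size=(n_trials, n_bands))     # random SNR per band per trial

true_weights = np.array([0.1, 0.2, 0.5, 0.2])            # hypothetical listener weights
p_correct = 1 / (1 + np.exp(-(snr @ true_weights) / 2))  # toy psychometric model
correct = rng.random(n_trials) < p_correct               # binary responses

# Point-biserial correlation = Pearson correlation with a binary variable
weights = [np.corrcoef(snr[:, b], correct)[0, 1] for b in range(n_bands)]
print(np.round(weights, 2))   # larger correlation -> heavier reliance on that band
```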


Behavior Research Methods | 2012

AUX: A scripting language for auditory signal processing and software packages for psychoacoustic experiments and education

Bom Jun Kwon

This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community—both those with programming backgrounds and those without.
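
AUX's actual syntax is not reproduced here; as a rough Python analogue of the descriptive, script-like signal specification the abstract describes (with signal generation kept separate from any experiment interface), consider:

```python
# Not AUX syntax: a rough Python analogue of scripting a stimulus descriptively,
# keeping signal generation separate from any experiment user interface.
import numpy as np

fs = 44100

def tone(freq, dur):
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * freq * t)

def ramp(x, ms=10):
    n = int(fs * ms / 1000)
    env = np.ones(x.size)
    env[:n] = np.linspace(0, 1, n)
    env[-n:] = np.linspace(1, 0, n)
    return x * env

# "Script": a 1-kHz, 300-ms tone with 10-ms ramps, followed by 100 ms of silence
stimulus = np.concatenate([ramp(tone(1000, 0.3)), np.zeros(int(fs * 0.1))])
```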


Journal of the Acoustical Society of America | 2016

Effects of auditory training on cochlear implant users’ gender categorization

Bom Jun Kwon; Qian-Jie Fu

Nowadays, the benefits of cochlear implants (CIs) for individuals with a substantial degree of hearing loss are widely demonstrated in clinical applications, in terms of both speech recognition and speech production. However, a body of evidence indicates that CI users have a compromised ability to accurately categorize the gender of a voice (e.g., Fuller et al., J. Assoc. Res. Otol. 15, 1037–1048, 2014), which may be an obstacle to overcome to improve executive functions for communication with the device. While it is known that CI users are capable of utilizing the fundamental frequency (F0) of the voice as the primary determinant of gender categorization, poor differential selectivity in F0 with the device contributes to the compromised ability. Another vocal characteristic, vocal tract length (VTL), which is known to play an important role in gender categorization by normal-hearing (NH) listeners, is largely ignored by CI users. Considering that the VTL information, grossly reflected in formants, may be transmit...
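
As background on why VTL matters, a standard uniform-tube approximation (a textbook model, not the study's method) shows formant frequencies scaling inversely with VTL:

```python
# Back-of-the-envelope illustration (standard tube model, not the study's method):
# formant frequencies scale inversely with vocal tract length (VTL).
c = 35000.0            # speed of sound in warm moist air, cm/s (approximate)

def formants(vtl_cm, n=3):
    # Uniform tube closed at one end: resonances at odd quarter-wavelengths
    return [(2 * k - 1) * c / (4 * vtl_cm) for k in range(1, n + 1)]

male, female = 17.0, 14.5                   # illustrative VTLs in cm
print([round(f) for f in formants(male)])   # ~[515, 1544, 2574] Hz
print([round(f) for f in formants(female)]) # ~[603, 1810, 3017] Hz (scaled by 17/14.5)
```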


Journal of the Acoustical Society of America | 2015

Amplitude fluctuations in a masker influence lexical segmentation in cochlear implant users

Trevor T. Perry; Bom Jun Kwon

Normal-hearing listeners show masking release, or better speech understanding in a fluctuating-amplitude masker than in a steady-amplitude masker, but most cochlear implant (CI) users consistently show little or no masking release even in artificial conditions where masking release is highly anticipated. The current study examined the hypothesis that the reduced or absent masking release in CI users is due to disruption of linguistic segmentation cues. Eleven CI subjects completed a sentence keyword identification task in a steady masker and a fluctuating masker with dips timed to increase speech availability. Lexical boundary errors in their responses were categorized as consistent or inconsistent with the use of the metrical segmentation strategy (MSS). Subjects who demonstrated masking release showed greater adherence to the MSS in the fluctuating masker compared to subjects who showed little or no masking release, while both groups used metrical segmentation cues similarly in the steady masker. Based on the characteristics of the segmentation cues, the results are interpreted as evidence that CI listeners showing little or no masking release are not reliably segregating speech from competing sounds, further suggesting that one challenge faced by CI users listening in noisy environments is a reduction of reliable segmentation cues.
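
The MSS-based error coding can be sketched as follows; the rule (boundary insertions before strong syllables and deletions before weak syllables count as MSS-consistent) follows the general metrical segmentation logic, while the coding details here are illustrative:

```python
# Sketch of scoring lexical boundary errors (LBEs) against the metrical
# segmentation strategy (MSS). The coding details here are illustrative.

def mss_consistent(error_type: str, syllable_stress: str) -> bool:
    """error_type: 'insertion' or 'deletion'; syllable_stress: 'strong' or 'weak'."""
    if error_type == "insertion":
        return syllable_stress == "strong"   # boundary inserted before strong syllable
    if error_type == "deletion":
        return syllable_stress == "weak"     # boundary deleted before weak syllable
    raise ValueError(error_type)

errors = [("insertion", "strong"), ("deletion", "weak"), ("insertion", "weak")]
rate = sum(mss_consistent(e, s) for e, s in errors) / len(errors)
print(f"MSS-consistent proportion: {rate:.2f}")
```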


Journal of the Acoustical Society of America | 2014

Sound effects with AUditory syntaX—A high-level scripting language for sound processing

Bom Jun Kwon

AUditory syntaX (AUX) is a high-level scripting language specifically crafted for the generation and processing of auditory signals (Kwon, 2012; Behav Res Methods 44, 361–373). AUX does not require knowledge of or prior experience in computer programming. Rather, AUX provides an intuitive and descriptive environment where users focus on perceptual components of the sound, without tedious tasks unrelated to perception, such as the memory management or array handling often required in other computer languages popular in auditory science, such as C++ or MATLAB. This presentation provides a demonstration of AUX for the generation and processing of various sound effects, particularly “fun” or “spooky” sounds. Processing methods for sound effects widely used in the arts, film, and other media, such as reverberation, echoes, modulation, pitch shift, and flanger/phaser, will be reviewed, and AUX code to generate those effects, along with the resulting sounds, will be demonstrated.
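
AUX code itself is not shown here; as a language-neutral sketch of one of the listed effects, a simple feedback echo in Python (with illustrative parameter values) might be:

```python
# Sketch of a simple echo effect (feedback delay line); parameters are illustrative.
import numpy as np

def echo(x, fs, delay_s=0.25, feedback=0.5, n_repeats=4):
    d = int(fs * delay_s)
    y = np.zeros(x.size + d * n_repeats)
    y[:x.size] = x
    gain = 1.0
    for k in range(1, n_repeats + 1):       # add successively quieter copies
        gain *= feedback
        y[k * d : k * d + x.size] += gain * x
    return y

fs = 22050
click = np.zeros(fs); click[0] = 1.0        # unit impulse as a test signal
out = echo(click, fs)                       # impulse + decaying echoes every 250 ms
```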


Journal of the Acoustical Society of America | 2007

Modulation interference in speech recognition by cochlear implant users

Bom Jun Kwon; Peggy B. Nelson

When listening to speech in a background masker, normal-hearing listeners take advantage of envelope fluctuations or dips in the masker (i.e., the moments when the SNR is instantaneously favorable). Cochlear implant listeners, however, do not exhibit such ability. In a previous study [Nelson et al., J. Acoust. Soc. Am. 113, 961–968 (2003)], speech recognition performance with modulated maskers was similar to or slightly worse than that with a steady noise. Clinical observation indicates that implant listeners usually have more difficulty understanding speech in modulated backgrounds. In the present study, the recognition of IEEE sentences by Nucleus recipients was measured in a variety of backgrounds. In tightly controlled conditions via direct stimulation, performance was often substantially poorer with modulated backgrounds (with differences in score as large as 20–30%), strongly indicating that implant listeners are subject to modulation interference [B. J. Kwon and C. W. Turner, J. Acoust. Soc. Am. 110, 1130–1140 (2001)]. ...
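
The "dip listening" idea can be illustrated by comparing short-term and long-term SNR in a modulated masker; the following Python sketch uses illustrative signals, not the study's stimuli:

```python
# Sketch of "dip listening": the short-term SNR in a modulated masker is
# intermittently favorable even when the long-term SNR is fixed. Values illustrative.
import numpy as np

fs = 22050
t = np.arange(fs) / fs
speech = np.random.randn(fs) * 0.3                              # stand-in for speech
masker = np.random.randn(fs) * (1 + np.sin(2 * np.pi * 4 * t))  # 4-Hz modulated noise

frame = int(0.02 * fs)                                          # 20-ms analysis frames
def rms(x): return np.sqrt(np.mean(x**2))
snr_frames = [20 * np.log10(rms(speech[i:i+frame]) / (rms(masker[i:i+frame]) + 1e-12))
              for i in range(0, fs - frame, frame)]
print(f"max short-term SNR: {max(snr_frames):.1f} dB vs long-term "
      f"{20*np.log10(rms(speech)/rms(masker)):.1f} dB")
```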


Journal of the Acoustical Society of America | 2005

Utilizing different channels for multiple inputs in cochlear implant processing

Bom Jun Kwon

While cochlear implants successfully provide auditory sensation for deaf people, speech understanding through the device is compromised when there is background noise or competing sounds, partly due to implant users' reduced ability in auditory grouping. The present study investigates whether providing multiple streams of input on different channels would facilitate auditory grouping, thereby assisting speech understanding in competing sounds. In acoustic hearing, presenting two streams of input (such as speech and noise) in spectrally separate channels generally facilitates grouping; in electric hearing, however, the outcome is difficult to predict, and separating the streams could even be detrimental: channel interactions inferred from excitation patterns are severe, and for a given SNR the amount of electric current delivered to the cochlea for the noise would be much higher, contaminating the target more effectively. Results from consonant identification measured in a variety of sp...
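
A purely conceptual sketch of such a channel allocation (a hypothetical scheme, not the study's processing strategy) might route two streams to disjoint, interleaved electrode subsets:

```python
# Conceptual sketch (hypothetical scheme, not the study's processor): routing two
# input streams to disjoint subsets of electrode channels in an interleaved map.
n_channels = 22                                  # e.g., a Nucleus-style array
speech_channels = list(range(0, n_channels, 2))  # even-indexed electrodes
noise_channels = list(range(1, n_channels, 2))   # odd-indexed electrodes

def route(stream_envelopes, channels, n_channels=n_channels):
    """Place one stream's per-band envelopes onto its assigned electrodes."""
    frame = [0.0] * n_channels
    for ch, amp in zip(channels, stream_envelopes):
        frame[ch] = amp
    return frame

speech_frame = route([0.8] * len(speech_channels), speech_channels)
noise_frame = route([0.3] * len(noise_channels), noise_channels)
```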


Journal of the Acoustical Society of America | 2003

Growth of interleaved masking patterns for cochlear implant listeners at different stimulation rates

Bom Jun Kwon; Chris van den Honert; Wendy Parkinson

This study investigates the pattern of growth of masking (GOM) for interleaved masking with Nucleus cochlear implant users. In an interleaved masking paradigm, where the masker and probe overlap in the same time window, the masker may have contrasting effects: it may increase the threshold (as a masker normally does) or decrease it due to a neural summation effect that facilitates detection of the probe. Several stimulation rates and masker levels were tested to examine which phenomenon would occur under which conditions. The results indicated that, in most of the conditions, the amount of masking was positive, i.e., the facilitating effect was not consistently observed. However, the slope of the GOM appears to depend on the stimulation rate: the higher the stimulation rate, the lower the slope, implying that the facilitating effect might always be present and make a bigger impact on overall masking as the stimulation rate becomes higher. The amount of masking was also often nonzero (positive) even when t...
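
A GOM slope is simply the slope of masked threshold against masker level; a minimal sketch with hypothetical data points:

```python
# Sketch of summarizing growth of masking (GOM) by the slope of masked threshold
# vs. masker level. The data points below are hypothetical.
import numpy as np

masker_level = np.array([40.0, 50.0, 60.0])        # masker levels (arbitrary units)
masked_threshold = np.array([22.0, 30.0, 39.0])    # probe thresholds (hypothetical)

slope, intercept = np.polyfit(masker_level, masked_threshold, 1)
print(f"GOM slope: {slope:.2f}")   # a shallower slope at higher stimulation rates
                                   # would suggest a growing facilitative effect
```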
