Publication


Featured research published by Shinae Kang.


Conference of the International Speech Communication Association | 2016

Relationships Between Functional Load and Auditory Confusability Under Different Speech Environments.

Shinae Kang; Clara Cohen

Functional load (FL) is an information-theoretic measure that captures a phoneme’s contribution to successful word identification. Experimental findings have shown that it can help explain patterns in perceptual accuracy. Here, we ask whether the relationship between FL and perception has larger consequences for the structure of a language’s lexicon. Since reducing FL minimizes the risk of misidentifying a word in the case where a listener inaccurately perceives the initial phoneme, we predicted that in spoken language, where perceptual accuracy is important for successful communication, the lexicon will be structured to reduce FL in auditorily confusable initial phonemes more than in written language. To test this prediction, we compared FL of all initial phonemes in spoken and academic written genres of the COCA corpus. We found that FL in phoneme pairs in the spoken corpus is overall higher and more variable than in the academic corpus, a natural consequence of the smaller lexical inventory characteristic of spoken language. In auditorily confusable pairs, however, this difference is relatively reduced, such that spoken FL decreases relative to academic FL. We argue that this reflects a pressure in spoken language to use words for which inaccurate perception does minimal damage to word identification.
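The abstract does not spell out how FL is computed. One common entropy-based formalization (following Surendran and Niyogi's corpus measure, offered here only to illustrate the kind of quantity involved, not necessarily the exact measure used in the paper) defines the FL of a contrast between phonemes x and y as the relative drop in entropy when the two are merged:

FL(x, y) = [ H(L) − H(L_{x=y}) ] / H(L)

where H(L) is the entropy of the lexicon (or a language model over it) and L_{x=y} is the same lexicon with x and y collapsed into a single phoneme. On this reading, a low-FL initial contrast is one whose loss would create few additional word confusions, which is what the prediction about auditorily confusable initial phonemes turns on.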


Journal of the Acoustical Society of America | 2014

Effects of linguistic structure on perceptual attention given to different speech units

Shinae Kang; Keith Johnson

Listeners can shift their attention to speech units of different sizes during speech perception. This study extends that claim and investigates whether linguistic structure affects such attention. Since English has a larger syllable inventory than Korean or Japanese, each phoneme plays a larger functional role; listeners also differ in phonological awareness because of differences in orthography. We focus on the effect of perceptual attention on the perceptibility of intervocalic consonant clusters (VCCV) and on whether it varies cross-linguistically with these structural factors. We first recorded eight talkers saying VC- and CV-syllables and spliced the syllables to create non-overlapping VC.CV stimuli. Listeners in three language groups (English/Korean/Japanese) participated in a 9-alternative forced-choice perception task. They identified the CC as one of 9 alternatives (“pt”, “pk”, “pp”, etc.) and, in an attention-manipulated condition, did the same task while also monitoring for target talkers. T...


Journal of the Acoustical Society of America | 2015

Does knowledge of probabilistic pronunciation patterns aid perception?

Shinae Kang; Clara Cohen; Rozina Fonyo

Words which are probable in their morphological paradigms tend to have lengthened affixes (Cohen, 2014; Kuperman et al., 2007). Here, we ask whether listeners use the pattern to aid perception. If so, paradigmatically probable words with lengthened affixes should be perceived more quickly than similarly lengthened improbable words. In two experiments (Experiment 1: phoneme monitoring; Experiment 2: lexical decision), we measured listeners’ reaction time (RT) to 50 English verbs with [-s] suffixes (e.g., looks, breaks). Each suffix was adjusted in duration to either a normalized proportion of stem duration, shortened by 25% of the normalized duration, or lengthened by 25%. In Experiment 2, paradigmatic probability did not affect RT, but short words had slower RTs than normalized or lengthened words, suggesting that generally reduced suffix duration impedes perception. In Experiment 1, RT decreased with increased probability for the long condition, but not for the normalized condition. This suggests that a ...
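Read concretely (an illustrative paraphrase of the duration manipulation, using placeholder symbols rather than values from the paper): if d_stem is a word's stem duration and r the reference proportion used for normalization, the three suffix-duration conditions are

d_normalized = r × d_stem
d_short = 0.75 × d_normalized
d_long = 1.25 × d_normalized

so the shortened and lengthened conditions differ from the normalized condition by a quarter of the normalized suffix duration in either direction.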


Journal of the Acoustical Society of America | 2013

Perceptual compensation with familiar and unfamiliar rounding coarticulation

Keith Johnson; Shinae Kang; Emily Cibelli

We compared the integration of three kinds of contextual information in the perception of the fricatives [s] and [∫]. We asked American English listeners to identify sounds on an [s] to [∫] continuum and manipulated (1) the vowel context of the fricative ([Ce], [Co], [Cœ]), (2) the original fricative of the CV ([s] vs [∫]), and (3) the modality of the stimulus (audio-only, AV). There was a large compensation for coarticulation effect on perception—subjects responded with “s” more often when the following vowel was round. Interestingly, and perhaps significantly, perceptual compensation was not as great with the less familiar vowel [œ] even when listeners saw the face. Measurements of lip rounding in these stimuli show that [o] and [œ] have about the same degree and type of rounding over the CV. In a second experiment, we measured reaction time to audio-visual mismatches in these stimuli (again in a fricative identification task). We found that mismatches of audio and video consonant information slowed rea...


Journal of the Acoustical Society of America | 2013

Visual cue salience depends on place of articulation and syllabic context

Shinae Kang; Keith Johnson

This study examines the audio-visual perceptual intelligibility of consonants in intervocalic clusters (VC1C2V). Previous studies have yielded inconsistent findings on the perceptual salience of different stop consonants, and very few have tested salience in clusters. Consequently, it has been unclear whether greater or lesser perceptual salience leads to a greater degree of place assimilation. In Korean, labials are often produced with more gestural overlap than velars in C1. I tested whether labials are perceptually more or less salient in both audio and audio-visual conditions. VC and CV syllables spoken by both English and Korean speakers were first embedded in noise and spliced together to create non-overlapping VCCV sequences. Korean listeners identified the two consonants in either audio or AV presentations. A confusion matrix analysis for each stop consonant shows that in C1 there is an asymmetric improvement with the addition of video for labial consonants only, while in C2 this asymmetry was not found. The result suggests that listeners make differential use of visual cues depending on place of articulation and syllabic context. It also supports the talker enhancement view of sound change, which assumes that talkers are aware of perceptual salience and enhance (with less gestural overlap) the weak contrast.
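The confusion-matrix comparison described above can be sketched in a few lines: tally stimulus-response pairs separately by position (C1 vs. C2) and modality (audio vs. AV), then compare per-consonant accuracy across modalities. The sketch below is a generic illustration with made-up trial tuples and labels; it is not the study's data format or analysis script.

from collections import Counter

# Each trial is (position, modality, stimulus consonant, response consonant).
# These tuples are placeholders, not data from the study.
trials = [
    ("C1", "audio", "p", "t"),
    ("C1", "AV",    "p", "p"),
    ("C2", "audio", "k", "k"),
    # ... one tuple per experimental trial
]

def confusion(trials, position, modality):
    # Count stimulus -> response pairs for one position/modality cell.
    counts = Counter()
    for pos, mod, stim, resp in trials:
        if pos == position and mod == modality:
            counts[(stim, resp)] += 1
    return counts

def accuracy(counts, consonant):
    # Proportion of trials with this stimulus consonant identified correctly.
    total = sum(n for (stim, _), n in counts.items() if stim == consonant)
    return counts.get((consonant, consonant), 0) / total if total else float("nan")

# AV benefit for a labial in C1: accuracy gain from adding video.
av_benefit_p_c1 = accuracy(confusion(trials, "C1", "AV"), "p") - \
                  accuracy(confusion(trials, "C1", "audio"), "p")
print(av_benefit_p_c1)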


Journal of the Acoustical Society of America | 2012

Audio/visual compensation for coarticulation

Shinae Kang; Greg Finley; Keith Johnson; Carson Miller Rigoli

This study investigates how visual phonetic information affects compensation for coarticulation in speech perception. A series of CV syllables with a fricative continuum from [s] to [∫] before [a], [u], and [y] was overlaid with a video of a face saying [s]V, [∫]V, or a visual blend of the two fricatives. We made separate movies for each vowel environment. We collected [s]/[∫] boundary locations from 24 native English speakers. In a test of audio-visual integration, [∫] videos showed significantly lower boundary locations (more [∫] responses) than [s] videos (t[23] = 2.9, p < 0.01). Listeners also showed compensation for coarticulation with the round vowel [u] (t > 3, p < 0.01), but not with the unfamiliar vowel [y]. This pattern of results was similar to our findings from an audio-only version of the experiment, implying that the compensation effect was not strengthened by seeing the lip rounding of [y].


Journal of the Acoustical Society of America | 2012

Language specific compensation for coarticulation

Keith Johnson; Shinae Kang; Greg Finley; Carson Miller Rigoli; Elsa Spinelli

This paper reports an experiment testing whether compensation for coarticulation in speech perception is mediated by linguistic experience. The stimuli are a set of fricative-vowel syllables on continua from [s] to [∫] with the vowels [a], [u], and [y]. Responses from native speakers of English and French (20 in each group) were compared. Native speakers of French are familiar with the production of the rounded vowel [y], while this vowel was unfamiliar to the native English speakers. Both groups showed compensation for coarticulation (both t > 5, p < 0.01) with the vowel [u] (more “s” responses, indicating that in the context of a round vowel, fricatives with a lower spectral center of gravity were labeled “s”). The French group also showed a compensation effect in the [y] environment (t[20] = 3.48, p < 0.01). English listeners also showed a tendency for more subject-to-subject variation in the [y] boundary locations than did the French listeners (Levene's test of equality of variance, p < 0.1). The results ...


Speech Communication | 2016

Effects of native language on compensation for coarticulation

Shinae Kang; Keith Johnson; Gregory P. Finley


Journal of the Acoustical Society of America | 2016

Relationship between Perceptual Accuracy and Information Measures: A Cross-linguistic Study

Shinae Kang


The Mental Lexicon | 2018

Flexible perceptual sensitivity to acoustic and distributional cues

Clara Cohen; Shinae Kang

Collaboration


Dive into Shinae Kang's collaborations.

Top Co-Authors

Clara Cohen, University of California
Emily Cibelli, University of California
John F. Houde, University of California