Publication


Featured research published by Jiyoun Choi.


IEEE Transactions on Consumer Electronics | 2012

Universal view synthesis unit for glassless 3DTV

Jungsik Park; Jiyoun Choi; In Ryu; Jong-Il Park

As 3D content becomes more popular, its data size is growing, which may cause a critical bandwidth problem in deploying 3D broadcast services. View synthesis using depth maps can play a key role in avoiding this bandwidth problem. In this paper, we propose a universal view synthesis unit (UVSU), which allows fast depth image-based view synthesis by parallel processing on a programmable graphics processing unit (GPU). Assuming that a few stereo images and their corresponding disparity maps are given, we synthesize multiple virtual viewpoint images in real time. Moreover, the proposed UVSU can freely adjust to various requirements, such as the number of virtual viewpoints, their positions, and the interval between viewpoints, depending on the 3D display device. The effectiveness of our approach is verified through many experiments with various real images.
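The core operation that the UVSU parallelizes on the GPU is depth image-based rendering: each reference pixel is shifted by a disparity proportional to the virtual camera's position on the baseline. The sketch below is a minimal CPU reference of that idea in Python, assuming a rectified stereo setup; the function name, the sign convention of the shift, and the simple z-test are illustrative assumptions, not the paper's implementation.

```python
# Minimal CPU sketch of depth image-based view synthesis (DIBR), the kind of
# per-pixel operation the UVSU runs in parallel on the GPU. Names, the shift
# direction, and the z-test are illustrative assumptions, not the paper's code.
import numpy as np

def synthesize_view(image, disparity, alpha):
    """Forward-warp `image` to a virtual viewpoint.

    image     : (H, W, 3) uint8 reference view
    disparity : (H, W) float disparity map between the two reference cameras
    alpha     : virtual camera position on the baseline (0 = this reference
                view, 1 = the other reference view)
    """
    h, w = disparity.shape
    virtual = np.zeros_like(image)
    depth_buf = np.full((h, w), -np.inf)   # keep the nearest surface per pixel

    ys, xs = np.mgrid[0:h, 0:w]
    target_x = np.round(xs - alpha * disparity).astype(int)
    valid = (target_x >= 0) & (target_x < w)

    # Forward-warp pixels; larger disparity = closer surface wins (z-test).
    for y, x, tx in zip(ys[valid], xs[valid], target_x[valid]):
        d = disparity[y, x]
        if d > depth_buf[y, tx]:
            depth_buf[y, tx] = d
            virtual[y, tx] = image[y, x]
    return virtual

# Several virtual viewpoints at adjustable positions along the baseline
# (left_img and disp are hypothetical inputs):
# views = [synthesize_view(left_img, disp, a) for a in np.linspace(0.1, 0.9, 8)]
```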


Proceedings of the National Academy of Sciences of the United States of America | 2017

Early phonology revealed by international adoptees' birth language retention

Jiyoun Choi; Mirjam Broersma; Anne Cutler

Significance: Dutch adults who, as international adoptees, had heard Korean early in life but had forgotten it learned to identify an unfamiliar three-way Korean consonant distinction significantly faster than controls without such experience. Even adoptees who had been adopted at 3–5 mo of age showed the learning advantage. Thus, early exposure to spoken language, even in the first half-year of life, leaves traces that can facilitate later relearning. Before 6 mo, infants often discriminate foreign-language phonological contrasts better than adults can. This has been widely held to mean that infants younger than 6 mo have no native-language phonological knowledge with which to capture spoken input. Our findings are significant because they indicate that phonological knowledge is indeed in place before age 6 mo.

Until at least 6 mo of age, infants show good discrimination both for familiar phonetic contrasts (i.e., those heard in the environmental language) and for contrasts that are unfamiliar. Adult-like discrimination (significantly worse for nonnative than for native contrasts) appears only later, by 9–10 mo. This has been interpreted as indicating that infants have no knowledge of phonology until vocabulary development begins, after 6 mo of age. Recently, however, word recognition has been observed before age 6 mo, apparently decoupling the vocabulary and phonology acquisition processes. Here we show that phonological acquisition is also in progress before 6 mo of age. The evidence comes from retention of birth-language knowledge in international adoptees. In the largest such study to date, we recruited 29 adult Dutch speakers who had been adopted from Korea when young and had no conscious knowledge of the Korean language at all. Half were adopted at age 3–5 mo (before native-specific discrimination develops) and half at 17 mo or older (after word learning has begun). In a short intensive training program, we observe that adoptees (compared with 29 matched controls) more rapidly learn tripartite Korean consonant distinctions without counterparts in their later-acquired Dutch, suggesting that the adoptees retained phonological knowledge about the Korean distinction. The advantage is equivalent for the younger-adopted and the older-adopted groups, and both groups not only acquire the tripartite distinction for the trained consonants but also generalize it to untrained consonants. Although infants younger than 6 mo can still discriminate unfamiliar phonetic distinctions, this finding indicates that native-language phonological knowledge is nonetheless being acquired at that age.


Frontiers in Psychology | 2016

Effects of the Native Language on the Learning of Fundamental Frequency in Second-Language Speech Segmentation

Annie Tremblay; Mirjam Broersma; Caitlin E. Coughlin; Jiyoun Choi

This study investigates whether the learning of prosodic cues to word boundaries in speech segmentation is more difficult if the native and second/foreign languages (L1 and L2) have similar (though non-identical) prosodies than if they have markedly different prosodies (Prosodic-Learning Interference Hypothesis). It does so by comparing French, Korean, and English listeners’ use of fundamental-frequency (F0) rise as a cue to word-final boundaries in French. F0 rise signals phrase-final boundaries in French and Korean but word-initial boundaries in English. Korean-speaking and English-speaking L2 learners of French, who were matched in their French proficiency and French experience, and native French listeners completed a visual-world eye-tracking experiment in which they recognized words whose final boundary was or was not cued by an increase in F0. The results showed that Korean listeners had greater difficulty using F0 rise as a cue to word-final boundaries in French than French and English listeners. This suggests that L1–L2 prosodic similarity can make the learning of an L2 segmentation cue difficult, in line with the proposed Prosodic-Learning Interference Hypothesis. We consider mechanisms that may underlie this difficulty and discuss the implications of our findings for understanding listeners’ phonological encoding of L2 words.


Journal of the Acoustical Society of America | 2013

Individual differences in learning to perceive novel phonetic contrasts: How stable are they across time and paradigms?

Mirjam Broersma; Dan Dediu; Jiyoun Choi

Previous research has shown that learners differ widely in the success with which they learn to perceive novel phonetic contrasts. Little is known, however, about the stability of such differences over time and over paradigms. Are individuals who are good at learning to perceive novel speech sounds consistently good at it, or does the success of learning fluctuate over time, or with the use of different paradigms? First, we investigate the stability of individual differences over time by assessing performance during five (pre- and post-training) test moments on three separate days with one-week intervals. Second, we investigate the stability over paradigms by comparing the two most commonly used tests of speech sound perception, namely discrimination and identification. Seventy native speakers of Dutch participated in a series of training and test sessions, during which they were trained to perceive the Korean three-way lenis-fortis-aspirated contrasts /p-p*-ph/, /t-t*-th/, and /k-k*-kh/, which are difficult f...
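For context on the two paradigms compared above: discrimination performance is commonly summarized with a sensitivity score such as d'. The sketch below is a generic illustration of that scoring, with a hypothetical function and hypothetical trial counts; it is not the analysis pipeline used in this study.

```python
# Generic sketch of scoring a same-different discrimination test with d'
# (sensitivity), treating "different" trials as signal trials. This is an
# illustration of common practice, not the analysis reported in the paper.
from scipy.stats import norm

def d_prime(hits, n_different, false_alarms, n_same):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    so that rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (n_different + 1)
    fa_rate = (false_alarms + 0.5) / (n_same + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical listener: 38/48 "different" trials answered correctly,
# 6/48 "same" trials wrongly called different.
print(d_prime(hits=38, n_different=48, false_alarms=6, n_same=48))
```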


International Conference on Consumer Electronics | 2010

Real-time view synthesis system with multi-texture structure of GPU

Jiyoun Choi; Sae-Woon Ryu; Hong-Chang Shin; Jong-Il Park

A glassless 3D display requires multiple images taken from different viewpoints to show a scene. Thus, generating such a large number of viewpoint images efficiently is emerging as a key technique in 3D video technology. Image-based view synthesis is a technique for generating the required virtual viewpoint images from a limited number of views and depth maps. In this paper, we propose an algorithm that computes virtual views much faster by using the multi-texture image structure of the graphics processing unit (GPU). We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.
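One way to picture a multi-texture arrangement is that each output (virtual) view samples several input textures at once: warped left and right reference views are blended, and holes in one are filled from the other. The sketch below shows that blend step on the CPU in Python, assuming zero-valued pixels mark holes; the function name, the distance weighting, and the hole convention are assumptions for illustration, not the paper's exact method.

```python
# Rough CPU sketch of blending two reference views that have already been
# warped to the same virtual viewpoint, analogous to sampling two textures
# per output pixel. Conventions here are illustrative assumptions.
import numpy as np

def blend_views(warp_left, warp_right, alpha):
    """Blend two warped reference views into one virtual view.

    warp_left, warp_right : (H, W, 3) uint8 images already warped to the
                            virtual viewpoint; unfilled pixels are all zero
    alpha                 : virtual camera position (0 = left, 1 = right)
    """
    hole_l = (warp_left.sum(axis=2) == 0)
    hole_r = (warp_right.sum(axis=2) == 0)

    # Distance-weighted blend where both views contribute.
    out = ((1 - alpha) * warp_left + alpha * warp_right).astype(np.uint8)

    # Fill holes of one view from the other, like sampling a second texture.
    out[hole_l] = warp_right[hole_l]
    out[hole_r & ~hole_l] = warp_left[hole_r & ~hole_l]
    return out
```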


Frontiers in Psychology | 2016

Phonetic Encoding of Coda Voicing Contrast under Different Focus Conditions in L1 vs. L2 English

Jiyoun Choi; Sahyang Kim; Taehong Cho

This study investigated how coda voicing contrast in English would be phonetically encoded in the temporal vs. spectral dimension of the preceding vowel (in vowel duration vs. F1/F2) by Korean L2 speakers of English, and how their L2 phonetic encoding pattern would be compared to that of native English speakers. Crucially, these questions were explored by taking into account the phonetics-prosody interface, testing effects of prominence by comparing target segments in three focus conditions (phonological focus, lexical focus, and no focus). Results showed that Korean speakers utilized the temporal dimension (vowel duration) to encode coda voicing contrast, but failed to use the spectral dimension (F1/F2), reflecting their native language experience—i.e., with a more sparsely populated vowel space in Korean, they are less sensitive to small changes in the spectral dimension, and hence fine-grained spectral cues in English are not readily accessible. Results also showed that along the temporal dimension, both the L1 and L2 speakers hyperarticulated coda voicing contrast under prominence (when phonologically or lexically focused), but hypoarticulated it in the non-prominent condition. This indicates that low-level phonetic realization and high-order information structure interact in a communicatively efficient way, regardless of the speakers’ native language background. The Korean speakers, however, used the temporal phonetic space differently from the way the native speakers did, especially showing less reduction in the no focus condition. This was also attributable to their native language experience—i.e., the Korean speakers’ use of temporal dimension is constrained in a way that is not detrimental to the preservation of coda voicing contrast, given that they failed to add additional cues along the spectral dimension. The results imply that the L2 phonetic system can be more fully illuminated through an investigation of the phonetics-prosody interface in connection with the L2 speakers’ native language experience.


Journal of the Acoustical Society of America | 2016

Effects of L1 prosody on segmental contrast in L2: The case of English stop voicing contrast produced by Korean speakers

Jiyoun Choi; Sahyang Kim; Taehong Cho

This study investigated how the L1 phonetics-prosody interface transfers to L2 by examining prosodic strengthening effects (due to prosodic position and focus) on English voicing contrast (bad-pad) as produced by Korean vs English speakers. Under prosodic strengthening, Korean speakers showed a greater F0 difference due to voicing than English speakers, suggesting that their experience with the macroprosodic use of F0 in Korean transfers into L2. Furthermore, Korean speakers produced voiced stops with low F0 and short voice onset time as English speakers did, although such a cue pairing is absent in Korean, showing dissociation of cues from L1 segments for L2 production.


Journal of the Korean Society of Speech Sciences | 2015

Dutch Listeners' Perception of Korean Stop Consonants

Jiyoun Choi

We explored Dutch listeners’ perception of the Korean three-way contrast among fortis, lenis, and aspirated stops. All three Korean stops are voiceless word-initially, whereas Dutch distinguishes between voiced and voiceless stops, so the Korean stops were expected to be difficult for Dutch listeners to tell apart. Among the three, fortis stops are phonetically most similar to Dutch voiceless stops and were therefore expected to be the easiest for the Dutch listeners to distinguish. Dutch and Korean listeners carried out a discrimination task with three crucial comparisons, i.e., fortis-lenis, fortis-aspirated, and lenis-aspirated stops. Results showed that discrimination between lenis and aspirated stops was the most difficult of the three comparisons for both Dutch and Korean listeners. As expected, Dutch listeners discriminated fortis from the other stops relatively accurately. It seems likely that Dutch listeners relied heavily on VOT but less on F0 when discriminating among the three Korean stops.


Journal of the Acoustical Society of America | 2012

Cross-linguistic emotion recognition: Dutch, Korean, and American English

Jiyoun Choi; Mirjam Broersma; Martijn Goudbeek

This study investigates the occurrence of asymmetries in cross-linguistic recognition of emotion in speech. Theories on emotion recognition do not address asymmetries in the cross-linguistic recognition of emotion. To study perceptual asymmetries, a fully crossed design was used, with speakers and listeners from two typologically unrelated languages, Dutch and Korean. Additionally, listeners of American English, typologically close to Dutch but not Korean, were tested. Eight emotions, balanced in valence (positive-negative), arousal (active-passive), and basic vs. non-basic emotions (properties that are known to affect emotion recognition), were recorded by eight Dutch and eight Korean professional actors, in a nonsense phrase that was phonologically legal in both languages (and English). Stimuli were selected on the basis of prior validation studies with Dutch and Korean listeners. 28 Dutch, 24 Korean, and 26 American participants were presented with all 256 Dutch and Korean stimuli, blocked by language. ...


Royal Society Open Science | 2017

Early development of abstract language knowledge: evidence from perception–production transfer of birth-language memory

Jiyoun Choi; Anne Cutler; Mirjam Broersma
