Publication


Featured research published by Fei Chen.


Signal Processing Systems | 2016

Context Effect in the Categorical Perception of Mandarin Tones

Fei Chen; Gang Peng

The categorical perception of tones is based not only on word-internal F0 cues but also on external F0 cues in the contexts. The present study focuses on the effects of different types of preceding contexts on Mandarin tone perception. In the experiment, subjects were required to identify a target tone with the preceding context. The target tone was from a tone continuum ranging from Mandarin Tone 1 (high-level tone) to Tone 2 (mid-rising tone). It was preceded by four types of contexts (normal speech, reversal speech, fine-structure sound, and non-speech) with different mean F0 values. Results indicate that the categorical perception of Mandarin tones is influenced only by the normal speech context, and the effect is contrastive. For instance, in a normal speech context with a higher mean F0, the following tone is more likely to be perceived as a lower-frequency tone (Tone 2), whereas with a lower mean F0, the following tone is more likely to be perceived as a higher-frequency tone (Tone 1). These findings suggest that Mandarin tone normalization is mediated by speech-specific processes and that the speech context needs to be intelligible.
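The boundary and slope measures that recur throughout these categorical-perception studies are conventionally obtained by fitting a logistic function to identification responses along the stimulus continuum; the 50% crossover gives the category boundary, and the slope indexes how sharply categorical the perception is. The sketch below illustrates that standard analysis on hypothetical data (the continuum steps and response proportions are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical identification data: proportion of "Tone 2" (mid-rising)
# responses along a 7-step F0 continuum from Tone 1 to Tone 2.
steps = np.arange(1, 8)
p_tone2 = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 0.99])

def logistic(x, boundary, slope):
    """Two-parameter logistic identification function."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Fit the curve; boundary is the step at which responses cross 50%.
(boundary, slope), _ = curve_fit(logistic, steps, p_tone2, p0=[4.0, 1.0])
print(f"category boundary at step {boundary:.2f}, slope {slope:.2f}")
```

A context manipulation such as the one in this study would show up as a shift of the fitted boundary (a higher-F0 speech context pushing the boundary toward the Tone 1 end), while the slope stays a property of how categorical the listener's perception is.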


International Conference on Acoustics, Speech, and Signal Processing | 2016

Intelligible enhancement of 3D articulation animation by incorporating airflow information

Fei Chen; Hui Chen; Lan Wang; Ying Zhou; Jiaying He; Nan Yan; Gang Peng

Three-dimensional (3D) talking heads, which display both external and internal articulators, have developed rapidly. For Mandarin pronunciation, the aspiration airflow is crucial for discriminating confusable Mandarin consonants. In this paper, we present a 3D talking head system for articulatory and aspiration animation that uses EMA articulation data and airflow data simultaneously. Quantitative analyses of the airflow data indicated that confusable Mandarin consonants could be distinguished from each other by the mean airflow during voicing, the peak expiratory airflow, and the airflow duration. An airflow model was then incorporated into the 3D articulatory model to produce airflow in accordance with the articulator movements of Mandarin pronunciation. An audio-visual test was designed to evaluate the 3D articulation and aspiration system, in which minimal pairs were used to identify the animation. Identification accuracy improved significantly from 43.9% without airflow information to 84.8% with it.


Journal of Child Language | 2017

The development of categorical perception of Mandarin tones in four- to seven-year-old children.

Fei Chen; Gang Peng; Nan Yan; Lan Wang

To track the course of development in children's fine-grained perception of Mandarin tones, the present study explored how categorical perception (CP) of Mandarin tones emerges with age among 70 four- to seven-year-old children and 16 adults. Prominent discrimination peaks were found for both the child and adult groups, and they were well aligned with the corresponding identification crossovers. Moreover, six-year-olds showed a much narrower boundary width (i.e. a sharper slope) than younger children, and had already acquired adult-like identification competence for the Mandarin high-level and mid-rising tones. Although the ability to discriminate within-category tone pairs did not change, between-category discrimination accuracy was positively correlated with chronological age among the child participants. We suggest that the perceptual refinement of Mandarin tones in young children may be driven by an accumulation of perceptual development from the tonal information in the ambient sound input.


IEEE Transactions on Audio, Speech, and Language Processing | 2017

Investigations on Mandarin Aspiratory Animations Using an Airflow Model

Fei Chen; Lan Wang; Hui Chen; Gang Peng

Various three-dimensional (3-D) talking heads have been developed lately for language learning, with both external and internal articulatory movements visualized to guide learners. Mandarin pronunciation animation is challenging because of the language's confusable stops and affricates with similar places of articulation. Until now, little attention has been paid to the biosignal information of aspiratory airflow, which is essential in distinguishing Mandarin consonants. This study fills that gap by presenting quantitative analyses of airflow and then designing an airflow model for a 3-D pronunciation system. The airflow information was collected with a Phonatory Aerodynamic System, so that confusable consonants in Mandarin could be discerned by mean airflow rate, peak airflow rate, airflow duration, and peak time. Based on these airflow parameters, an airflow model using the physical equations of fluid flow was proposed and solved, and then combined and synchronized with the existing 3-D articulatory model. The resulting multimodal system synchronously exhibits the airflow motions and articulatory movements of uttered Mandarin syllables. Both an audio-visual perception test and a pronunciation training study were conducted to assess the effectiveness of the system. Perceptual results indicated that identification accuracy improved for both native and nonnative groups with the help of airflow motions, while native perceivers exhibited higher accuracy owing to long-term language experience. Moreover, the system helped Japanese learners of Mandarin improve their production of Mandarin aspirated consonants, reflected in higher voice onset time gains after training.
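The aerodynamic measures named above (mean airflow rate, peak airflow rate, airflow duration, and peak time) can be extracted from a recorded flow trace with straightforward signal processing. The sketch below illustrates one way to do this on a synthetic flow pulse; the amplitude threshold used to segment the aspiration interval is a hypothetical stand-in for the paper's unspecified criteria:

```python
import numpy as np

def airflow_parameters(flow, fs, threshold=0.05):
    """Extract simple aerodynamic measures from an airflow signal (L/s).

    Illustrative analysis: samples above `threshold` are treated as the
    aspiration interval; real criteria would come from the measurement setup.
    """
    active = flow > threshold
    if not active.any():
        return {"mean_flow": 0.0, "peak_flow": 0.0, "duration": 0.0, "peak_time": 0.0}
    idx = np.flatnonzero(active)
    segment = flow[idx[0]:idx[-1] + 1]
    return {
        "mean_flow": float(segment.mean()),        # mean airflow rate
        "peak_flow": float(segment.max()),         # peak airflow rate
        "duration": len(segment) / fs,             # airflow duration (s)
        "peak_time": float(np.argmax(flow) / fs),  # time of peak flow (s)
    }

# Synthetic aspirated burst: a brief Gaussian flow pulse sampled at 1 kHz.
fs = 1000
t = np.arange(0, 0.2, 1 / fs)
flow = 0.6 * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))
params = airflow_parameters(flow, fs)
```

With parameters like these per token, confusable consonant pairs (e.g. aspirated vs. unaspirated stops) can be compared along each dimension, which is the kind of discrimination the quantitative analyses in the paper report.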


Workshop on Chinese Lexical Semantics | 2016

The interaction of semantic and orthographic processing during Chinese sinogram recognition: An ERP study

Hao Zhang; Fei Chen; Nan Yan; Lan Wang; If Su; Manwa L. Ng

The present study investigated the interaction of semantic and orthographic processing during compound sinogram recognition, using event-related potentials (ERPs) and a picture-word matching task. The behavioral results showed that participants generally needed more time to respond and were more prone to errors when the paired mismatch sinogram was orthographically similar or semantically related to the picture's matching name. The N400 results indicated a main effect of semantics and a significant semantics-by-orthography interaction. Moreover, only under the semantically related condition (S+) was the mean N400 amplitude more negative-going in the orthographically similar condition (O+) than in the orthographically dissimilar one (O-); there was no significant difference under the semantically unrelated condition (S-). Consequently, sub-lexical orthographic information plays an important role in discriminating sinograms that share related semantics.


International Symposium on Chinese Spoken Language Processing | 2016

Effects of preceding vocabulary context on the perception of Mandarin vowels

Xunan Huang; Caicai Zhang; Fei Chen; Jonathan Sieg; Lan Wang; Feng Shi

This study compares the perception of the Mandarin basic vowels "e" (/ɤ/) and "u" (/u/) in different contexts (in isolation vs. in context). Results indicate that perception of the target vowel is influenced by the adjacent vowel context in a contrastive manner in both identification and discrimination tests. Moreover, in a context with higher F1 and F2, listeners found it more difficult to discriminate stimuli belonging to the /u/ category (which has lower F1 and F2), which may result from the effect of the context's referential formants. Despite the influence of contextual factors, both /ɤ/ and /u/ in Mandarin showed relatively stable perceptual categories, and the perceived psychological parameters were consistent with the acoustic values measured by Wu Zongji (1964) for the Mandarin vowels /ɤ/ and /u/.


International Symposium on Chinese Spoken Language Processing | 2016

Evaluation of a multimodal 3-D pronunciation tutor for learning Mandarin as a second language: An eye-tracking study

Ying Zhou; Fei Chen; Hui Chen; Lan Wang; Nan Yan

Recently, various 3-D talking heads have been applied to computer-aided language learning as a novel instructional mode. However, objective evaluations of learners' perception of the talking head when learning Mandarin as a second language are lacking. This study used eye-tracking methodology to evaluate a multimodal 3-D Mandarin pronunciation tutor in comparison with a real human instructor. The pronunciation tutors were presented under two conditions: human face video (HF) and a multimodal 3-D talking head (3-D), each shown in a front view and a profile view. Results indicated that foreign learners showed a stronger preference for the 3-D tutor, with a shorter entry time. Moreover, learners observed the lip movements of the 3-D tutor for a longer time in the front view, and the multimodal 3-D pronunciation tutor was more effective at delivering both articulator movements and airflow information in the transparent profile view. In conclusion, these findings indicate that the multimodal 3-D pronunciation tutor attracted more attention than the human face during learning.


International Symposium on Chinese Spoken Language Processing | 2016

The effects of tone categories on the perception of Mandarin vowels

Hao Zhang; Fei Chen; Nan Yan; Lan Wang; Yu Chen; Feng Shi

In order to explore the effects of tone categories on the perception of Mandarin vowels, the present study investigated perceptual performance along vowel continua containing three Mandarin vowels /a, ɤ, u/ under four different tone conditions (i.e. the high-level, mid-rising, falling-rising, and high-falling tones). The results showed a shift in the categorical boundary between /a/ and /ɤ/ across the four tone conditions. More specifically, participants generally tended to label the stimuli as /a/ less often under the high-falling tone than under the other three tone conditions. Moreover, the maximum identification rate of /ɤ/ was much lower under the falling-rising tone than under the other tone conditions. These findings suggest that vowel perception may be strongly influenced by the pitch properties of different tone categories.


Conference of the International Speech Communication Association | 2016

The Influence of Language Experience on the Categorical Perception of Vowels: Evidence from Mandarin and Korean.

Hao Zhang; Fei Chen; Nan Yan; Lan Wang; Feng Shi; Manwa L. Ng

Previous research on categorical perception of speech sounds has demonstrated a strong influence of language experience on the categorical perception of consonants and lexical tones. In order to explore the influence of language experience on vowel perception, the present study investigated the perceptual performance for Mandarin and Korean listeners along a vowel continuum, which spanned three vowel categories /a/, /ɜ/, and /u/. The results showed that both language groups exhibited categorical features in vowel perception, with a sharper categorical boundary of /ɜ/-/u/ than that of /a/-/ɜ/. Moreover, the differences found between the two groups revealed that the Korean listeners’ perception tended to be more categorical along the /a/-/ɜ/-/u/ vowel continuum than that of the Mandarin listeners. Furthermore, the Mandarin listeners tended to label stimuli more often as /a/ and less often as /u/ than the Korean counterparts. These perceptual differences between the Mandarin and Korean groups might be attributed to the different acoustic distribution in the F1×F2 vowel space of the two different native languages.


Conference of the International Speech Communication Association | 2016

Impaired categorical perception of Mandarin tones and its relationship to language ability in autism spectrum disorders

Fei Chen; Nan Yan; Xiaojie Pan; Feng Yang; Zhuanzhuan Ji; Lan Wang; Gang Peng

17th Annual Conference of the International Speech Communication Association, INTERSPEECH 2016, San Francisco, USA, 8-12 September 2016

Collaboration


Dive into Fei Chen's collaborations.

Top Co-Authors

Lan Wang, Chinese Academy of Sciences
Nan Yan, Chinese Academy of Sciences
Gang Peng, Chinese Academy of Sciences
Hao Zhang, Chinese Academy of Sciences
Hui Chen, Chinese Academy of Sciences
Ying Zhou, Wuhan University of Technology
Manwa L. Ng, University of Hong Kong
Kunyu Xu, Chinese Academy of Sciences