Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yeou Jiunn Chen is active.

Publication


Featured research published by Yeou Jiunn Chen.


Journal of The Chinese Institute of Engineers | 2014

A novel speech enhancement using forward–backward minima-controlled recursive averaging

Yeou Jiunn Chen; Jiunn Liang Wu

Nonstationary noises may seriously degrade speech intelligibility and reduce the performance of hearing aids. This paper proposes a novel speech enhancement approach, named forward–backward minima-controlled recursive averaging, to address this problem. A forward–backward estimation is developed to accurately track nonstationary noises, especially their onsets. To suppress these noises, a log-spectral amplitude estimator is selected as the gain function and used to effectively estimate the noise power spectrum. Moreover, harmonic regeneration is integrated to reduce the noise overestimation introduced by the forward–backward estimation. The experimental results demonstrate that the approach improves performance compared with previous approaches on both objective and subjective measures.
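As a rough illustration of the minima-controlled recursive averaging family this work builds on, the sketch below tracks a per-bin noise power estimate from a sequence of power spectra. The smoothing factors, window length, and speech-presence threshold are illustrative assumptions, and this is the basic single-direction scheme, not the paper's forward–backward variant.

```python
def mcra_noise_track(power_frames, alpha=0.95, delta=5.0, win=8):
    """Per-bin noise power tracking in the spirit of minima-controlled
    recursive averaging (simplified; not the paper's forward-backward
    variant). power_frames: list of per-frame power spectra."""
    n_bins = len(power_frames[0])
    smoothed = list(power_frames[0])   # time-smoothed noisy power
    noise = list(power_frames[0])      # running noise estimate
    history = [list(smoothed)]         # sliding window of smoothed spectra
    estimates = []
    for frame in power_frames:
        for k in range(n_bins):
            # First-order recursive smoothing of the noisy spectrum.
            smoothed[k] = 0.7 * smoothed[k] + 0.3 * frame[k]
        history.append(list(smoothed))
        if len(history) > win:
            history.pop(0)
        for k in range(n_bins):
            # Local minimum over the window (minimum statistics).
            s_min = min(h[k] for h in history)
            # Declare speech presence when the smoothed power exceeds
            # the local minimum by the factor delta.
            p = 1.0 if smoothed[k] > delta * s_min else 0.0
            # Time-varying smoothing: the update freezes (a = 1) while
            # speech is judged present, so speech energy does not leak
            # into the noise estimate.
            a = alpha + (1.0 - alpha) * p
            noise[k] = a * noise[k] + (1.0 - a) * frame[k]
        estimates.append(list(noise))
    return estimates
```

Feeding stationary noise keeps the estimate at the noise floor, while a short loud burst (speech) leaves it almost untouched, which is the behavior the gain function then relies on.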


international conference on asian language processing | 2009

An Articulation Training System with Intelligent Interface and Multimode Feedbacks to Articulation Disorders

Yeou Jiunn Chen; Jiunn Liang Wu; Hui Mei Yang; Chung-Hsien Wu; Chih Chang Chen; Shan Shan Ju

Articulation training that combines multiple kinds of stimuli and messages, such as visual, voice, and articulatory information, can teach users to pronounce correctly and improve their articulatory ability. In this paper, an articulation training system with an intelligent interface and multimode feedback is proposed to improve the performance of articulation training. Clinical knowledge of speech evaluation is used to design the dependency network, and automatic speech recognition with this dependency network is applied to identify pronunciation errors. In addition, a hierarchical Bayesian network is proposed to recognize the user's emotion from speech. With information on both the pronunciation errors and the user's emotional state, the articulation training sentences can be selected dynamically. Finally, a 3D facial animation teaches users to pronounce a sentence using speech, lip motion, and tongue motion. Experimental results reveal the usefulness of the proposed method and system.


ieee symposium series on computational intelligence | 2013

Forward-backward minima controlled recursive averaging to speech enhancement

Yeou Jiunn Chen; Jiunn Liang Wu

The performance of hearing aids can be seriously reduced by nonstationary noises. In this paper, forward-backward minima controlled recursive averaging is proposed to improve speech intelligibility by reducing the components of nonstationary noises. A forward-backward estimation is developed to accurately track nonstationary noises, especially their onsets. To suppress these noises, a log-spectral amplitude estimator is selected as the gain function and used to effectively estimate the noise power spectrum. Moreover, harmonic regeneration is integrated to reduce the overestimation of noises. The experimental results demonstrate that the approach improves performance compared to previous approaches.


International Journal of Audiology | 2018

Tone production and perception and intelligibility of produced speech in Mandarin-speaking cochlear implanted children

Yi Lu Li; Yi Hui Lin; Hui Mei Yang; Yeou Jiunn Chen; Jiunn Liang Wu

Abstract Objective: This study explored tone production, tone perception and intelligibility of produced speech in Mandarin-speaking prelingually deaf children with at least 5 years of cochlear implant (CI) experience. Another focus was on the predictive value of tone perception and tone production as they relate to speech intelligibility. Design: Cross-sectional research. Study sample: Thirty-three prelingually deafened children aged over eight years with over five years of experience with CI underwent tests for tone perception, tone production, and the Speech Intelligibility Rating (SIR). A Pearson correlation and a stepwise regression analysis were used to estimate the correlations among tone perception, tone production, and SIR scores. Results: The mean scores for tone perception, tone production, and SIR were 76.88%, 90.08%, and 4.08, respectively. Moderately positive Pearson correlations were found between tone perception and production, tone production and SIR, and tone perception and SIR (p < 0.01 in each case). In the stepwise regression analysis, tone production, as the major predictor, accounted for 29% of the variation in the SIR (p < 0.01). Conclusions: Mandarin-speaking cochlear-implanted children with a sufficient duration of CI use produce intelligible speech. Speech intelligibility can be predicted by tone production performance.


Engineering Computations | 2016

A computer-aided articulation learning system for subjects with articulation disorders

Yeou Jiunn Chen; Jiunn Liang Wu

Purpose: Articulation errors substantially reduce speech intelligibility and the ease of spoken communication. Moreover, the articulation learning process that speech-language pathologists must provide is time consuming and expensive. To facilitate the articulation learning process, the purpose of this paper is to develop a computer-aided articulation learning system to help subjects with articulation disorders.

Design/methodology/approach: Facial animations, including lip and tongue animations, are used to convey the manner and place of articulation to the subject. This process improves the effectiveness of articulation learning. An interactive learning system is implemented through pronunciation confusion networks (PCNs) and automatic speech recognition (ASR), which are applied to identify mispronunciations.

Findings: Speech and facial animations are effective for assisting subjects in imitating sounds and developing articulatory ability. PCNs and ASR can be used to automatically identify mispronunciations.

Research limitations/implications: Future research will evaluate the clinical performance of this approach to articulation learning.

Practical implications: The experimental results of this study indicate that clinically implementing a computer-aided articulation learning system for articulation learning is feasible.

Originality/value: This study developed a computer-aided articulation learning system to help improve speech production ability in subjects with articulation disorders.
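The mispronunciation-detection step can be pictured with the minimal sketch below: the ASR output is compared against the target phoneme sequence, and mismatches listed in a confusion table are tagged as known substitution patterns. The table, the phoneme labels, and the equal-length alignment are simplifying assumptions, not the paper's actual pronunciation confusion network.

```python
def detect_mispronunciations(target, recognized, confusions):
    """Compare a target phoneme sequence with the ASR output and flag
    differences; pairs listed in `confusions` are tagged as known
    substitution patterns. A simplified stand-in for matching against
    a pronunciation confusion network (equal-length sequences assumed)."""
    findings = []
    for i, (t, r) in enumerate(zip(target, recognized)):
        if t == r:
            continue  # pronounced as expected
        kind = "known confusion" if r in confusions.get(t, ()) else "other error"
        findings.append((i, t, r, kind))
    return findings
```

A known confusion (for example, a /k/ produced as /t/, i.e. fronting) can then trigger the matching training exercise, while unlisted errors are referred back to the clinician.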


international conference on acoustics, speech, and signal processing | 2007

Development of Articulation Assessment and Training System with Speech Recognition and Articulation Training Strategies Selection

Yeou Jiunn Chen; Jing Wei Huang; Hui Mei Yang; Yi Hui Lin; Jiunn Liang Wu

Articulation problems seriously reduce speech intelligibility and ease of spoken communication, and they affect a person's interpersonal communication, personality, social adaptive capacity, and learning ability. In the clinical protocol, a language therapist uses clinical experience to individualize assessment, treatment, and training. However, both the number of language therapists and the available assistive instruments are insufficient. In this paper, an articulation assessment and training system is proposed to assist language therapists and subjects with articulation disorders. The phonetic articulation errors are analyzed and modeled by clinical linguists. Using the clinical experience of language therapists, articulation training strategies are designed for each type of articulation error. The articulation characteristics of the user can be effectively detected, and speechreading information is provided as feedback to improve the performance of the training program. An articulation training strategy is automatically selected to guide subjects with articulation disorders in language training activities. Experimental results reveal the practicability of the proposed method and system.


ieee symposium series on computational intelligence | 2013

Handheld device based personal auditory training system to hearing loss

Yeou Jiunn Chen; Chia Jui Chang; Jiunn Liang Wu; Yi Hui Lin; Hui Mei Yang

Assistive hearing devices are the only aids that help subjects with hearing loss use their residual hearing. However, the performance of these devices depends closely on auditory training. Developing a handheld-device-based personal auditory training system with perceptual discrimination analysis and automatic test-item generation is very helpful for subjects with hearing loss, and it would also ease the burden on speech-language pathologists of developing personal auditory training. In this study, mel-frequency cepstrum coefficients and automatic speech recognition are applied to objectively estimate phonemic confusions. To reduce computational complexity, multidimensional scaling is then used to map the phonemic confusions into a Euclidean space, so that suitable training material can be automatically generated by a simple random process. Finally, Android-based mobile phones are selected as the platform for auditory training, which makes the system convenient for subjects to use. The experimental results show an average mean opinion score of 3.73, indicating that the system is very useful.
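The "simple random process" for test-item generation might be sketched as below: phoneme pairs are sampled with probability proportional to how often they were confused, so practice concentrates where discrimination is weakest. The phoneme labels and counts are invented for illustration; the paper's actual sampling operates on MDS-derived distances rather than raw counts.

```python
import random

def generate_training_items(confusion_counts, n_items=10, seed=None):
    """Draw minimal-pair discrimination items, weighting each phoneme
    pair by how often it was confused - a simplified stand-in for
    sampling from the MDS-derived confusion model described above.
    `confusion_counts` maps a phoneme to {confused_phoneme: count}."""
    rng = random.Random(seed)
    pairs, weights = [], []
    for a, row in confusion_counts.items():
        for b, count in row.items():
            if count > 0:
                pairs.append((a, b))
                weights.append(count)
    # Highly confusable pairs are drawn more often, so the listener
    # gets the most practice on the hardest contrasts.
    return [rng.choices(pairs, weights=weights)[0] for _ in range(n_items)]
```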


international conference on biomedical engineering | 2010

Data Mining for Automatic Communication Behaviors Identification

Yeou Jiunn Chen; Jiunn Liang Wu; Hung-Hsien Yang

Children's interpersonal communication, personality, social adaptive capability, and learning ability are affected by their interaction with their caregivers. It is therefore important to improve caregivers' communication ability, which in turn improves children's language ability and communication behavior. The communication behaviors of a child and his or her caregiver must be identified from their interactions, and this information can then be used to help improve their communication ability. However, identifying communication behavior manually is a time-consuming process. In this paper, we propose a data mining algorithm to find communication behaviors. First, the child and caregiver play three games, and their interactions are recorded in video and speech; these recordings help clinical linguists evaluate communication behavior. Second, the interactions are processed by speech recognition, speech act annotation, and interchange annotation; speech acts and interchanges are the most important factors of communication behavior. Third, data mining is applied to find the most frequent sequences of speech acts and interchanges, which form the basic information about the user's communication behavior. Finally, a clinical linguist can easily evaluate the caregiver's performance and provide suggestions to improve their communication ability. Experimental results reveal the usefulness of the proposed method and system.
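In its simplest form, mining frequent sequences of annotated speech acts amounts to counting recurring n-grams over the annotation stream. The sketch below shows that reduced view; the act labels are illustrative, and the paper's algorithm may use a richer sequential-pattern formulation than plain n-gram counting.

```python
from collections import Counter

def frequent_act_sequences(acts, n=2, top=3):
    """Count the most frequent length-n subsequences of annotated
    speech acts - a minimal n-gram sketch of the frequent-sequence
    mining step. `acts` is the time-ordered list of act labels."""
    grams = Counter(tuple(acts[i:i + n]) for i in range(len(acts) - n + 1))
    return grams.most_common(top)
```

A caregiver whose sessions are dominated by, say, (request, answer) pairs with few child-initiated turns would surface immediately in this count, giving the clinical linguist a concrete starting point.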


international symposium on neural networks | 2008

Automatic speech recognition and dependency network to identification of articulation error patterns

Yeou Jiunn Chen; Jiunn Liang Wu; Hui Mei Yang

Articulation errors seriously reduce speech intelligibility and the ease of spoken communication. Typically, a language therapist uses his or her clinical experience to identify articulation error patterns, a time-consuming and expensive process. This paper presents a novel automatic approach to identifying articulation error patterns and providing pronunciation error information to assist the language therapist. A photo naming task is used to capture examples of an individual's articulation patterns. The collected speech is automatically segmented and labeled by a speech recognizer, whose pronunciation confusion network is adapted to improve recognition accuracy. A modified dependency network and a multiattribute decision model are then applied to identify articulation error patterns. Experimental results reveal the usefulness of the proposed method and system.
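A multiattribute decision model in its most basic form scores each candidate error pattern as a weighted sum over several evidence attributes and ranks the candidates. The sketch below shows that skeleton; the attribute names, weights, and pattern labels are hypothetical, not taken from the paper.

```python
def rank_error_patterns(scores, weights):
    """Rank candidate articulation-error patterns by a weighted-sum
    multi-attribute decision rule. `scores` maps each pattern to its
    per-attribute scores; `weights` maps attributes to importances.
    Attribute names and weights here are hypothetical."""
    def utility(pattern):
        # Weighted sum of the pattern's attribute scores.
        return sum(weights[a] * scores[pattern][a] for a in weights)
    return sorted(scores, key=utility, reverse=True)
```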


international conference on biomedical engineering | 2008

Development of Website for the Operating Procedure of Software Contained in Medical Devices

Yeou Jiunn Chen; A. T. Liu; Pei Jarn Chen; Yen-Ting Chen; U. Z. Hsieh; Kuo Sheng Cheng

Faults or failures of software contained in medical devices can seriously endanger users, so risk must be considered and reduced during development; risk analysis and risk control are therefore very important for such software. To promote the quality of software contained in medical devices, this paper presents a website that provides the operating procedure for such software. The configuration management of software, together with the operating procedure, is handled by a web-based relational database developed with PHP and MySQL. The V-model software process model is applied to guide the development process and improve software reliability. Moreover, fault tree analysis and failure mode and effects analysis are used for risk analysis and risk control, so the risk of software contained in medical devices can be effectively reduced. Software verification and validation is also integrated to verify and validate the software's functions. The Integrated Cephalometric Analysis System was used to show that the proposed approach is feasible.
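The failure mode and effects analysis (FMEA) mentioned above conventionally prioritizes failure modes with a risk priority number, RPN = severity x occurrence x detection, each factor typically rated 1-10. The sketch below shows that standard calculation; the acceptance threshold is an illustrative assumption, not a value from the paper.

```python
def risk_priority_number(severity, occurrence, detection):
    """Standard FMEA risk priority number: RPN = S * O * D,
    with each factor typically rated on a 1-10 scale."""
    return severity * occurrence * detection

def needs_mitigation(severity, occurrence, detection, threshold=100):
    """Flag a failure mode for risk control when its RPN exceeds an
    acceptance threshold (the default of 100 is an assumption)."""
    return risk_priority_number(severity, occurrence, detection) > threshold
```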

Collaboration


Dive into Yeou Jiunn Chen's collaborations.

Top Co-Authors

Jiunn Liang Wu, National Cheng Kung University
Hui Mei Yang, National Cheng Kung University
Yi Hui Lin, National Cheng Kung University
Chung-Hsien Wu, National Cheng Kung University
Hung-Hsien Yang, National Cheng Kung University
A. T. Liu, National Taiwan University
Chia Jui Chang, Southern Taiwan University of Science and Technology
Fu-Chih Liao, National Taiwan University
Hui-Mei Yang, National Cheng Kung University
Jing Wei Huang, National Taiwan University