Publication


Featured research published by Yun-Maw Cheng.


Australasian Computer-Human Interaction Conference | 2008

Investigation into the feasibility of using tactons to provide navigation cues in pedestrian situations

Ming-Wei Lin; Yun-Maw Cheng; Wai Yu; Frode Eika Sandnes

Current navigation services do not meet the needs of pedestrians, and their displays are often inappropriate. In this paper, we report two experiments investigating whether a tactile display is a sufficient and appropriate way to present navigation information in pedestrian situations. The results of these experiments showed that Tactons could be a successful means of communicating navigation information in user interfaces in pedestrian situations.


Nordic Conference on Human-Computer Interaction | 2008

Using tactons to provide navigation cues in pedestrian situations

Ming-Wei Lin; Yun-Maw Cheng; Wai Yu

Existing navigation services do not meet the needs of pedestrians, and the way navigation information is presented is often inappropriate. In this paper, we report two experiments investigating whether a tactile display is a sufficient and appropriate way to present navigation information in pedestrian situations. The results of these experiments showed that Tactons could be a successful means of communicating navigation information in user interfaces in pedestrian situations.


International Conference on Intelligent Computing | 2007

A comparative study of different weighting schemes on KNN-based emotion recognition in Mandarin speech

Tsang-Long Pao; Yu-Te Chen; Jun-Heng Yeh; Yun-Maw Cheng; Yu-Yuan Lin

Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision-making. This aspect must be considered in human-computer interaction. In this paper, we compare four different weighting functions in weighted KNN-based classifiers to recognize five emotions (anger, happiness, sadness, neutral, and boredom) from Mandarin emotional speech. The classifiers studied include weighted KNN, weighted CAP, and weighted D-KNN. To provide a baseline performance measure, we also adopt the traditional KNN classifier. The experimental results show that the Fibonacci weighting function outperforms the others in all weighted classifiers. The highest accuracy, 81.4%, is achieved with the weighted D-KNN classifier.
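The abstract does not spell out how the Fibonacci weights are applied, so the following is only a minimal sketch of a Fibonacci-weighted KNN classifier, assuming Euclidean distance and that the nearest neighbour receives the largest weight; the function names are illustrative, not the paper's.

```python
import numpy as np

def fibonacci_weights(k):
    """Fibonacci weights, largest first (assumed to favour the nearest neighbour)."""
    fib = [1, 1]
    while len(fib) < k:
        fib.append(fib[-1] + fib[-2])
    return np.array(fib[:k][::-1], dtype=float)

def weighted_knn_predict(X_train, y_train, x, k=5):
    """Classify x by Fibonacci-weighted votes of its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest points
    weights = fibonacci_weights(k)
    votes = {}
    for idx, w in zip(nearest, weights):
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w
    return max(votes, key=votes.get)           # label with the largest weighted vote
```

In practice the feature vectors would be acoustic features extracted per utterance; here any numeric vectors work.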


International Conference on Human-Computer Interaction | 2009

A Study on the Design of Augmented Reality User Interfaces for Mobile Learning Systems in Heritage Temples

Kuo-Hsiung Wang; Li-Chieh Chen; Po-Ying Chu; Yun-Maw Cheng

To reduce attention switching and increase the performance and pleasure of mobile learning in heritage temples, the objective of this research was to apply Augmented Reality (AR) technology to the user interfaces of mobile devices. Based on a field study and a literature review, three user interface prototypes were constructed. All offered the same two service modes but differed in the location of the navigation bars and in how text was displayed. The experimental results showed that users preferred animated, interactive virtual objects or characters with sound effects. In addition, transparent backgrounds for images and text message boxes were better received. The superimposed information should not cover more than thirty percent of the screen, so that users can still see the background clearly.


Intelligent Information Hiding and Multimedia Signal Processing | 2007

Continuous Tracking of User Emotion in Mandarin Emotional Speech

Tsang-Long Pao; Charles S. Chien; Jun-Heng Yeh; Yu-Te Chen; Yun-Maw Cheng

Emotions play a significant role in decision-making, health, perception, human interaction, and human intelligence. Automatic recognition of emotion in speech is desirable because it enriches human-computer interaction, and it has become an important research area in recent years. However, to the best of our knowledge, no prior work has focused on automatic emotion tracking of continuous Mandarin emotional speech. In this paper, we present an emotion tracking system that divides an utterance into several independent segments, each of which contains a single emotional category. Experimental results reveal that the proposed system produces satisfactory results: on our testing database of 279 utterances, obtained by concatenating short sentences, the average accuracy reaches 83% using the weighted D-KNN classifier with LPCC and MFCC features.
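The segment-then-classify pipeline described above can be sketched as follows. Both pieces are placeholders: the paper finds segment boundaries so each segment carries one emotion and classifies with weighted D-KNN, whereas this sketch assumes fixed-length segments and takes the classifier as a parameter.

```python
def track_emotions(frames, segment_len, classify):
    """Label an utterance segment by segment to produce an emotion track.

    `frames` is any per-frame feature sequence; `classify` maps one segment
    to an emotion label. Fixed-length segmentation is a simplification of
    the paper's boundary detection.
    """
    track = []
    for start in range(0, len(frames), segment_len):
        track.append(classify(frames[start:start + segment_len]))
    return track
```

The output is one label per segment, i.e. an emotion trajectory over the utterance rather than a single utterance-level label.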


Intelligent Information Hiding and Multimedia Signal Processing | 2007

Combination of Multiple Classifiers for Improving Emotion Recognition in Mandarin Speech

Tsang-Long Pao; Charles S. Chien; Yu-Te Chen; Jun-Heng Yeh; Yun-Maw Cheng; Wen-Yuan Liao

An automatic emotional speech recognition system can be characterized by the selected features, the investigated emotional categories, the method used to collect speech utterances, the language, and the type of classifier used in the experiments. To date, several classifiers have been adopted independently and tested on numerous emotional speech corpora, but no single classifier classifies all the emotional classes optimally. In this paper, we focus on schemes for combining multiple classifiers to achieve the best possible recognition rate for five-class emotion recognition in Mandarin speech. The investigated classifiers include KNN, WKNN, WCAP, W-DKNN, and SVM. The experimental results show that classifier combination schemes, including the majority voting, minimum misclassification, and maximum accuracy methods, perform better than the single classifiers in terms of overall accuracy, with improvements ranging from 0.9% to 6.5%.
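Of the three combination schemes named, majority voting is the simplest to illustrate. A minimal sketch, assuming one predicted label per classifier; the tie-breaking rule (first label encountered wins) is an assumption, since the abstract does not state one.

```python
from collections import Counter

def combine_by_majority_vote(classifier_outputs):
    """Fuse the labels predicted by several classifiers via majority vote.

    `classifier_outputs` holds one label per classifier. Counter orders
    equal counts by first appearance, which serves as the tie-breaker here.
    """
    return Counter(classifier_outputs).most_common(1)[0][0]
```

The minimum misclassification and maximum accuracy schemes would additionally need per-classifier error statistics, which the abstract does not provide.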


Intelligent Information Hiding and Multimedia Signal Processing | 2007

Comparison of Several Classifiers for Emotion Recognition from Noisy Mandarin Speech

Tsang-Long Pao; Wen-Yuan Liao; Yu-Te Chen; Jun-Heng Yeh; Yun-Maw Cheng; Charles S. Chien

Automatic recognition of emotions in speech aims at building classifiers for classifying the emotions in test emotional speech. This paper presents an emotion recognition system that compares several classifiers on clean and noisy speech. Five emotions (anger, happiness, sadness, neutral, and boredom) from Mandarin emotional speech are investigated. The classifiers studied include KNN, WCAP, GMM, HMM, and W-DKNN. Feature selection with KNN was also included to compress the acoustic features before classifying the emotional states of clean and noisy speech. Experimental results show that the proposed W-DKNN outperformed the other KNN-based classifiers at every SNR and achieved the highest accuracy of all the classifiers from clean speech down to 20 dB noisy speech.


International Conference on Human-Computer Interaction | 2011

Finding suitable candidates: the design of a mobile volunteering matching system

Wei-Chia Chen; Yun-Maw Cheng; Frode Eika Sandnes; Chao-Lung Lee

It can be difficult for potential volunteers (PVs) to get started with voluntary work, and it is equally difficult for nonprofit organizations to find and recruit suitable candidates. To help solve this problem, we designed a mobile matching prototype that enables an organization to actively promote ongoing volunteer activities that need recruits through bubble icons on a live map. At the other end, PVs can easily get started by monitoring the colors of the icons and tapping the ones that match their interests. This allows them to read about developing threads and browse the corresponding activities. The system was evaluated by interviewing two organization managers and three volunteers.


Affective Computing and Intelligent Interaction | 2007

Feature Combination for Better Differentiating Anger from Neutral in Mandarin Emotional Speech

Tsang-Long Pao; Yu-Te Chen; Jun-Heng Yeh; Yun-Maw Cheng; Charles S. Chien

Just as written language is a sequence of elementary alphabetic symbols, speech is a sequence of elementary acoustic symbols. Speech signals convey more than spoken words: the additional information conveyed in speech includes the speaker's gender, age, accent, identity, health, prosody, and emotion [1].


Australasian Computer-Human Interaction Conference | 2008

Surfing in the crowd: feasibility study of experience sharing in a Taiwanese night market

Chao-Lung Lee; Yun-Maw Cheng; Ching-Long Yeh; Li-Chieh Chen; Wai Yu; Kuan-Ta Chen

Social Proximity Applications (SPAs) present a promising opportunity for mobile services that draw on the everyday activities occurring in the proximity of mobile users. This paper describes our research-in-progress on designing and developing a mobile SPA that facilitates social interaction among visitors in a night market crowd. The application allows night market visitors to share photos of their experiences with others nearby via their Bluetooth-enabled mobile phones. The design was based on a two-week field observation investigating motivations for and attitudes towards applications of this type. After an extensive three-night trial, we found that the value of the application, as privacy-sensitive, playful, and enjoyable, was highly consistent with the results of the field observation. The ultimate goal is to identify potentially engaging design extensions to the current prototype.

Collaboration


Dive into Yun-Maw Cheng's collaborations.

Top Co-Authors


Frode Eika Sandnes

Oslo and Akershus University College of Applied Sciences
