Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Heng-Tze Cheng is active.

Publications


Featured research published by Heng-Tze Cheng.


Mobile Computing, Applications, and Services | 2010

Activity-Aware Mental Stress Detection Using Physiological Sensors

Feng-Tso Sun; Cynthia Kuo; Heng-Tze Cheng; Senaka Buthpitiya; Patricia Collins; Martin L. Griss

Continuous stress monitoring may help users better understand their stress patterns and provide physicians with more reliable data for interventions. Previously, studies on mental stress detection were limited to a laboratory environment where participants generally rested in a sedentary position. However, it is impractical to exclude the effects of physical activity while developing a pervasive stress monitoring application for everyday use. The physiological responses caused by mental stress can be masked by variations due to physical activity.
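
A minimal sketch of the activity-aware idea, assuming windowed accelerometer and heart-rate features with toy labels (the feature names, thresholds, and choice of RandomForestClassifier are illustrative, not the paper's pipeline): an activity estimate routes each window to a stress classifier trained for that activity level, so motion-induced physiological changes are less likely to be mistaken for stress.

```python
# Activity-aware stress classification sketch: route each feature window
# to a stress model trained for its activity level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical windowed features: [accel_magnitude_var, heart_rate_mean, hrv_rmssd]
X = rng.normal(size=(600, 3))
activity = (X[:, 0] > 0).astype(int)      # 0 = sedentary, 1 = active (toy labels)
stress = rng.integers(0, 2, size=600)     # 0 = baseline, 1 = stressed (toy labels)

# One stress classifier per activity level, trained only on the
# physiological features (columns 1-2) of windows at that level.
models = {}
for level in (0, 1):
    idx = activity == level
    models[level] = RandomForestClassifier(n_estimators=50, random_state=0)
    models[level].fit(X[idx, 1:], stress[idx])

def predict_stress(window):
    """Route a feature window to the stress model for its activity level."""
    level = int(window[0] > 0)            # stand-in for a real activity classifier
    return models[level].predict(window[1:].reshape(1, -1))[0]

print(predict_stress(X[0]))
```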


Pacific Rim Conference on Multimedia | 2008

Toward Multi-modal Music Emotion Classification

Yi-Hsuan Yang; Yu-Ching Lin; Heng-Tze Cheng; I-Bin Liao; Yeh-Chin Ho; Homer H. Chen

Categorical music emotion classification, which divides emotion into discrete classes and uses audio features alone, has reached a performance limit due to the semantic gap between the object feature level and the human cognitive level of emotion perception. Motivated by the fact that lyrics carry rich semantic information about a song, we propose a multi-modal approach to improve categorical music emotion classification. By exploiting both the audio features and the lyrics of a song, the proposed approach improves the 4-class emotion classification accuracy from 46.6% to 57.1%. The results also show that the incorporation of lyrics significantly enhances the classification accuracy of valence.
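
A minimal sketch of the fusion idea under assumed inputs (toy lyrics, random stand-in audio features): lyric TF-IDF vectors are concatenated with audio features and fed to one 4-class classifier. The paper's actual features and classifier may differ.

```python
# Early fusion of audio and lyric features for 4-class emotion classification.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

lyrics = ["tears fall in the rain", "dancing all night long",
          "quiet morning light", "screaming down the highway"]
audio_feats = np.random.default_rng(0).normal(size=(4, 8))  # e.g. timbre/rhythm stats
labels = [0, 1, 2, 3]   # 4 emotion classes (quadrants of the valence-arousal plane)

text_feats = TfidfVectorizer().fit_transform(lyrics)
X = hstack([csr_matrix(audio_feats), text_feats]).tocsr()   # concatenate modalities

clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:1]))
```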


International Conference on Mobile Systems, Applications, and Services | 2013

NuActiv: recognizing unseen new activities using semantic attribute-based learning

Heng-Tze Cheng; Feng-Tso Sun; Martin L. Griss; Paul C. Davis; Jianguo Li; Di You

We study the problem of recognizing a new human activity when no training examples of that activity have ever been seen. Recognizing human activities is an essential element of user-centric and context-aware applications. Previous studies showed promising results using various machine learning algorithms, but most existing methods can only recognize activities that were previously seen in the training data; a previously unseen activity class cannot be recognized if there are no training samples for it in the dataset. Even if all activities could be enumerated in advance, labeled samples are often time-consuming and expensive to obtain, as they require substantial effort from human annotators or experts. In this paper, we present NuActiv, an activity recognition system that can recognize a human activity even when there are no training data for that activity class. First, we design a new representation of activities using semantic attributes, where each attribute is a human-readable term that describes a basic element or an inherent characteristic of an activity. Second, based on this representation, we develop a two-layer zero-shot learning algorithm for activity recognition. Finally, to reinforce recognition accuracy using minimal user feedback, we develop an active learning algorithm. Our approach is evaluated on two datasets: a 10-exercise-activity dataset we collected and a public dataset of 34 daily-life activities. Experimental results show that, using semantic attribute-based learning, NuActiv can generalize knowledge to recognize unseen new activities, achieving up to 79% accuracy in unseen activity recognition.
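
A minimal sketch of two-layer attribute-based zero-shot recognition as described (the attribute names, signature table, and classifiers are illustrative): layer one predicts semantic attributes from sensor features; layer two maps the predicted attribute vector to the nearest activity signature, which can belong to a class with no training data.

```python
# Zero-shot activity recognition via semantic attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

ATTRS = ["arm_up", "legs_moving", "torso_bent"]
# Human-defined attribute signatures; "squat" has no training examples.
SIGNATURES = {
    "jumping_jack": np.array([1, 1, 0]),
    "push_up":      np.array([0, 0, 1]),
    "squat":        np.array([0, 1, 1]),   # unseen class
}

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))              # sensor feature windows
A = (X[:, :3] > 0).astype(int)             # toy per-attribute labels

# Layer 1: one binary classifier per semantic attribute.
attr_models = [LogisticRegression().fit(X, A[:, j]) for j in range(len(ATTRS))]

def recognize(x):
    """Predict attribute scores, then match the nearest activity signature."""
    scores = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in attr_models])
    return min(SIGNATURES, key=lambda k: np.linalg.norm(scores - SIGNATURES[k]))

print(recognize(X[0]))
```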


International Conference on Multimedia and Expo | 2008

Automatic chord recognition for music classification and retrieval

Heng-Tze Cheng; Yi-Hsuan Yang; Yu-Ching Lin; I-Bin Liao; Homer H. Chen

As one of the most important mid-level features of music, the chord carries rich information about harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to that of existing systems. We further propose a new method for constructing chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.
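
A minimal sketch of an N-gram (here bigram) chord model, assuming per-frame acoustic chord scores are available from some front end: transition probabilities are estimated from training chord sequences with add-one smoothing and combined with the frame scores by Viterbi decoding. The paper's exact model and features are not reproduced.

```python
# Bigram chord model with Viterbi decoding over per-frame chord scores.
import numpy as np

CHORDS = ["C", "F", "G", "Am"]
train = [["C", "F", "G", "C"], ["Am", "F", "C", "G"], ["C", "G", "Am", "F"]]

V = len(CHORDS)
idx = {c: i for i, c in enumerate(CHORDS)}
counts = np.ones((V, V))                          # add-one smoothing
for seq in train:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
trans = np.log(counts / counts.sum(axis=1, keepdims=True))

def decode(emission_logp):
    """Viterbi decoding; emission_logp has shape (n_frames, V)."""
    n = len(emission_logp)
    dp = emission_logp[0].copy()
    back = np.zeros((n, V), dtype=int)
    for t in range(1, n):
        cand = dp[:, None] + trans + emission_logp[t][None, :]
        back[t] = cand.argmax(axis=0)             # best previous chord per chord
        dp = cand.max(axis=0)
    path = [int(dp.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [CHORDS[i] for i in reversed(path)]

frames = np.log(np.random.default_rng(0).dirichlet(np.ones(V), size=8))
print(decode(frames))
```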


ACM Multimedia | 2008

Mr. Emo: music retrieval in the emotion plane

Yi-Hsuan Yang; Yu-Ching Lin; Heng-Tze Cheng; Homer H. Chen

This technical demo presents a novel emotion-based music retrieval platform, called Mr. Emo, for organizing and browsing music collections. Unlike conventional approaches, which quantize emotions into classes, Mr. Emo defines emotion by two continuous variables, arousal and valence, and employs regression algorithms to predict them. Associated with its arousal and valence (AV) values, each music sample becomes a point in the arousal-valence emotion plane, so a user can easily retrieve music of a certain emotion by specifying a point or a trajectory in the plane. Being content-centric and functionally powerful, such emotion-based retrieval complements traditional keyword- or artist-based retrieval. The demo shows the effectiveness and novelty of music retrieval in the emotion plane.
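
A minimal sketch of the retrieval mechanism, with random stand-in features and annotations (the paper's regression algorithms and audio features are not specified here): two regressors predict arousal and valence per song, and a query point in the plane retrieves its nearest neighbors.

```python
# Emotion-plane retrieval: regress (arousal, valence), then query by point.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                # audio features per song
av_true = rng.uniform(-1, 1, size=(100, 2))   # annotated (arousal, valence)

arousal = SVR().fit(X, av_true[:, 0])
valence = SVR().fit(X, av_true[:, 1])
av_pred = np.column_stack([arousal.predict(X), valence.predict(X)])

def retrieve(point, k=5):
    """Return indices of the k songs closest to a query (arousal, valence)."""
    d = np.linalg.norm(av_pred - np.asarray(point), axis=1)
    return np.argsort(d)[:k]

print(retrieve((0.8, 0.6)))   # e.g. "energetic and positive" music
```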


IEEE International Conference on Pervasive Computing and Communications | 2014

Nonparametric discovery of human routines from sensor data

Feng-Tso Sun; Yi-Ting Yeh; Heng-Tze Cheng; Cynthia Kuo; Martin L. Griss

People engage in routine behaviors. Automatic routine discovery goes beyond low-level activity recognition, such as sitting or standing, and analyzes human behavior at a higher level (e.g., commuting to work). With recent developments in ubiquitous sensor technologies, it has become easier to acquire massive amounts of sensor data. One main line of research mines human routines from sensor data using parametric topic models such as latent Dirichlet allocation. The main shortcoming of parametric models is that they assume a fixed, pre-specified number of topics regardless of the data; choosing an appropriate value usually requires an inefficient trial-and-error model selection process, and it is even more difficult to find optimal parameter values in advance for personalized applications. In this paper, we present a novel nonparametric framework for human routine discovery that can infer high-level routines without knowing the number of latent topics beforehand. Our approach is evaluated on public datasets in two routine domains: a 34-daily-activity dataset and a transportation-mode dataset. Experimental results show that our nonparametric framework can automatically learn the appropriate model parameters from sensor data without any model selection procedure, and can outperform traditional parametric approaches on human routine discovery tasks.
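
A minimal sketch of the nonparametric idea using a hierarchical Dirichlet process topic model, here via gensim's HdpModel as an assumed stand-in for the paper's framework: days are "documents" of low-level activity words, and the number of routine topics is inferred from the data rather than pre-specified.

```python
# HDP topic model over activity-word "documents": no num_topics to choose.
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Each "document" is a day described by low-level activity words (toy data).
days = [["sit", "type", "coffee", "sit", "type"],
        ["walk", "bus", "walk", "office"],
        ["run", "shower", "breakfast"],
        ["sit", "type", "meeting", "coffee"]]

vocab = Dictionary(days)
corpus = [vocab.doc2bow(day) for day in days]

# Unlike LDA, HDP takes no number-of-topics argument; only topics with
# non-negligible posterior weight are effectively used.
hdp = HdpModel(corpus, id2word=vocab, random_state=0)
for topic_id, words in hdp.print_topics(num_topics=5, num_words=3):
    print(topic_id, words)
```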


International Conference on Consumer Electronics | 2011

Contactless gesture recognition system using proximity sensors

Heng-Tze Cheng; An Mei Chen; Ashu Razdan; Elliot B. Buller

In this paper, we present a novel contactless gesture recognition system using proximity sensors. A set of infrared-signal feature extraction methods and a decision-tree-based gesture classifier are proposed. The system allows a user to interact with mobile devices using intuitive gestures, without touching the screen or wearing or holding any additional device. Evaluation results show that the system is low-power and able to recognize 3D gestures with over 98% precision in real time.
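
A minimal sketch of the classification stage with invented swipe data (the real feature set and sensor layout are not reproduced): simple per-channel statistics such as peak time and amplitude feed a decision-tree classifier, which can separate left and right swipes by which sensor peaks first.

```python
# Decision-tree gesture classification from proximity-sensor statistics.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(ir):
    """ir: (n_samples, n_sensors) infrared proximity readings for one gesture."""
    return np.concatenate([ir.argmax(axis=0),   # when each sensor peaks
                           ir.max(axis=0),      # how strongly it peaks
                           ir.mean(axis=0)])

# Toy dataset: a left swipe peaks sensor 0 before sensor 1; a right swipe
# peaks them in the opposite order.
X, y = [], []
for _ in range(100):
    ir = rng.random((20, 2)) * 0.1
    if rng.random() < 0.5:
        ir[5, 0], ir[12, 1], label = 1.0, 1.0, 0   # left-to-right swipe
    else:
        ir[5, 1], ir[12, 0], label = 1.0, 1.0, 1   # right-to-left swipe
    X.append(features(ir)); y.append(label)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([X[0]]))
```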


Ubiquitous Computing | 2013

Towards zero-shot learning for human activity recognition using semantic attribute sequence model

Heng-Tze Cheng; Martin L. Griss; Paul C. Davis; Jianguo Li; Di You

Understanding human activities is important for user-centric and context-aware applications. Previous studies showed promising results using various machine learning algorithms, but most existing methods can only recognize activities that were previously seen in the training data. In this paper, we present a new zero-shot learning framework for human activity recognition that can recognize an unseen new activity even when there are no training samples of that activity in the dataset. We propose a semantic attribute sequence model that takes into account both the hierarchical and sequential nature of activity data. Evaluation on datasets in two activity domains shows that the proposed zero-shot learning approach achieves 70-75% precision and recall in recognizing unseen new activities, and outperforms supervised learning with limited labeled data for the new classes.
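
A minimal sketch of the sequential element only, under an assumed sticky-transition prior (not the paper's actual sequence model): per-frame attribute probabilities are smoothed with a two-state Bayes filter before signature matching, as in the NuActiv sketch above, so brief misdetections do not flip the recognized activity.

```python
# Temporal smoothing of a binary attribute with a sticky forward filter.
import numpy as np

STAY = 0.9   # assumed self-transition probability (sticky prior)

def forward_smooth(p_attr):
    """p_attr: per-frame classifier outputs P(attribute=on); returns
    filtered probabilities that incorporate the sticky transition prior."""
    out, belief = [], 0.5
    for p in p_attr:
        prior = STAY * belief + (1 - STAY) * (1 - belief)   # predict step
        num = prior * p                                     # update with evidence
        belief = num / (num + (1 - prior) * (1 - p))
        out.append(belief)
    return np.array(out)

noisy = np.array([0.9, 0.8, 0.2, 0.9, 0.85, 0.9])   # one spurious dip at frame 3
print(forward_smooth(noisy).round(2))                # the dip is damped
```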


Pervasive Computing and Communications | 2011

Imirok: Real-time imitative robotic arm control for home robot applications

Heng-Tze Cheng; Zheng Sun; Pei Zhang

Training home robots to behave like humans can help people with their daily chores and repetitive tasks. In this paper, we present Imirok, a system that remotely controls robotic arms by tracking user motion with low-cost, off-the-shelf mobile devices and a webcam. The motion tracking algorithm detects user motion in real time, without classifier training or predefined action sets. Experimental results show that the system achieves 90% precision and recall on motion detection against a blank background, and is robust to changes in background clutter and user-to-camera distance.
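
A minimal sketch of training-free motion detection by frame differencing with OpenCV, the general technique the description points to (Imirok's actual tracking pipeline, thresholds, and robot interface are not reproduced). The detected motion centroid is what would drive the arm.

```python
# Training-free motion detection: difference consecutive webcam frames.
import cv2

cap = cv2.VideoCapture(0)                     # assumes a webcam at index 0
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)            # pixel-wise change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        # (x + w/2, y + h/2) is the motion centroid that would drive the arm.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```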


Mobile Computing, Applications, and Services | 2010

SensOrchestra: Collaborative Sensing for Symbolic Location Recognition

Heng-Tze Cheng; Feng-Tso Sun; Senaka Buthpitiya; Martin L. Griss

The symbolic location of a user, such as a store name in a mall, is essential for context-based mobile advertising. Existing fingerprint-based localization using only a single phone is susceptible to noise and has a major limitation: the phone has to be held in the hand at all times. In this paper, we present SensOrchestra, a collaborative sensing framework for symbolic location recognition that groups nearby phones to recognize ambient sounds and images of a location collaboratively. We investigated audio and image features, and designed a classifier fusion model to integrate estimates from different phones. We also evaluated the energy consumption, bandwidth, and response time of the system. Experimental results show that SensOrchestra achieved 87.7% recognition accuracy, which cuts the error rate of the single-phone approach in half and eliminates the limitation on how users carry their phones. We believe general location or activity recognition systems can all benefit from this collaborative framework.
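
A minimal sketch of fusing per-phone estimates, assuming each phone outputs a class posterior (the paper's fusion model is more elaborate than this): log-posteriors are averaged across phones, a product-rule combination in which confident phones outweigh noisy ones.

```python
# Classifier fusion across co-located phones by averaging log-posteriors.
import numpy as np

LOCATIONS = ["coffee_shop", "bookstore", "food_court"]

# Posteriors from three nearby phones; phone 2 is in a pocket and noisy.
phone_posteriors = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.25, 0.15],
    [0.34, 0.33, 0.33],
])

fused = np.exp(np.log(phone_posteriors).mean(axis=0))
fused /= fused.sum()                          # renormalize to a distribution

print(LOCATIONS[int(fused.argmax())], fused.round(3))
```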

Collaboration


Dive into Heng-Tze Cheng's collaborations.

Top Co-Authors

Martin L. Griss (Carnegie Mellon University)

Feng-Tso Sun (Carnegie Mellon University)

Senaka Buthpitiya (Carnegie Mellon University)

Homer H. Chen (National Taiwan University)

Yu-Ching Lin (National Taiwan University)