Rung-Huei Liang
National Taiwan University
Publication
Featured research published by Rung-Huei Liang.
IEEE International Conference on Automatic Face and Gesture Recognition | 1998
Rung-Huei Liang; Ming Ouhyoung
A large-vocabulary sign language interpreter is presented that performs real-time continuous gesture recognition of sign language using a data glove. Sign language, usually regarded as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures used daily to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is solved first, and statistical analysis is then carried out according to four parameters of a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). The system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent setting, a sentence of gestures drawn from these vocabularies can be recognized continuously in real time, with an average recognition rate of 80.4%.
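The core recognition step can be illustrated with a small sketch, assuming each sign is modelled by its own discrete HMM and that the glove stream has already been segmented and quantized into posture codes; this is not the paper's implementation, only the standard forward-algorithm scoring it relies on.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): classify a segmented
# gesture by scoring it against one discrete HMM per sign and picking the
# most likely one. Observations are assumed to be quantized posture codes.

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of an observation sequence under a discrete HMM.

    obs    : sequence of integer observation symbols
    log_pi : (N,)   log initial state probabilities
    log_A  : (N, N) log transition probabilities
    log_B  : (N, M) log emission probabilities
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over previous states for each current state
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def recognize_sign(obs, sign_models):
    """Return the sign whose HMM best explains the observed gesture segment.

    sign_models : dict sign -> (log_pi, log_A, log_B)
    """
    return max(sign_models,
               key=lambda s: forward_log_likelihood(obs, *sign_models[s]))
```

In a continuous-recognition setting such as the one described, this scoring would be applied to each segment produced by the end-point detector before the sentence is assembled.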
The Visual Computer | 2006
Fu-Che Wu; Wan-Chun Ma; Rung-Huei Liang; Bing-Yu Chen; Ming Ouhyoung
In previous research, three main approaches have been employed to solve the skeleton extraction problem: medial axis transform (MAT), generalized potential field, and decomposition-based methods. These three approaches are formulated from three different concepts, namely surface variation, internal energy distribution, and the connectivity of parts. By combining these concepts, this paper creates a concise structure to represent the control skeleton of an arbitrary object. First, an algorithm is proposed to detect the end, connection, and joint points of an arbitrary 3D object. These three kinds of points make up the skeleton and are the most important to consider when describing it. To keep the point extraction stable, a prong-feature detection technique and a level iso-surface function based on the repulsive force field are employed. A neighborhood relationship inherited from the surface, which describes how these positions are connected, is then defined. Based on this relationship, the skeleton is finally constructed and named the domain connected graph (DCG). The DCG not only preserves the topology of a 3D object, but is also less sensitive than the MAT to perturbations of shape. Moreover, results on complicated 3D models consisting of thousands of polygons show that the DCG conforms to human perception.
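The repulsive force field idea can be sketched as follows, assuming the force at an interior sample is accumulated from surface samples with an inverse-square falloff; the falloff law, sampling scheme, and local-minimum test are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch: repulsive force at an interior sample point,
# accumulated from surface sample points with inverse-square falloff.
# Points where the force magnitude is locally minimal are candidates for
# the skeleton's end / connection / joint points.

def repulsive_force(p, surface_points, eps=1e-8):
    """Force pushing interior point p away from every surface sample."""
    d = p - surface_points                      # (S, 3) vectors surface -> p
    r2 = np.sum(d * d, axis=1) + eps            # squared distances
    return np.sum(d / (r2 * np.sqrt(r2))[:, None], axis=0)

def candidate_skeleton_points(interior_points, surface_points, neighbors):
    """Keep interior points whose force magnitude is a local minimum
    among their neighbors (neighbors: dict index -> list of indices)."""
    mags = [np.linalg.norm(repulsive_force(p, surface_points))
            for p in interior_points]
    return [i for i, m in enumerate(mags)
            if all(m <= mags[j] for j in neighbors.get(i, []))]
```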
Computer Graphics Forum | 1995
Rung-Huei Liang; Ming Ouhyoung
Many modes of communication are used between human and computer, and gesture is considered one of the most natural in a virtual reality system. Because of its intuitiveness and its ability to help the hearing impaired or speech impaired, we developed a gesture recognition system. Given the worldwide use of ASL (American Sign Language), this system focuses on recognizing a continuous flow of ASL alphabets to spell a word, followed by speech synthesis, and adopts a simple and efficient windowed template matching strategy to achieve real-time, continuous recognition. In addition to the abduction and flex information of a gesture, we introduce the concept of a contact-point into our system to resolve the intrinsic ambiguities of some ASL gestures. Five tact switches, serving as contact-points and sensed by an analogue-to-digital board, are sewn onto a glove cover to enhance the functions of a traditional data glove.
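A hedged sketch of windowed template matching over glove readings is given below; the feature layout, window size, and distance threshold are assumptions for illustration, not the system's tuned values.

```python
import numpy as np

# Sketch: each ASL alphabet has a stored template vector of glove readings
# (flex, abduction, and the five contact-point switches). A letter is
# emitted only if the same best match wins over a short window of
# consecutive frames, which suppresses transient readings between letters.

def best_match(frame, templates):
    """Return (letter, distance) of the closest template to one glove frame."""
    letter = min(templates, key=lambda k: np.linalg.norm(frame - templates[k]))
    return letter, np.linalg.norm(frame - templates[letter])

def windowed_recognition(frames, templates, window=5, max_dist=0.3):
    """Emit a letter when it wins every frame in a sliding window."""
    letters, recent = [], []
    for frame in frames:
        letter, dist = best_match(np.asarray(frame, dtype=float), templates)
        recent.append(letter if dist < max_dist else None)
        recent = recent[-window:]
        if len(recent) == window and len(set(recent)) == 1 and recent[0]:
            if not letters or letters[-1] != recent[0]:
                letters.append(recent[0])
    return letters
```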
Pacific Conference on Computer Graphics and Applications | 2003
Pin-Chou Liu; Fu-Che Wu; Wan-Chun Ma; Rung-Huei Liang; Ming Ouhyoung
A method is proposed in this paper to automatically generate the animation skeleton of a model so that the model can be manipulated according to the skeleton. With our method, users can construct the skeleton in a short time and make a static model both dynamic and alive. The primary steps of our method are finding skeleton joints, connecting the joints to form an animation skeleton, and binding skin vertices to the skeleton. Initially, a repulsive force field is constructed inside a given model, and a set of points with locally minimal force magnitude is found based on the force field. Then a modified thinning algorithm is applied to generate an initial skeleton, which is further refined to become the final result. When the skeleton construction is complete, skin vertices are anchored to the skeleton joints according to the distances between the vertices and the joints. To build the repulsive force field, hundreds of rays are shot radially from positions inside the model, so the force field computation takes most of the execution time; an octree structure is therefore used to accelerate this process. Currently, generating the skeleton of a typical 3D model with 1000 to 10000 polygons takes less than 2 minutes on an Intel Pentium 4 2.4 GHz PC.
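The final binding step can be sketched in a few lines, assuming the simplest possible rule of attaching each vertex to its nearest joint; the paper's actual binding may weight or smooth this assignment.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact rule): bind every skin
# vertex to its closest skeleton joint, the last step described above once
# the skeleton joints have been extracted and connected.

def bind_skin_to_skeleton(vertices, joints):
    """Return, for each vertex, the index of the nearest joint."""
    vertices = np.asarray(vertices, dtype=float)   # (V, 3)
    joints = np.asarray(joints, dtype=float)       # (J, 3)
    d = np.linalg.norm(vertices[:, None, :] - joints[None, :, :], axis=2)
    return np.argmin(d, axis=1)                    # (V,) joint index per vertex
```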
Displays | 1994
Rung-Huei Liang; Ming Ouhyoung
This paper presents the 'Impromptu Conductor' virtual reality system, which combines computer graphics and music, with the emphasis on computer music. We introduce the supervised learning method from the field of pattern recognition into the reproduction and organization of music. We have proposed and implemented a practical way of capturing human motion to create music, as well as generating the corresponding images on a screen, by using a 6D tracker to simulate the user's hand in real time. The mapping between music and hand motion in our system is not a simple one-to-one function; it is constrained and modified by music styles collected through supervised learning. Thus the music style produced by an interactive user strongly depends on the movement of their hands. The implemented system demonstrates that the system designer can give different styles of feedback to different patterns of a user's behaviour.
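A purely illustrative sketch of the supervised-learning idea: motion feature vectors from the tracker are matched against labelled examples collected in a training phase, and the winning style constrains how the music is reproduced. The feature names and the nearest-neighbour rule are assumptions, not the paper's actual mapping.

```python
import numpy as np

# Sketch: classify a hand-motion feature vector (e.g., stroke speed and
# amplitude from the 6D tracker) against labelled training examples, so
# the chosen style can then shape the generated music.

def classify_style(features, training_examples):
    """Nearest-neighbour style label for one motion feature vector."""
    features = np.asarray(features, dtype=float)
    label, _ = min(((lab, np.linalg.norm(features - np.asarray(ex)))
                    for lab, ex in training_examples),
                   key=lambda t: t[1])
    return label

# Hypothetical usage: two training examples and a query stroke.
examples = [("march", [2.0, 0.8]), ("waltz", [0.9, 1.5])]
print(classify_style([1.8, 0.9], examples))   # -> "march"
```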
Computer Graphics Forum | 1996
Ming Ouhyoung; Yung-Yu Chuang; Rung-Huei Liang
Because of its view independence and photorealistic image generation in a diffuse environment, radiosity is suitable for an interactive walk-through system. The drawback of radiosity is that form factor estimation is time-consuming, and furthermore, inserting, deleting, or moving an object makes the whole costly rendering process repeat itself. To solve this problem, we encapsulate the information necessary for form factor calculation and visibility estimation in each object, which is called a reusable radiosity object. An object is defined as one or more clusters of triangles. Whenever the scene is updated, the radiosity algorithm looks up the prestored information in each object, speeding itself up by two orders of magnitude. In addition, the solution time based on cluster representatives is linear in the number of objects, since each object is reusable and encapsulated with preprocessed data at every level of the hierarchy. We also analyze the previously disregarded error in visibility estimation and propose a statistically optimal adaptive algorithm to maintain the same error for each link.
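The per-object encapsulation can be sketched as below; the data layout is an assumption, and the form factor shown is the standard disc approximation rather than the paper's exact estimator.

```python
import numpy as np
from dataclasses import dataclass, field

# Sketch: each object carries its own precomputed cluster data
# (representative points, normals, areas) plus a cache of links, so moving
# or replacing an object only invalidates links involving that object
# instead of restarting the whole radiosity solution.

@dataclass
class RadiosityObject:
    centers: np.ndarray                        # (P, 3) cluster representatives
    normals: np.ndarray                        # (P, 3) unit normals
    areas: np.ndarray                          # (P,)   cluster areas
    links: dict = field(default_factory=dict)  # cached form factors / visibility

def form_factor(ci, ni, cj, nj, aj):
    """Disc-approximation form factor from patch i to patch j."""
    d = cj - ci
    r2 = float(d @ d)
    cos_i = max(float(ni @ d), 0.0) / np.sqrt(r2)
    cos_j = max(float(-(nj @ d)), 0.0) / np.sqrt(r2)
    return cos_i * cos_j * aj / (np.pi * r2 + aj)
```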
Wireless Communications and Networking Conference | 2007
Sheng-Cheng Yeh; Wu-Hsiao Shyu; Ching-Hui Chen; Rung-Huei Liang
Recently, mobile services have become increasingly popular. Mobile applications not only take advantage of contextual information, such as location-awareness, to offer richer services to a mobile host (MH), but also maintain existing transport-layer connections as the MH moves from one location to another. This paper presents our most recent work: the AcoustaNomad project. AcoustaNomad not only uses Mobile IPv6 to maintain existing connections even when the MH changes location and address, but also uses a location-aware technique to detect what kind of services the new location provides. In addition, AcoustaNomad includes two mature mobile applications: mobile learning and audio blogging. This paper proposes the architecture of AcoustaNomad and presents experimental results that demonstrate its ability to enable location-aware services and applications.
Journal of Computer Science and Technology | 2004
Pei-Hsuan Tu; I-Chen Lin; Jeng-Sheng Yeh; Rung-Huei Liang; Ming Ouhyoung
In this paper, a facial animation system is proposed that simultaneously captures, from video clips, both geometric information and the illumination changes of surface details, called expression details; the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one are where a wrinkle or crease appears. Therefore, the gradients of the ratio value at each pixel are regarded as changes of the face surface, and the original surface normals can be adjusted according to these gradients. Based on this idea, we convert the ratio images into a sequence of normal maps and apply them to animated 3D model rendering. With this expression detail mapping, the resulting facial animations are more lifelike and more expressive.
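A minimal sketch of the ratio-image idea, assuming grayscale images and a simple additive normal perturbation; the perturbation scale is an assumption for the sketch, not the paper's calibration.

```python
import numpy as np

# Sketch: the ratio of an expressive face to the neutral face highlights
# wrinkles (values below one), and its image-space gradients are used to
# tilt the original per-pixel surface normals before rendering.

def ratio_image(expressive, neutral, eps=1e-4):
    """Per-pixel intensity ratio of the expressive face to the neutral face."""
    return expressive / (neutral + eps)

def adjust_normals(normals, ratio, scale=1.0):
    """Tilt per-pixel normals by the gradients of the ratio image.

    normals : (H, W, 3) original unit normals
    ratio   : (H, W)    ratio image
    """
    gy, gx = np.gradient(ratio)
    perturbed = normals + scale * np.dstack([gx, gy, np.zeros_like(ratio)])
    return perturbed / np.linalg.norm(perturbed, axis=2, keepdims=True)
```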
International Conference on Computer Graphics and Interactive Techniques | 2004
Fu-Che Wu; Bing-Yu Chen; Rung-Huei Liang; Ming Ouhyoung
In this paper, a simple and robust prong-feature detection algorithm is proposed. A prong feature is an assisting feature that can be used in many applications. For instance, it can be used to identify a model that consists of several prong parts for model decomposition. It is a useful feature for skeleton extraction as well as a comparable feature for object matching. In addition, it can serve as a fast alignment feature for model alignment and morphing. Furthermore, it is an invariant feature for mesh simplification.
Journal of Information Science and Engineering | 2013
Tse-Hsien Wang; Bing-Yu Chen; Rung-Huei Liang
In this paper, an interactive system for generating and editing dynamic background scenes is proposed, based on an improved motion graph. By analyzing input motions of limited frame length and their metadata, our system can synthesize a large number of varied motions and compose a dynamic background scene of unlimited frame length, connecting the motion pieces through smooth transitions based on their motion graph layers. The motion pieces are generated by segmenting the input motions and their deforming meshes spatially and temporally, while the smooth transitions connecting the motion pieces are obtained by searching for the best path in the motion graph layers according to the specified circumstances. Finally, the resulting motions are optimized by repeatedly substituting motion sub-sequences. To design a dynamic background scene, users can interactively specify physical constraints of the environment on keyframes, such as wind direction or flow velocity, or even simple paths for characters to follow, and the system automatically generates a continuous and natural dynamic background scene in accordance with the user-specified environment constraints.
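The path search over a motion graph can be sketched as a uniform-cost search; the graph layout, the cost model (transition cost plus a constraint penalty per piece), and the fixed path length are assumptions for illustration, not the system's actual formulation.

```python
import heapq

# Sketch: each motion piece is a node, edges carry a transition cost (how
# dissimilar the connecting frames are), and each node also pays a penalty
# for violating the user-specified constraints (e.g., wind direction).

def best_motion_path(graph, constraint_penalty, start, length):
    """Cheapest walk of `length` pieces through the motion graph.

    graph              : dict node -> list of (next_node, transition_cost)
    constraint_penalty : dict node -> extra cost under current constraints
    """
    # State: (accumulated cost, current node, path so far)
    heap = [(constraint_penalty.get(start, 0.0), start, [start])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if len(path) == length:
            return path, cost
        for nxt, t_cost in graph.get(node, []):
            heapq.heappush(
                heap,
                (cost + t_cost + constraint_penalty.get(nxt, 0.0),
                 nxt, path + [nxt]))
    return None, float("inf")
```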