Sylvie C. W. Ong
National University of Singapore
Publications
Featured research published by Sylvie C. W. Ong.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005
Sylvie C. W. Ong; Surendra Ranganath
Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and on developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes which result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems: dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendices), which contain several illustrative examples and videos of signing, can be found at www.computer.org/publications/dlib.
The International Journal of Robotics Research | 2010
Sylvie C. W. Ong; Shao Wei Png; David Hsu; Wee Sun Lee
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments, and they have been applied to various robotic tasks. However, solving POMDPs exactly is computationally intractable, and a major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be fully observable. We use a factored model to represent separately the fully and partially observable components of a robot's state and derive a compact lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach speeds up a leading point-based POMDP algorithm many times over.
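The core data structure here, a belief that stores the observed state component exactly and keeps a probability distribution only over the hidden component, is easy to sketch. The following is a minimal illustration under our own assumptions; the tensor layout of `T_x`, `T_y`, and `O` is hypothetical, not the paper's implementation:

```python
import numpy as np

# Minimal sketch of a mixed-observability belief update: the fully
# observable component x is stored exactly, and only the hidden
# component y carries a probability distribution. The model arrays
# below are illustrative assumptions, not the paper's data structures.

def momdp_belief_update(x, b_y, a, o, x_next, T_x, T_y, O):
    """One Bayes-filter step over the hidden component y only.

    x, x_next : indices of the observed state component before/after
    b_y       : belief over the hidden component, shape (|Y|,)
    a, o      : action and observation indices
    T_x[x, y, a, x']     : P(x' | x, y, a)
    T_y[x, y, a, x', y'] : P(y' | x, y, a, x')
    O[x', y', a, o]      : P(o | x', y', a)
    """
    # Propagate the hidden belief through the factored transition model.
    pred = np.einsum("y,y,yz->z",
                     b_y,
                     T_x[x, :, a, x_next],
                     T_y[x, :, a, x_next, :])
    # Weight by the observation likelihood and renormalize.
    b_next = O[x_next, :, a, o] * pred
    return b_next / b_next.sum()
```

Because only the distribution over y is maintained, point-based backups sweep a |Y|-dimensional belief simplex rather than an |X|·|Y|-dimensional one, which is the lower-dimensional representation the abstract describes.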
Robotics: Science and Systems | 2009
Sylvie C. W. Ong; Shao Wei Png; David Hsu; Wee Sun Lee
Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for motion planning of autonomous robots in uncertain and dynamic environments. They have been successfully applied to various robotic tasks, but a major challenge is to scale up POMDP algorithms for more complex robotic systems. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be fully observable. Exploiting this, we use a factored model to represent separately the fully and partially observable components of a robot's state and derive a compact lower-dimensional representation of its belief space. We then use this factored representation in conjunction with a point-based algorithm to compute approximate POMDP solutions. Separating fully and partially observable state components using a factored model opens up several opportunities to improve the efficiency of point-based POMDP algorithms. Experiments show that on standard test problems, our new algorithm is many times faster than a leading point-based POMDP algorithm.
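The factored belief update underlying this representation can be written compactly. The notation below is a reconstruction for illustration, not a quotation from the paper: $x, x'$ are the observed state components before and after taking action $a$, $o$ is the observation, and $b_Y$ is the belief over the hidden component $y$:

$$
b'_Y(y') \;\propto\; O(x', y', a, o) \sum_{y} T_X(x, y, a, x')\, T_Y(x, y, a, x', y')\, b_Y(y)
$$

Since $x'$ is observed directly, only the distribution over $y$ has to be filtered, which is what yields the lower-dimensional belief space.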
IEEE International Conference on Automatic Face and Gesture Recognition | 2004
Sylvie C. W. Ong; Surendra Ranganath
Grammatical information conveyed through systematic temporal and spatial movement modifications is an integral aspect of sign language communication. We propose to model these systematic variations as simultaneous channels of information. Classification results at the channel level are output to Bayesian networks which recognize both the basic gesture meaning and the grammatical information (here referred to as layered meanings). With a simulated vocabulary of 6 basic signs and 5 possible layered meanings, test data from eight test subjects was recognized with 85.0% accuracy. We also adapt a system trained on three test subjects to recognize gesture data from a fourth person, based on a small set of adaptation data. We obtained a gesture recognition accuracy of 88.5%, which is a 75.7% reduction in error rate compared to the unadapted system.
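A minimal sketch of this kind of channel fusion: assuming per-channel classifiers emit discrete labels that are conditionally independent given the (sign, layered meaning) pair, a posterior over both can be obtained by multiplying channel likelihoods. All probability tables, channel counts, and label-set sizes below are hypothetical placeholders, not the paper's model:

```python
import numpy as np

# Naive-Bayes-style fusion of channel-level classifier outputs.
# Channels are assumed conditionally independent given the
# (sign, layered meaning) pair; all tables are random placeholders.

N_SIGNS, N_LAYERED = 6, 5

rng = np.random.default_rng(0)
prior = np.full((N_SIGNS, N_LAYERED), 1.0 / (N_SIGNS * N_LAYERED))

def normalize(t):
    return t / t.sum(axis=-1, keepdims=True)

# P(channel_label | sign, layered meaning) for each channel, shape
# (N_SIGNS, N_LAYERED, n_channel_labels); label-set sizes are arbitrary.
channel_cpts = [normalize(rng.random((N_SIGNS, N_LAYERED, k)))
                for k in (4, 3, 5)]

def infer(channel_outputs):
    """MAP estimate of (sign, layered meaning) from channel labels."""
    posterior = prior.copy()
    for cpt, label in zip(channel_cpts, channel_outputs):
        posterior *= cpt[:, :, label]   # fold in one channel's evidence
    posterior /= posterior.sum()
    sign, layered = np.unravel_index(posterior.argmax(), posterior.shape)
    return sign, layered, posterior

sign, layered, post = infer([1, 0, 3])
```

The paper's Bayesian networks are richer than this flat fusion, but the flavor of the computation, combining channel-level evidence and reading off a joint estimate of sign and layered meaning, is similar.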
Pattern Recognition | 2006
Sylvie C. W. Ong; Surendra Ranganath; Y. V. Venkatesh
Sign language communication includes not only lexical sign gestures but also grammatical processes which represent inflections through systematic variations in sign appearance. We present a new approach to analyse these inflections by modelling the systematic variations as parallel channels of information with independent feature sets. A Bayesian network framework is used to combine the channel outputs and infer both the basic lexical meaning and inflection categories. Experiments using a simulated vocabulary of six basic signs and five different inflections (a total of 20 distinct gestures) obtained from multiple test subjects yielded 85.0% recognition accuracy. We also propose an adaptation scheme to extend a trained system to recognize gestures from a new person by using only a small set of data from the new person. This scheme yielded 88.5% recognition accuracy for the new person while the unadapted system yielded only 52.6% accuracy.
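The abstract does not spell out the adaptation scheme, so the sketch below shows a standard MAP-style mean adaptation only as an illustration of the general idea of adapting a trained model with a small amount of new-user data; the function, its parameters, and the interpolation rule are our assumptions:

```python
import numpy as np

# Illustrative MAP-style mean adaptation. This is a standard recipe,
# offered only as an analogy to the paper's adaptation scheme: each
# class mean is pulled toward the new user's sample mean, weighted by
# how much adaptation data is available for that class.

def map_adapt_means(mu_si, new_data, labels, tau=10.0):
    """mu_si:    (n_classes, dim) means trained on the original signers.
    new_data: (n_samples, dim) features from the new person.
    labels:   (n_samples,) class index of each new sample.
    tau:      prior strength; larger tau trusts the original model more.
    """
    new_data, labels = np.asarray(new_data), np.asarray(labels)
    mu_adapted = mu_si.copy()
    for c in range(mu_si.shape[0]):
        x_c = new_data[labels == c]
        if len(x_c):
            # Convex blend of the prior mean and the new-user sample sum.
            mu_adapted[c] = (tau * mu_si[c] + x_c.sum(axis=0)) / (tau + len(x_c))
    return mu_adapted
```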
Analysis and Modeling of Faces and Gestures | 2007
Sylvie C. W. Ong; Surendra Ranganath
This paper addresses an aspect of sign language (SL) recognition that has largely been overlooked in previous work and yet is integral to signed communication. It is the most comprehensive work to date on recognizing complex variations in sign appearance due to grammatical processes (inflections), which systematically modulate the temporal and spatial dimensions of a root sign word to convey information in addition to lexical meaning. We propose a novel dynamic Bayesian network, the Multichannel Hierarchical Hidden Markov Model (MH-HMM), as a modelling and recognition framework for continuously signed sentences that include modulated signs. This model captures the hierarchical, sequential and parallel organization in signing while requiring synchronization between parallel data streams at sign boundaries. Experimental results using particle filtering for decoding demonstrate the feasibility of using the MH-HMM for recognizing inflected signs in continuous sentences.
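Decoding a richly structured dynamic Bayesian network such as the MH-HMM exactly is expensive, which is why particle filtering is attractive. The loop below is a generic bootstrap particle filter, sketched only to show the shape of such a decoder; `sample_transition` and `obs_likelihood` stand in for model-specific MH-HMM components and are hypothetical:

```python
import numpy as np

# Generic bootstrap particle filter, sketched to show how decoding a
# DBN can proceed when exact inference is intractable. The transition
# sampler and observation model are hypothetical stand-ins for the
# model-specific MH-HMM components.

def particle_filter(observations, init_states, sample_transition,
                    obs_likelihood, rng=np.random.default_rng(0)):
    particles = list(init_states)   # hidden-state hypotheses
    n = len(particles)
    for obs in observations:
        # Propagate each particle through the transition model.
        particles = [sample_transition(p, rng) for p in particles]
        # Weight by how well each hypothesis explains the observation.
        w = np.array([obs_likelihood(p, obs) for p in particles])
        w /= w.sum()
        # Resample to focus computation on likely state trajectories.
        idx = rng.choice(n, size=n, p=w)
        particles = [particles[i] for i in idx]
    return particles
```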
International Gesture Workshop | 2003
Sylvie C. W. Ong; Surendra Ranganath
Automatic sign language recognition research has largely not addressed an integral aspect of sign language communication – grammatical inflections which are conveyed through systematic temporal and spatial movement modifications. We propose to use low-level static and dynamic classifiers, together with Bayesian Networks, to classify gestures that include these inflections layered on top of the basic meaning. With a simulated vocabulary of 6 basic signs and 4 different layered meanings, test data for four test subjects was classified with 84.6% accuracy.
International Conference on Pattern Recognition | 2002
Sylvie C. W. Ong; Surendra Ranganath; Y. V. Venkatesh
Signs produced by gestures (such as in American Sign Language) can have a basic meaning coupled with additional meanings that are layered over the basic meaning of the sign. These layered meanings are conveyed by temporal and spatial modification of the basic form of the gesture movement. The work reported in this paper seeks to recognize temporal and spatial modifiers of hand movement and integrates them with recognition of the basic meaning of the sign. To this end, a Bayesian network framework is explored with a simulated vocabulary of 4 basic signs which give rise to 14 different combinations of basic meanings and layered meanings. Recognition accuracies of up to 88.2% were obtained.
International Conference on Control, Automation, Robotics and Vision | 2002
Sylvie C. W. Ong; Surendra Ranganath; Y. V. Venkatesh
Signs produced by gestures (such as in American Sign Language) can have a basic meaning coupled with additional meanings that are like layers added to the basic meaning of the sign. These layered meanings are conveyed by systematic temporal and spatial modification of the basic form of the gesture. The work reported in this paper seeks to recognize temporal and spatial modifiers of hand movement and integrates them with the recognition of the basic meaning of the sign. To this end, a Bayesian network framework is explored with a simulated vocabulary of 4 basic signs which give rise to 14 different combinations of basic meanings and layered meanings. We approached the problem of deciphering layered meanings by drawing analogies to the gesture parameters in parametric HMMs, which represent systematic spatial modifications to gesture movement. Various Bayesian network structures were compared for recognizing the signs with layered meanings. The best-performing network yielded 85.5% accuracy.
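For context, the parametric HMM referenced here models systematic variation by letting each state's output density depend on a parameter vector; in its usual formulation (our reconstruction, not quoted from this paper), the state-$j$ output mean is a linear function of the parameter $\boldsymbol{\theta}$:

$$
b_j(\mathbf{x};\,\boldsymbol{\theta}) \;=\; \mathcal{N}\!\big(\mathbf{x};\; W_j\,\boldsymbol{\theta} + \bar{\boldsymbol{\mu}}_j,\; \Sigma_j\big)
$$

The analogy drawn in the paper is that layered meanings play a role similar to $\boldsymbol{\theta}$: they modify the spatial form of the movement systematically while the basic sign fixes the underlying model.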
National Conference on Artificial Intelligence | 2010
Li Ling Ko; David Hsu; Wee Sun Lee; Sylvie C. W. Ong