Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shan Lu is active.

Publication


Featured research published by Shan Lu.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Color-based hands tracking system for sign language recognition

Kazuyuki Imagawa; Shan Lu; Seiji Igi

The paper describes a real-time system that tracks the uncovered, unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs, and then tracks the location of each hand with a Kalman filter. The system has been tested on actual sign-language motion by native signers. The experimental results indicate that the system can track the hands even while they overlap the face.
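The abstract outlines a concrete pipeline: skin-color segmentation into blobs, then per-hand tracking with a Kalman filter. Below is a minimal sketch of that idea in Python, assuming OpenCV and NumPy; the HSV skin thresholds, the blob-size cutoff, and the nearest-blob association step are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

# Hypothetical skin-color bounds in HSV; real systems calibrate these per signer.
SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def make_kalman():
    """Constant-velocity Kalman filter over state (x, y, dx, dy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def skin_blobs(frame_bgr):
    """Extract skin-colored connected components; return their centroids."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background) and tiny noise blobs (area cutoff is invented).
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 200]

def track(frames):
    """Yield the filter's per-frame position estimate for one hand."""
    kf = make_kalman()
    for frame in frames:
        prediction = kf.predict()[:2].ravel()
        blobs = skin_blobs(frame)
        if blobs:
            # Associate the blob nearest to the prediction with the tracked hand.
            x, y = min(blobs, key=lambda c: np.hypot(c[0] - prediction[0],
                                                     c[1] - prediction[1]))
            kf.correct(np.array([[x], [y]], dtype=np.float32))
        yield prediction
```

Associating the blob nearest to the filter's prediction is what lets a track of this kind coast through frames where the hand and face merge into a single skin-colored region.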


International Conference on Pattern Recognition | 2000

Recognition of local features for camera-based sign language recognition system

Kazuyuki Imagawa; Hideaki Matsuo; Rin-ichiro Taniguchi; Daisaku Arita; Shan Lu; Seiji Igi

A sign language recognition system must use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an adequate local-feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols that correspond to clusters produced by a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign-language word even in two-handed and hand-to-hand contact cases.
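As a rough illustration of the symbol idea, the sketch below realizes the eigenspace with PCA and the clustering with k-means, assuming scikit-learn; the component and cluster counts are made-up parameters, and the paper's own clustering technique may well differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_symbol_table(hand_images, n_components=20, n_symbols=32, seed=0):
    """Fit an eigenspace (PCA) and a clustering over training hand images.

    hand_images: array of shape (n_samples, h*w), flattened grayscale crops.
    Returns (pca, kmeans); the k-means cluster indices play the role of symbols.
    """
    pca = PCA(n_components=n_components).fit(hand_images)
    coords = pca.transform(hand_images)
    kmeans = KMeans(n_clusters=n_symbols, random_state=seed, n_init=10).fit(coords)
    return pca, kmeans

def to_symbols(pca, kmeans, hand_images):
    """Map new hand images to cluster indices (symbols) for word matching."""
    return kmeans.predict(pca.transform(hand_images))
```

Mapping variable hand appearance onto a discrete alphabet of cluster indices is what lets a word-level matcher consume the local features at all.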


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

The Recognition Algorithm with Non-contact for Japanese Sign Language Using Morphological Analysis

Hideaki Matsuo; Seiji Igi; Shan Lu; Yuji Nagashima; Yuji Takata; Terutaka Teshima

This paper documents a method for recognizing Japanese Sign Language (JSL) from projected images. The goal of the movement recognition is to foster communication between hearing-impaired people and hearing people. We use a stereo camera to record three-dimensional movements, an image-processing board to track the movements, and a personal computer as the image processor that recognizes JSL patterns. The system works by formalizing the space around the signer according to the characteristics of the human body, determining components such as location and movement, and then recognizing sign-language patterns.
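A hedged sketch of the "formalizing the space around the signer" step: hand positions are normalized by a body measure (face height here, an assumption) and quantized into named regions. The grid and the region names are invented for illustration, not taken from the paper.

```python
def hand_region(hand_xy, face_xy, face_height):
    """Classify a hand position relative to the face, in face-height units."""
    dx = (hand_xy[0] - face_xy[0]) / face_height
    dy = (hand_xy[1] - face_xy[1]) / face_height  # image y grows downward
    vertical = "head" if dy < 0.8 else ("chest" if dy < 2.0 else "waist")
    horizontal = "center" if abs(dx) < 0.6 else ("left" if dx < 0 else "right")
    return f"{vertical}-{horizontal}"

# Example: a hand 20 px right of and 180 px below a 90 px-tall face.
print(hand_region((120, 260), (100, 80), 90))  # -> "waist-center"
```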


Asian Conference on Computer Vision | 1998

Real-Time Tracking of Human Hands from a Sign-Language Image Sequence

Kazuyuki Imagawa; Shan Lu; Seiji Igi

We have developed a real-time system that tracks the hands of a person performing sign language. The system tracks hands without markers or colored gloves, even if the hands overlap the face. First, the system extracts the hand and face regions from the sign-language image sequence using an improved histogram backprojection. Next, it tracks the hands from blobs computed from both the extracted image and the time-differential image. The system has been tested on both primitive motions and actual sign-language motions by native signers. The experimental results indicate that the system can track the hands even while they overlap the face.
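For readers unfamiliar with histogram backprojection, the sketch below shows the plain OpenCV version, assuming a small skin-color sample image is supplied from somewhere; the paper's improved variant is not reproduced here, and the histogram bin counts are arbitrary choices.

```python
import cv2
import numpy as np

def backproject_skin(frame_bgr, skin_sample_bgr):
    """Return a per-pixel skin likelihood map via H-S histogram backprojection."""
    hsv_sample = cv2.cvtColor(skin_sample_bgr, cv2.COLOR_BGR2HSV)
    hsv_frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hue-saturation histogram of the skin sample acts as the color model.
    hist = cv2.calcHist([hsv_sample], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Each frame pixel is scored by how often its (H, S) bin occurs in the model.
    return cv2.calcBackProject([hsv_frame], [0, 1], hist, [0, 180, 0, 256], 1)
```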


The Journal of the Institute of Image Information and Television Engineers | 2000

Human Interface. Recognition of Local Features for Camera-based Sign-Language Recognition System.

Kazuyuki Imagawa; Rin-ichiro Taniguchi; Daisaku Arita; Hideaki Matsuo; Shan Lu; Seiji Igi

A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects possible words by using the detected global features, then narrows the choices down to one by using the detected local features. In this paper, we describe an adequate local-feature recognizer for a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters obtained by a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance are classified into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in two-handed and hand-to-hand contact cases.
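The two-stage scheme reads naturally as filter-then-rank. The sketch below assumes a hypothetical lexicon of (word, global template, symbol template) triples and crude distance functions; all names, tolerances, and metrics are illustrative, not the paper's.

```python
def matches_global(feats, template, tol=1.0):
    """Crude global-feature match: Euclidean distance below a tolerance."""
    return sum((a - b) ** 2 for a, b in zip(feats, template)) ** 0.5 < tol

def symbol_distance(seq_a, seq_b):
    """Crude symbol-sequence distance: mismatches plus length difference."""
    return sum(a != b for a, b in zip(seq_a, seq_b)) + abs(len(seq_a) - len(seq_b))

def recognize(global_feats, local_symbols, lexicon):
    """lexicon: list of (word, global_template, symbol_template) triples."""
    # Stage 1: global features (movement, location) prune the vocabulary.
    candidates = [e for e in lexicon if matches_global(global_feats, e[1])]
    if not candidates:
        return None
    # Stage 2: local-feature symbols pick the single best remaining word.
    return min(candidates, key=lambda e: symbol_distance(local_symbols, e[2]))[0]
```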


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

Towards a Dialogue System Based on Recognition and Synthesis of Japanese Sign Language

Shan Lu; Seiji Igi; Hideaki Matsuo; Yuji Nagashima

This paper describes a dialogue system based on the recognition and synthesis of Japanese sign language. The purpose of this system is to support conversation between people with hearing impairments and hearing people. The system consists of five main modules: sign-language recognition and synthesis, voice recognition and synthesis, and dialogue control. The sign-language recognition module uses a stereo camera and a pair of colored gloves to track the movements of the signer, and sign-language synthesis is achieved by regenerating the motion data obtained by an optical motion capture system. An experiment was done to investigate changes in the gaze-line of hearing-impaired people when they read sign language, and the results are reported.
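Since the abstract gives only the five-module decomposition, the sketch below shows one plausible way to wire the modules together; the callable signatures and the two relay paths are assumptions about how the dialogue-control module might route between them.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DialogueSystem:
    """Dialogue control wired around the four recognition/synthesis modules."""
    recognize_sign: Callable[[Any], str]    # stereo camera + colored gloves
    synthesize_sign: Callable[[str], Any]   # replays optical motion-capture data
    recognize_voice: Callable[[Any], str]
    synthesize_voice: Callable[[str], Any]

    def relay_to_hearing(self, video: Any) -> Any:
        """Signed input from the deaf user becomes spoken output."""
        return self.synthesize_voice(self.recognize_sign(video))

    def relay_to_deaf(self, audio: Any) -> Any:
        """Spoken input from the hearing user becomes signed output."""
        return self.synthesize_sign(self.recognize_voice(audio))
```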


GW '99 Proceedings of the International Gesture Workshop on Gesture-Based Communication in Human-Computer Interaction | 1999

Active Character: Dynamic Reaction to the User

Shan Lu; Seiji Igi

This paper describes a computer-character system intended to create a natural interaction between the computer and the user. Using predefined control rules, it generates the movements of the computer character's head, body, hands, and gaze-line according to changes in the user's position and gaze-line. The system acquires information about the user's position, facial region, and gaze-line by using a vision subsystem and an eye-tracker unit. The vision subsystem detects the presence of a person, estimates the three-dimensional position of the person by using information acquired by a stationary camera, and determines the locations of the face and hands. The reactive motions of the computer character are generated according to a set of predefined if-then rules. Furthermore, a motion-description file is designed to define both simple and complex gestures.
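A hedged sketch of the if-then reaction loop: each rule pairs a predicate over the sensed user state with a named motion, and the first matching rule fires. The state keys, rule contents, and motion names are invented for illustration; the paper's motion-description file format is not shown.

```python
# Ordered rule table: (condition over sensed state, motion to play).
RULES = [
    (lambda s: s["user_present"] and s["gaze_on_character"],
     "face_user_and_nod"),
    (lambda s: s["user_present"] and not s["gaze_on_character"],
     "turn_head_toward_user"),
    (lambda s: not s["user_present"],
     "idle_look_around"),
]

def react(state):
    """Return the motion name of the first rule whose condition holds."""
    for condition, motion in RULES:
        if condition(state):
            return motion
    return "idle"

# Example: the vision subsystem reports the user present but looking away.
print(react({"user_present": True, "gaze_on_character": False}))
# -> "turn_head_toward_user"
```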


Asian Conference on Computer Vision | 2000

Appearance-based Recognition of Hand Shapes for Sign Language in Low Resolution Image

Kazuyuki Imagawa; Rin-ichiro Taniguchi; Daisaku Arita; Hideaki Matsuo; Shan Lu; Seiji Igi

Collaboration


Dive into Shan Lu's collaboration.

Top Co-Authors

Seiji Igi
National Institute of Information and Communications Technology

Yuji Nagashima
National Institute of Information and Communications Technology