Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Seiji Igi is active.

Publication


Featured research published by Seiji Igi.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Color-based hands tracking system for sign language recognition

Kazuyuki Imagawa; Shan Lu; Seiji Igi

The paper describes a real-time system that tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs, and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking on actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they overlap the face.
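
As a rough illustration of this pipeline, the sketch below combines skin-color blob extraction with a constant-velocity Kalman filter using standard OpenCV primitives. It is a minimal sketch, not the authors' implementation; the HSV skin range, noise covariances, and blob-size threshold are assumptions.

```python
import cv2
import numpy as np

# Rough HSV skin range -- an assumption, not the paper's calibrated skin model.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def skin_blobs(frame_bgr, min_area=500):
    """Extract centroids of skin-colored blobs (face and hand candidates)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [centroids[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def make_hand_filter():
    """Constant-velocity Kalman filter over state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

# Per frame: kf.predict(), then kf.correct() with the blob centroid nearest the
# prediction. The prediction keeps a hand track alive while it overlaps the face.
```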


International Conference on Pattern Recognition | 2000

Recognition of local features for camera-based sign language recognition system

Kazuyuki Imagawa; Hideaki Matsuo; Rin-ichiro Taniguchi; Daisaku Arita; Shan Lu; Seiji Igi

A sign language recognition system must use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present a local-feature recognizer suited to such a system. Our basic approach is to represent the hand images extracted from sign-language images as symbols, each corresponding to a cluster produced by a clustering technique. The clusters are created from a training set of extracted hand images so that hands of similar appearance are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign-language word even in two-handed and hand-to-hand contact cases.
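
One way to realize the symbol-assignment step is principal component analysis followed by k-means, as in the sketch below. This is an illustrative stand-in using scikit-learn, with toy data and arbitrary dimensions, not the authors' clustering procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Toy stand-in for the training set: N hand images, flattened to equal length.
rng = np.random.default_rng(0)
hands = rng.random((200, 32 * 32))

# Project the images onto an eigenspace, then cluster; each cluster id acts as
# the "symbol" that a hand appearance is represented by.
pca = PCA(n_components=20).fit(hands)
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pca.transform(hands))

def hand_symbol(img_vec):
    """Map a new (flattened) hand image to its appearance symbol."""
    return int(kmeans.predict(pca.transform(img_vec.reshape(1, -1)))[0])

print(hand_symbol(rng.random(32 * 32)))
```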


Proceedings of the IEEE Workshop on Knowledge Media Networking | 2002

Hand detection and tracking using pixel value distribution model for multiple-camera-based gesture interactions

Akira Utsumi; Nobuji Tetsutani; Seiji Igi

We present a vision-based hand tracking system for gesture-based man-machine interaction, together with a statistical hand detection method. Our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In the system, users can manipulate a virtual scene using predefined gesture commands. We propose a statistical method that detects hand regions in images using the geometrical structures involved in the appearance of the target objects. Most conventional gesture recognition systems rely on simpler hand-detection methods, such as background subtraction under assumed static observation conditions; such methods are not robust against camera motion, illumination changes, and so on. Our method can describe and recognize the appearance of hands based on geometrical structures. Experimental results show the effectiveness of our method.
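
A minimal statistical hand detector in this spirit is a single Gaussian fitted over skin pixel colors, thresholding each pixel's Mahalanobis distance. The sketch below is a simplification under assumed parameters and omits the paper's geometrical-structure modeling.

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit a Gaussian over (N, 3) skin color samples."""
    mean = skin_pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(skin_pixels, rowvar=False))
    return mean, inv_cov

def skin_mask(image, mean, inv_cov, thresh=9.0):
    """Mark pixels whose Mahalanobis distance to the skin mean is small.
    The threshold is an assumption, not a value from the paper."""
    diff = image.reshape(-1, 3).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < thresh).reshape(image.shape[:2])
```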


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

The Recognition Algorithm with Non-contact for Japanese Sign Language Using Morphological Analysis

Hideaki Matsuo; Seiji Igi; Shan Lu; Yuji Nagashima; Yuji Takata; Terutaka Teshima

This paper documents a method for recognizing Japanese Sign Language (JSL) from projected images. The goal of the movement recognition is to foster communication between hearing-impaired people and people capable of normal speech. We use a stereo camera to record three-dimensional movements, an image-processing board to track the movements, and a personal computer to recognize the JSL patterns. The system works by formalizing the space around the signer according to the characteristics of the human body, determining components such as location and movement, and then recognizing sign-language patterns.
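
The space-formalization step can be pictured as classifying hand positions into coarse body-relative regions, as in the sketch below. The region names and thresholds (in head-height units) are hypothetical illustrations, not the paper's actual formalization.

```python
def location_component(hand_xy, face_xy, head_h):
    """Classify a hand position into a coarse body-relative area.
    Regions and thresholds are hypothetical, not the paper's formalization.
    Coordinates are image coordinates (y grows downward)."""
    dx = hand_xy[0] - face_xy[0]
    dy = hand_xy[1] - face_xy[1]
    if abs(dx) < 0.5 * head_h and abs(dy) < 0.5 * head_h:
        return "face"
    if dy > 1.5 * head_h:
        return "waist"
    if dy > 0.5 * head_h:
        return "chest"
    return "side-of-head"

print(location_component((120, 260), (100, 100), head_h=80))  # 'waist'
```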


Intelligent Robots and Systems | 2003

Semi-autonomous outdoor mobility support system for elderly and disabled people

Kentaro Kayama; Ikuko Eguchi Yairi; Seiji Igi

We have been developing the Robotic Communication Terminals (RCTs), which are integrated into a mobility support system to assist elderly or disabled people who suffer from impaired mobility. The RCT system consists of three types of terminals and one server: an environment-embedded terminal, a user-carried mobile terminal, a user-carrying mobile terminal, and a barrier-free map server. The RCT is an integrated system that can cope with various mobility problems and provide suitable support to a wide variety of users. This paper provides an in-depth description of the user-carrying mobile terminal, a kind of intelligent wheeled vehicle. It can recognize the surrounding 3D environment through infrared sensors, sonar sensors, and a stereo vision system with three cameras, and can avoid hazards semi-autonomously. It can also provide navigation by communicating with a geographic information system (GIS) server, and can detect vehicles appearing from blind spots by communicating with environment-embedded terminals in the real world.


Conference on Computers and Accessibility | 2000

Fast web by using updated content extraction and a bookmark facility

Tsuyoshi Ebina; Seiji Igi; Teruhisa Miyake

This paper describes improved methods of web access for the visually impaired. A few web access systems for the visually impaired have already been developed and are widely used. The improvements described in this paper are in two areas. The first is a fast means of jumping from the current sentence position to a desired sentence position within a web page. The second is a facility for finding sentences that have been updated since a previous viewing. User testing was carried out, and the two facilities were found to reduce not only web page access time but also the user's mental workload.
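
The update-search facility can be approximated by diffing the stored sentence list of a page against the freshly fetched one. The sketch below uses Python's difflib and is only a stand-in for the paper's method.

```python
import difflib

def updated_sentences(old_sentences, new_sentences):
    """Return sentences in the new page that were not in the stored snapshot."""
    sm = difflib.SequenceMatcher(a=old_sentences, b=new_sentences)
    added = []
    for op, _i1, _i2, j1, j2 in sm.get_opcodes():
        if op in ("insert", "replace"):
            added.extend(new_sentences[j1:j2])
    return added

old = ["Welcome.", "News from May 1.", "Contact us."]
new = ["Welcome.", "News from May 8.", "Contact us."]
print(updated_sentences(old, new))  # ['News from May 8.']
```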


Systems, Man and Cybernetics | 2003

Construction of elevation map for user-carried outdoor mobile robot using stereo vision

Kentaro Kayama; Ikuko Eguchi Yairi; Seiji Igi

This paper describes a method for constructing an environment description using a stereo camera. The method is intended to ensure the safety of a user-carried semi-autonomous mobile robot in outdoor environments that contain steps and slopes. It builds an environment map as a three-dimensional occupancy grid whose cells are cuboids with fine resolution in the height direction, and uses three-dimensional flow for self-localization. An elevation map is constructed from the occupancy grid, and dangerous areas are computed from the elevation map, which makes it possible to distinguish passable slopes from impassable steps. Moreover, the system has been mounted on an outdoor semi-autonomous electric scooter and evaluated in the real world.
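
The elevation-map and danger-area steps can be sketched as follows: collapse the occupancy grid to a per-column height, then flag cells whose height discontinuity (step) or gradient (slope) exceeds a limit. The thresholds and grid layout below are assumptions, not the paper's parameters.

```python
import numpy as np

def elevation_map(occupancy, cell_h):
    """Highest occupied cell per (x, y) column of a boolean 3-D grid, in metres."""
    top = occupancy.shape[2] - 1 - np.argmax(occupancy[:, :, ::-1], axis=2)
    return np.where(occupancy.any(axis=2), top, 0) * cell_h

def danger_mask(elev, cell_w, max_step=0.05, max_slope=0.15):
    """Flag impassable cells: abrupt height jumps (steps) or steep gradients
    (slopes). A gentle ramp stays below both limits and remains passable."""
    step_x = np.abs(np.diff(elev, axis=1, prepend=elev[:, :1]))
    step_y = np.abs(np.diff(elev, axis=0, prepend=elev[:1]))
    gy, gx = np.gradient(elev, cell_w)
    return (np.maximum(step_x, step_y) > max_step) | (np.hypot(gx, gy) > max_slope)
```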


Asia Pacific Computer and Human Interaction | 1998

Graph access system for the visually impaired

Tsuyoshi Ebina; Seiji Igi; Teruhisa Miyake; Hiroko Takahashi

Swell paper has been widely used to convey graphical representations to the visually impaired. However, most tactile graphs on such paper are created by people with normal vision, who in most cases do not fully understand how to translate the visual aspects of a graph into a representation that the visually impaired can grasp. The paper describes a graph presentation system that translates the original data sequences of electronic documents into a tactile graphical representation. Users can touch a pin array to get a rough image of the tactile graph and can iteratively magnify the graph to confirm its details. They can also confirm each plotted value through auditory feedback. Experiments show that users can perceive not only simple graphs but also complex graphs such as a stock chart; subjects were able to perceive not only maximum and minimum values but also statistical trends in the graphs.
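
One concrete way to drive such a display is to downsample the data sequence onto a binary pin matrix, one raised pin per column. The sketch below assumes a hypothetical 16 x 32 pin array; magnification then amounts to re-running it on a sub-range of the data.

```python
import numpy as np

def to_pin_array(values, rows=16, cols=32):
    """Render a data sequence as a rows x cols binary pin matrix
    (row 0 = top). The array size is a hypothetical example."""
    values = np.asarray(values, dtype=float)
    xs = np.linspace(0, len(values) - 1, cols)
    ys = np.interp(xs, np.arange(len(values)), values)   # one sample per column
    lo, hi = ys.min(), ys.max()
    level = np.zeros(cols, dtype=int) if hi == lo else \
        np.round((ys - lo) / (hi - lo) * (rows - 1)).astype(int)
    pins = np.zeros((rows, cols), dtype=bool)
    pins[rows - 1 - level, np.arange(cols)] = True
    return pins

# Magnify: call to_pin_array(values[i:j]) on the sub-range of interest.
```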


International Conference on Pattern Recognition | 2004

View-based detection of 3-D interaction between hands and real objects

Akira Utsumi; Nobuji Tetsutani; Seiji Igi

We propose a vision-based method to detect interactions between human hand(s) and real objects. Since humans perform various kinds of tasks with their hands, detection of hand-object interactions is useful for building intelligent systems that understand and support human activities. We use a statistical color model to detect hand regions in input images. Target objects are dynamically modeled based on their appearances by giving consideration to occlusions by the hand. The appearance model tracks the translation and relative rotation of target objects. This system is useful for recording, indexing and instructing object manipulations and/or hand-object interactions. Experimental results show the effectiveness of our method.
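
A crude stand-in for tracking an object's translation and in-plane rotation is to match rotated copies of an appearance template, as sketched below. The real appearance model also handles occlusion by the hand, which this sketch ignores; the angle range and step are arbitrary assumptions.

```python
import cv2

def track_rotated(frame_gray, template_gray, angles=range(-30, 31, 5)):
    """Find the best (score, top-left location, angle) over a small set of
    in-plane rotations of the template."""
    h, w = template_gray.shape
    best = (-1.0, (0, 0), 0)
    for a in angles:
        M = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
        rot = cv2.warpAffine(template_gray, M, (w, h))
        res = cv2.matchTemplate(frame_gray, rot, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, a)
    return best
```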


Asian Conference on Computer Vision | 1998

Real-Time Tracking of Human Hands from a Sign-Language Image Sequence

Kazuyuki Imagawa; Shan Lu; Seiji Igi

We have developed a real-time system that tracks the hands of a person performing sign language. The system can track hands without markers or colored gloves, even if the hands overlap the face. First, it extracts the hand and face regions from the sign-language image sequence using an improved histogram backprojection. Next, it tracks the hands from blobs computed from both the extracted image and the temporal difference image. The system has been tested on both primitive motions and actual sign-language motions by native signers. The experimental results indicate that it can track the hands even while they overlap the face.
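
Histogram backprojection, the core extraction step here, is available directly in OpenCV. The sketch below shows the standard variant (not the paper's improved one), with assumed bin counts and threshold.

```python
import cv2

def backprojection_mask(frame_bgr, skin_samples_bgr, thresh=32):
    """Score each pixel by how common its hue/saturation is among skin
    samples, then threshold into a face/hand candidate mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.cvtColor(skin_samples_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([skin], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(prob, thresh, 255, cv2.THRESH_BINARY)
    return mask
```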

Collaboration


Dive into Seiji Igi's collaborations.

Top Co-Authors

Kentaro Kayama
National Institute of Information and Communications Technology

Yuji Nagashima
National Institute of Information and Communications Technology