
Publication


Featured research published by Toshiaki Ejima.


Journal of Visual Languages and Computing | 1999

Visual Recognition of Static/Dynamic Gesture

Byung-Woo Min; Ho-Sub Yoon; Jung Soh; Takeshi Ohashi; Toshiaki Ejima

This paper presents the visual recognition of static gestures (SGs) and dynamic gestures (DGs). Gesture is one of the most natural interface tools for human-computer interaction (HCI) as well as for communication between human beings. To implement a human-like interface, gestures should be recognized using only visual information, as in the human visual system, and SGs and DGs should be processed concurrently. This paper aims at recognizing hand gestures from visual images on a 2D image plane, without any external devices. Gestures are spotted by a task-specific state transition based on natural human articulation. SGs are recognized using image moments of the hand posture, while DGs are recognized by analyzing their moving trajectories with hidden Markov models (HMMs). We have applied our gesture recognition approach to gesture-driven editing systems operating in real time.
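The static-gesture step can be sketched with plain image moments: translation-invariant central moments summarize the shape of a binary hand mask. The toy mask, the moment orders, and the elongation check below are illustrative assumptions, not the authors' exact feature set.

```python
# Sketch of the SG step: describe a binary hand mask by its image moments.
# Mask, moment orders, and the elongation test are illustrative only.

def raw_moment(mask, p, q):
    """Raw image moment M_pq of a binary mask (list of rows of 0/1)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(mask)
               for x, v in enumerate(row))

def central_moments(mask):
    """Translation-invariant central moments mu_20, mu_02, mu_11."""
    m00 = raw_moment(mask, 0, 0)
    cx = raw_moment(mask, 1, 0) / m00
    cy = raw_moment(mask, 0, 1) / m00
    mu = {}
    for p, q in [(2, 0), (0, 2), (1, 1)]:
        mu[(p, q)] = sum(((x - cx) ** p) * ((y - cy) ** q) * v
                         for y, row in enumerate(mask)
                         for x, v in enumerate(row))
    return mu

# A tiny upright "bar" posture: elongated vertically, so mu_02 > mu_20.
bar = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
mu = central_moments(bar)
assert mu[(0, 2)] > mu[(2, 0)]
```

A real system would compare several such moment features against stored posture templates; here only the vertical-elongation property is checked.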


Machine Vision Applications | 2002

Tunnel crack detection and classification system based on image processing

Zhiwei Liu; Shahrel A Suandi; Takeshi Ohashi; Toshiaki Ejima

In this paper, an efficient tunnel crack detection and recognition method is proposed. It combines the analysis of crack intensity features with the application of the Support Vector Machine algorithm. First, the original image is transformed into a binary image; based on a two-threshold technique, the object edge image is obtained. Then, dividing the image into local sub-images, we categorize each sub-image into three pattern types: crack, non-crack, and intermediate (which shares properties of both). A trainable classifier is built to classify these patterns. During this process, balanced sub-images, whose geometric center coincides with their center of gravity, are used as training samples for the classifier. This leads to an effective classification system.
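The two-threshold step can be sketched as a three-way pixel labeling. The assumption that dark pixels are crack candidates and bright pixels background, along with the threshold values, is illustrative; the paper's exact rule is not reproduced here.

```python
# Illustrative sketch of a two-threshold labeling: below t_low -> crack
# candidate, above t_high -> background, in between -> intermediate.
# Labels and thresholds are assumptions, not the authors' exact rule.

def two_threshold(gray, t_low, t_high):
    """Label each pixel of a grayscale image (rows of 0-255 ints)."""
    labels = []
    for row in gray:
        out = []
        for v in row:
            if v < t_low:
                out.append("crack")
            elif v > t_high:
                out.append("background")
            else:
                out.append("intermediate")
        labels.append(out)
    return labels

# Toy patch: bright tunnel lining with a dark diagonal crack.
img = [[200, 190, 40],
       [210, 100, 35],
       [220, 205, 50]]
labels = two_threshold(img, 60, 150)
assert labels[0][2] == "crack" and labels[1][1] == "intermediate"
```

In the paper the intermediate pattern type is what the trained classifier must resolve; this sketch only produces the three-way split.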


IEEE International Conference on Cognitive Informatics | 2006

Real-Time Hand Gesture Recognition Using Pseudo 3-D Hidden Markov Model

Nguyen Dang Binh; Toshiaki Ejima

In this work we present a new approach to hand gesture recognition based on a pseudo three-dimensional hidden Markov model (P3DHMM), a technique that integrates spatially as well as temporally derived features in an elegant and efficient way. Additionally, hand gestures are tracked robustly and flexibly using an appearance-based condensation tracker. Together these allow the recognition of dynamic as well as static gestures. Furthermore, several improvements to the overall performance of the approach are proposed: replacing the Baum-Welch algorithm with a clustering algorithm, adding a clustering performance measure to the clustering algorithm, and applying an adaptive gesture threshold that rejects non-gesture patterns and helps qualify an input pattern as a gesture. The proposed improvements, together with the P3DHMM, were used to develop a complete Japanese Kana hand alphabet recognition system consisting of 42 static postures and 34 hand motions. We obtained a recognition rate of 99.1% in the gesture recognition experiments, compared with P2DHMMs.


Engineering Applications of Artificial Intelligence | 1999

Gesture-based editing system for graphic primitives and alphanumeric characters

Byung-Woo Min; Ho-Sub Yoon; Jung Soh; Takeshi Ohashi; Toshiaki Ejima

This paper presents a system that edits graphic primitives and alphanumeric characters using hand gestures in natural environments. Gesture is one of the most natural means of raising human-computer interaction (HCI) techniques to the level of human communication. This research aims to recognize one-stroke pictorial gestures from visual images, and to develop a graphic/text editing system running in real time. The task is performed in three steps: moving-hand tracking and trajectory generation, key-gesture segmentation, and gesture recognition by analyzing dynamic features. The gesture vocabulary consists of 48 gestures of three types: (1) six editing commands; (2) six graphic primitives; and (3) alphanumeric characters (26 alphabetic and 10 numeric). A meaningful gesture part is segmented by a spotter using phase-based velocity constraints. Dynamic features are obtained from spatio-temporal trajectories and quantized by the K-means algorithm. The quantized vectors are trained and tested using hidden Markov models (HMMs). Experimental results show a correct recognition rate of about 88.5%, as well as practical potential for applying these techniques to areas such as the remote control of robots or electronic household appliances, and object manipulation in VR systems.
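The K-means quantization step described above can be sketched as follows: trajectory-derived feature vectors are clustered, and each feature is replaced by the index of its nearest cluster center, yielding the discrete symbol sequence an HMM consumes. The toy (dx, dy) features and the choice of K are illustrative assumptions.

```python
# Sketch of vector quantization for HMM input: cluster trajectory features
# with K-means, then map each feature to its nearest-center index.
# The feature values and K = 2 are toy assumptions.
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means on 2-D points; returns the cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        for j, c in enumerate(clusters):
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

def quantize(traj, centers):
    """Map each (dx, dy) feature to the index of its nearest center."""
    return [min(range(len(centers)),
                key=lambda j: (p[0] - centers[j][0]) ** 2
                            + (p[1] - centers[j][1]) ** 2)
            for p in traj]

# Two well-separated motion directions: rightward and upward strokes.
feats = [(1.0, 0.0), (0.9, 0.1), (1.1, -0.1),
         (0.0, 1.0), (0.1, 0.9), (-0.1, 1.1)]
centers = kmeans(feats, 2)
symbols = quantize(feats, centers)
assert symbols[0] == symbols[1] == symbols[2]
assert symbols[0] != symbols[3]
```

Each gesture thus becomes a short sequence of cluster indices, which is exactly the discrete-observation form classical HMMs expect.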


Computer Analysis of Images and Patterns | 2005

Magnitude and phase spectra of foot motion for gait recognition

Agus Santoso Lie; Shuichi Enokida; Tomohito Wada; Toshiaki Ejima

The magnitude and phase spectra of the horizontal and vertical movement of the ankles in a normal walk are effective and efficient signatures for gait recognition. An approach that uses these spectra as phase-weighted magnitude spectra is also widely known. In this paper, we propose an integration of magnitude and phase spectra for gait recognition using an AdaBoost classifier. At each round, a weak classifier evaluates the magnitude and phase spectra of a motion signal as dependent sub-features; the classification results of the sub-features are then normalized and summed for the final hypothesis output. Experimental results in same-day and cross-month tests with nine subjects show that using both magnitude and phase spectra improves recognition.
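The spectral features above can be sketched with a plain DFT: a one-dimensional ankle-motion signal is transformed, and the per-bin magnitudes and phases become the gait signature. The sinusoidal test signal and spectrum length are illustrative; the AdaBoost combination itself is not reproduced here.

```python
# Sketch of the feature extraction: magnitude and phase spectra of a 1-D
# ankle-motion signal via a direct DFT. Signal and length are toy values.
import cmath, math

def dft_spectra(signal):
    """Return (magnitudes, phases) of the DFT of a real-valued signal."""
    n = len(signal)
    mags, phases = [], []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
        phases.append(cmath.phase(s))
    return mags, phases

# A pure sinusoid at one cycle per gait period: energy concentrates in bin 1.
n = 16
signal = [math.sin(2 * math.pi * t / n) for t in range(n)]
mags, phases = dft_spectra(signal)
assert abs(mags[1] - n / 2) < 1e-6
assert mags[2] < 1e-6
```

A phase-weighted magnitude feature, as mentioned in the abstract, would combine the two lists per bin rather than feeding them to the classifier separately.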


Pacific Conference on Computer Graphics and Applications | 1999

Motion generator approach to translating human motion from video to animation

Tsukasa Noma; I. Oishi; Hiroshi Futsuhara; Hiromi Baba; Takeshi Ohashi; Toshiaki Ejima

This paper proposes a motion generator approach to translating human motion from video image sequences to computer animations in real time. In this approach, a motion generator infers the current human motion and posture from data obtained by processing the source video, and then generates and sends a set of joint angles to the target human body model. Compared with the existing motion capture approach, our approach is more robust and tolerant of broader environmental and postural conditions. Experiments on a prototype system show that an animated virtual human can walk, sit, and lie down as the real human performs, without special illumination control.


Lecture Notes in Computer Science | 2005

Gait recognition using spectral features of foot motion

Agus Santoso Lie; Ryo Shimomoto; Shohei Sakaguchi; Toshiyuki Ishimura; Shuichi Enokida; Tomohito Wada; Toshiaki Ejima

Gait as a motion-based biometric has the merit of being non-contact and unobtrusive. In this paper, we propose a gait recognition approach using spectral features of the horizontal and vertical movement of the ankles in a normal walk. Gait recognition experiments using these spectral features, in terms of magnitude, phase, and phase-weighted magnitude, show that both magnitude and phase spectra are effective gait signatures, though magnitude spectra are slightly superior. We also propose the use of geometric-mean-based spectral features for gait recognition. Experiments with nine subjects show encouraging results in the same-day test, while the effect of the time covariate is confirmed in the cross-month test.


Electronic Imaging | 2002

HeadFinder: a real-time robust head detection and tracking system

Naruatsu Baba; Hideaki Matsuo; Toshiaki Ejima

In this paper, a real-time head detection and tracking system called HeadFinder is proposed. HeadFinder is a robust system that detects the heads of people appearing in video images and tracks them. For effective detection, we rely on the motion and shape of the head, both of which are features robust to noise in video images. Since a moving circle in everyday environments is almost always a head, we exploit this to detect heads. First, we detect the outlines of moving people in difference images between two consecutive video frames. Next, for circle detection, we use the Hough transform, which is known as a robust shape detection method. After the position and size (radius) of the detected circle are registered as a head model, HeadFinder switches to the tracking phase. To make tracking more efficient, we predict the region where the head will move; the size of the predicted region is proportional to the reliability of the head model, that is, the number of successful tracking steps so far. The performance of HeadFinder is examined in indoor and outdoor environments. Through experiments, we confirmed that HeadFinder is robust against environmental change and runs in real time on simple hardware.
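The detection front end can be sketched in two steps: frame differencing yields a motion mask, and a Hough-style vote over candidate centers locates the circle. The toy frames, fixed radius, and coarse angle sampling are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the HeadFinder front end: frame differencing + a minimal
# Hough-style circle-center vote. Frames and radius are toy assumptions.
import math

def frame_diff(prev, curr, thresh):
    """Binary motion mask: 1 where the pixel change exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def hough_circle_center(mask, radius):
    """Vote for the circle center that best explains the mask's points."""
    votes = {}
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                for deg in range(0, 360, 10):
                    a = math.radians(deg)
                    c = (round(x - radius * math.cos(a)),
                         round(y - radius * math.sin(a)))
                    votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

# Toy frames: a ring of radius 3 centered at (5, 5) appears in frame two.
prev = [[0] * 11 for _ in range(11)]
curr = [[0] * 11 for _ in range(11)]
for deg in range(0, 360, 5):
    a = math.radians(deg)
    curr[round(5 + 3 * math.sin(a))][round(5 + 3 * math.cos(a))] = 255
mask = frame_diff(prev, curr, 128)
cx, cy = hough_circle_center(mask, 3)
assert abs(cx - 5) <= 1 and abs(cy - 5) <= 1
```

A production system would search over a range of radii and keep the accumulator as an array rather than a dictionary; the voting principle is the same.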


Systems, Man and Cybernetics | 1999

Stochastic field model for autonomous robot learning

Shuichi Enokida; Takeshi Ohashi; Takaichi Yoshida; Toshiaki Ejima

Through reinforcement learning, an autonomous robot creates an optimal policy that maps the state space to the action space. The mapping is obtained by trial and error through interaction with a given environment, and is represented as an action-value function. The environment provides information in the form of scalar feedback known as a reinforcement signal. As a result of reinforcement learning, the best action in each state acquires a high action-value, and the optimal policy is equivalent to choosing the action with the highest action-value in each state. Typically, even if an autonomous robot has continuous sensor values, a summation over discrete values is used as the action-value function to reduce learning time. However, reinforcement learning algorithms such as Q-learning then suffer from errors due to state-space sampling. To overcome this, we propose EQ-learning (extended Q-learning) based on a stochastic field model (SFM). EQ-learning is designed to accommodate continuous state space directly and to improve generalization capability. In EQ-learning, the action-value function is represented as a summation of weighted basis functions, and the robot adjusts the weights of the basis functions during the learning stage. The other parameters (center coordinates, variance, and so on) are adjusted at the unification stage, where two similar functions are unified into a simpler one.
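The function-approximation idea behind this can be sketched as follows: Q(s, a) is a weighted sum of Gaussian basis functions over a continuous state, and only the weights are adjusted during learning. The 1-D state, the fixed centers and width, the learning rate, and the supervised-style target are illustrative assumptions; the paper's SFM and unification stage are not reproduced.

```python
# Sketch of a continuous-state action-value function as a weighted sum of
# Gaussian basis functions, with weight-only gradient updates.
# Centers, width, learning rate, and the 1-D state are toy assumptions.
import math

class RBFQ:
    def __init__(self, centers, width, n_actions):
        self.centers = centers
        self.width = width
        # one weight per (basis function, action) pair, initialized to zero
        self.w = [[0.0] * n_actions for _ in centers]

    def phi(self, s):
        """Gaussian basis-function activations for state s."""
        return [math.exp(-((s - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def q(self, s, a):
        """Q(s, a) = sum of weighted basis activations."""
        return sum(p * wr[a] for p, wr in zip(self.phi(s), self.w))

    def update(self, s, a, target, lr=0.5):
        """Gradient step moving Q(s, a) toward a TD-style target."""
        err = target - self.q(s, a)
        for i, p in enumerate(self.phi(s)):
            self.w[i][a] += lr * err * p

q = RBFQ(centers=[0.0, 0.5, 1.0], width=0.3, n_actions=2)
for _ in range(50):
    q.update(0.2, 0, 1.0)   # action 0 is rewarded near s = 0.2
    q.update(0.2, 1, 0.0)
assert q.q(0.2, 0) > q.q(0.2, 1)
```

In EQ-learning proper, the unification stage would additionally merge similar basis functions and adapt their centers and variances; here those stay fixed.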


International Journal on Document Analysis and Recognition | 1998

Filtering and segmentation of digitized land use map images

Rafael Santos; Takeshi Ohashi; Takaichi Yoshida; Toshiaki Ejima

One important step in the analysis of digitized land use map images is the separation of the information into layers. In this paper we present a technique called the Selective Attention Filter, which extracts or enhances features of the image corresponding to conceptual layers of the map, using information obtained from clustering local regions of the map. Different parameters can be used to extract or enhance different information in the image. Details of the algorithm, examples of applying the filter, and results are presented.
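The layer-separation idea can be sketched by clustering pixel values and extracting one cluster as a layer mask. Clustering plain gray values is a deliberate simplification of the paper's local-region clustering; the toy image and two-cluster setup are assumptions.

```python
# Illustrative sketch of layer separation: 1-D k-means on gray values,
# then a binary mask of the pixels belonging to one cluster.
# The image, initial centers, and k = 2 are toy assumptions.

def kmeans_1d(values, centers, iters=10):
    """Plain 1-D k-means; returns the refined cluster centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

def extract_layer(image, centers, layer):
    """Binary mask of pixels nearest to the chosen cluster center."""
    return [[1 if min(range(len(centers)),
                      key=lambda j: abs(v - centers[j])) == layer else 0
             for v in row]
            for row in image]

# Toy map: dark line work (about 30) over a light background (about 220).
image = [[220, 30, 220],
         [30, 30, 220],
         [220, 220, 30]]
flat = [v for row in image for v in row]
centers = kmeans_1d(flat, [0.0, 255.0])
lines = extract_layer(image, centers, 0)
assert lines[0][1] == 1 and lines[0][0] == 0
```

The actual filter clusters local regions rather than single pixels, which lets it separate layers that overlap in gray value but differ in local texture.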

Collaboration


Dive into Toshiaki Ejima's collaboration.

Top Co-Authors

Shuichi Enokida
Kyushu Institute of Technology

Takeshi Ohashi
Kyushu Institute of Technology

Toyohiro Hayashi
Kyushu Institute of Technology

Takaichi Yoshida
Kyushu Institute of Technology

Nguyen Dang Binh
Kyushu Institute of Technology

Agus Santoso Lie
Kyushu Institute of Technology

Tsukasa Noma
Kyushu Institute of Technology

Ho-Sub Yoon
Electronics and Telecommunications Research Institute

Jung Soh
Electronics and Telecommunications Research Institute