Publications


Featured research published by Nobuji Tetsutani.


Image and Vision Computing | 2004

Detection and tracking of eyes for gaze-camera control

Shinjiro Kawato; Nobuji Tetsutani

A head-off gaze-camera needs eye location information for head-free usage. For this purpose, we propose new algorithms to extract and track the positions of eyes in a real-time video stream. For extraction of eye positions, we detect blinks based on the differences between successive images. However, eyelid regions are fairly small. To distinguish them from dominant head movement, we elaborate a head movement cancellation process. For eye-position tracking, we use a template of ‘Between-the-Eyes,’ which is updated frame-by-frame, instead of the eyes themselves. Eyes are searched based on the current position of ‘Between-the-Eyes’ and their geometrical relations to the position in the previous frame. The ‘Between-the-Eyes’ pattern is easier to locate accurately than eye patterns. We implemented the system on a PC with a Pentium III 866-MHz CPU. The system runs at 30 frames/s and robustly detects and tracks the eyes.
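The blink-detection step described above rests on differencing successive frames and then locating the small changed regions. A minimal pure-Python sketch of that idea follows; the threshold and the single-bounding-box search are hypothetical stand-ins, and the paper's head-movement cancellation step is not modeled:

```python
def frame_difference(prev, curr, threshold=30):
    """Mark pixels whose intensity changed by more than `threshold`
    between two successive grayscale frames (2-D lists of ints)."""
    h, w = len(curr), len(curr[0])
    return [[abs(curr[y][x] - prev[y][x]) > threshold for x in range(w)]
            for y in range(h)]

def changed_regions(mask):
    """Bounding box (xmin, ymin, xmax, ymax) of changed pixels, or None
    if nothing moved -- a stand-in for the search for small eyelid
    regions once dominant head movement has been cancelled."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```

A blink then appears as a small changed region that persists only for a frame or two, which is what distinguishes it from the broad difference pattern a moving head produces.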


IEICE Transactions on Information and Systems | 2005

Scale-Adaptive Face Detection and Tracking in Real Time with SSR Filters and Support Vector Machine (presented at ACCV 2004)

Shinjiro Kawato; Nobuji Tetsutani; Kenichi Hosaka

In this paper, we propose a method for detecting and tracking faces in video sequences in real time. It can be applied to a wide range of face scales. Our basic strategy for detection is fast extraction of face candidates with a Six-Segmented Rectangular (SSR) filter and face verification by a support vector machine. A motion cue is used in a simple way to avoid picking up false candidates in the background. In face tracking, the patterns of between-the-eyes are tracked while updating the matching template. To cope with various scales of faces, we use a series of approximately 1/√2 scale-down images, and an appropriate scale is selected according to the distance between the eyes. We tested our algorithm on 7146 video frames of a news broadcast featuring sign language at 320 × 240 frame size, in which one or two persons appeared. Although gesturing hands often hid faces and interrupted tracking, 89% of faces were correctly tracked. We implemented the system on a PC with a Xeon 2.2-GHz CPU, running at 15 frames/second without any special hardware.
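The scale-adaptation idea above, a pyramid of roughly 1/√2 scale-down images with the level chosen from the distance between the eyes, can be sketched as follows; the nominal template size `target` is a hypothetical value, not a parameter from the paper:

```python
import math

def pyramid_scales(levels):
    """Successive ~1/sqrt(2) scale-down factors, as in the image
    pyramid described above (level 0 is the original resolution)."""
    return [(1 / math.sqrt(2)) ** k for k in range(levels)]

def select_level(eye_distance, target=20.0, levels=8):
    """Pick the pyramid level at which the scaled eye distance is
    closest to a nominal template size `target` (hypothetical)."""
    scales = pyramid_scales(levels)
    return min(range(levels),
               key=lambda k: abs(eye_distance * scales[k] - target))
```

Doubling the apparent face size then simply moves the search two pyramid levels down, so the same fixed-size SSR filter and SVM verifier keep working across a wide range of face scales.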


Computer Vision and Pattern Recognition | 2003

Human Factors Evaluation of a Vision-Based Facial Gesture Interface

Gamhewage Chaminda de Silva; Michael J. Lyons; Shinjiro Kawato; Nobuji Tetsutani

We adapted a vision-based face tracking system for cursor control by head movement. An additional vision-based algorithm allowed the user to enter a click by opening the mouth. The Fitts' law information throughput of cursor movements was measured to be 2.0 bits/sec with the ISO 9241-9 international standard method for testing input devices. A usability assessment was also conducted, and we report and discuss the results. A practical application of this facial gesture interface was studied: text input using the Dasher system, which allows a user to type by moving the cursor. The measured typing speed was 7-12 words/minute, depending on the level of user expertise. Performance of the system is compared to a conventional mouse interface.
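The 2.0 bits/sec figure follows the ISO 9241-9 convention of dividing the Shannon index of difficulty of a pointing task by the observed movement time; a minimal sketch:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation used in ISO 9241-9: ID = log2(D/W + 1), in bits,
    for a target of width W at distance D."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/second: ID divided by movement time in seconds."""
    return index_of_difficulty(distance, width) / movement_time
```

For example, acquiring a target whose distance is three times its width in one second gives ID = log2(4) = 2 bits and thus a throughput of 2 bits/sec, the same order as the head-movement result reported above.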


Robot and Human Interactive Communication | 2001

Dynamic micro aspects of facial movements in elicited and posed expressions using high-speed camera

Shigeo Morishima; Tatsuo Yotsukura; Hiroshi Yamada; Hideko Uchida; Nobuji Tetsutani; Shigeru Akamatsu

The presented study investigated the dynamic aspects of facial movements in spontaneously elicited and posed facial expressions of emotion. We recorded participants' facial movements while they watched a set of emotion-eliciting films and while they posed typical facial expressions. The facial movements were recorded by a high-speed camera at 250 frames per second. We measured facial movements frame by frame in terms of displacements of facial feature points. This micro-temporal analysis showed that, although very subtle, there exists a characteristic onset asynchrony among the movements of the facial parts. Furthermore, the movements of the parts showed a common pattern of temporal change, although the speed and amount of each movement varied with expressional condition and emotion.
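The frame-by-frame displacement measurement and onset comparison can be sketched as below; the onset threshold is a hypothetical value, and at 250 frames/s a one-frame onset difference corresponds to 4 ms:

```python
def displacements(track):
    """Frame-to-frame Euclidean displacement of one facial feature
    point, given its (x, y) positions over a high-speed sequence."""
    return [((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            for (x0, y0), (x1, y1) in zip(track, track[1:])]

def onset_frame(track, threshold=0.5):
    """Index of the first inter-frame interval whose displacement
    exceeds `threshold` (hypothetical), i.e. when this facial part
    starts moving; None if it never moves."""
    for i, d in enumerate(displacements(track)):
        if d > threshold:
            return i
    return None
```

Comparing `onset_frame` across feature points on different facial parts is then enough to expose the subtle onset asynchrony the study reports.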


International Conference on Advanced Learning Technologies | 2004

Enhancing Web-based learning by sharing affective experience

Michael J. Lyons; Daniel Kluender; Nobuji Tetsutani

We suggest that the real-time visual display of affective signals such as respiration, pulse, and skin conductivity can allow users of Web-based tutoring systems insight into each other's felt bodily experience. This allows remotely interacting users to increase their experience of empathy, or shared feeling, and has the potential to enhance telelearning. We describe an implementation of such a system and present the results of a preliminary experiment in the context of a Web-based system for tutoring the writing of Chinese characters.


Pacific Rim International Conference on Artificial Intelligence | 2004

Vision based acquisition of mouth actions for human-computer interaction

Gamhewage Chaminda de Silva; Michael J. Lyons; Nobuji Tetsutani

We describe a computer vision based system that allows use of movements of the mouth for human-computer interaction (HCI). The lower region of the face is tracked by locating and tracking the position of the nostrils. The location of the nostrils determines a sub-region of the image from which the cavity of the open mouth may be segmented. Shape features of the open mouth can then be used for continuous real-time data input, for human-computer interaction. Several applications of the head-tracking mouth controller are described.
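The nostril-anchored sub-region idea can be sketched as follows; the geometric proportions (region width and vertical offset scaled by the nostril separation) are hypothetical stand-ins, not the authors' calibration:

```python
def mouth_search_region(left_nostril, right_nostril, scale=1.5):
    """Derive a rectangle (xmin, ymin, xmax, ymax) below the tracked
    nostrils in which to segment the open-mouth cavity.  All
    proportions here are hypothetical illustrations."""
    (lx, ly), (rx, ry) = left_nostril, right_nostril
    sep = rx - lx                      # nostril separation in pixels
    cx, cy = (lx + rx) / 2, (ly + ry) / 2
    half_w = sep * scale               # region half-width
    top = cy + sep * 0.5               # mouth lies below the nostrils
    bottom = top + sep * scale
    return (cx - half_w, top, cx + half_w, bottom)
```

Because the rectangle scales with the nostril separation, the search region tracks the face through translation and moderate changes in distance to the camera without re-detection.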


Archive | 2005

Video content creating apparatus

Kazuhiro Kuwabara; Noriaki Kuwahara; Kiyoshi Yasuda; Shinji Abe; Nobuji Tetsutani


ITE Technical Report 25.12 | 2001

CIRCLE-FREQUENCY FILTER AND ITS APPLICATION

Shinjiro Kawato; Nobuji Tetsutani


Educational Technology & Society | 2005

Supporting Empathy in Online Learning with Artificial Expressions

Michael J. Lyons; Daniel Kluender; Nobuji Tetsutani


Archive | 2004

Sensory drawing apparatus

Shunsuke Yoshida; Jun Kurumisawa; Haruo Noma; Nobuji Tetsutani

Collaboration


Dive into Nobuji Tetsutani's collaborations.

Top Co-Authors

Jun Kurumisawa

Chiba University of Commerce


Noriaki Kuwahara

Kyoto Institute of Technology
