Publication


Featured research published by Kazuhiro Fukui.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Face recognition using temporal image sequence

Osamu Yamaguchi; Kazuhiro Fukui; Kenichi Maeda

We present a face recognition method using temporal image sequences. As input we utilize multiple face images rather than a single shot, so that the input reflects variation in facial expression and face direction. For identification of the face, we form a subspace from the image sequence and apply the Mutual Subspace Method, in which the similarity is defined by the angle between the subspace of the input and those of the references. We demonstrate the effectiveness of the proposed method through several experimental results.
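
For context, here is a minimal sketch of the subspace comparison behind the Mutual Subspace Method, assuming each face image is flattened to a vector, each sequence is summarized by a PCA subspace, and similarity is taken as the squared cosine of the smallest canonical angle. The function names and the 5-dimensional subspace are illustrative, not the authors' implementation.

```python
# Minimal MSM sketch: subspaces from image sets, similarity from canonical angles.
import numpy as np

def subspace_basis(images, dim=5):
    """Orthonormal basis of the subspace spanned by a set of image vectors.

    Assumes `dim` is no larger than the number of images in the set.
    """
    X = np.asarray(images, dtype=float).reshape(len(images), -1).T  # pixels x frames
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]                                               # pixels x dim

def msm_similarity(basis_a, basis_b):
    """Squared cosine of the smallest canonical angle between two subspaces."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(s[0] ** 2)        # s[0] = cos(theta_1), the largest cosine

# Usage sketch: pick the reference subspace most similar to the input sequence.
# ref_bases = {name: subspace_basis(seq) for name, seq in gallery.items()}
# query = subspace_basis(input_sequence)
# best = max(ref_bases, key=lambda n: msm_similarity(query, ref_bases[n]))
```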


ISRR | 2005

Face Recognition Using Multi-viewpoint Patterns for Robot Vision

Kazuhiro Fukui; Osamu Yamaguchi

This paper introduces a novel approach for face recognition using multiple face patterns obtained in various views for robot vision. A face pattern may change dramatically due to changes in the relation between the positions of a robot, a subject and light sources. As a robot is not generally able to ascertain such changes by itself, face recognition in robot vision must be robust against variations caused by the changes. Conventional methods using a single face pattern are not capable of dealing with the variations of face pattern. In order to overcome the problem, we have developed a face recognition method based on the constrained mutual subspace method (CMSM) using multi-viewpoint face patterns attributable to the movement of a robot or a subject. The effectiveness of our method for robot vision is demonstrated by means of a preliminary experiment.
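
A hedged sketch of the constrained variant (CMSM), assuming the common description of the constraint subspace as the minor non-zero eigenvectors of the summed class projection matrices; the helper names, dimensions, and tolerance below are hypothetical, and details may differ from the authors' system.

```python
# CMSM sketch: project class/input subspaces onto a constraint subspace that
# suppresses components shared by all classes, then compare as in MSM.
import numpy as np

def constraint_subspace(class_bases, dim, tol=1e-10):
    """Eigenvectors of the summed class projection matrices with the smallest
    non-zero eigenvalues (the 'generalized difference subspace' often used to
    describe CMSM). For illustration only; real systems typically work in a
    reduced feature space rather than on raw pixel dimensions."""
    d = class_bases[0].shape[0]
    G = np.zeros((d, d))
    for U in class_bases:
        G += U @ U.T                      # sum of projection matrices
    w, V = np.linalg.eigh(G)              # eigenvalues in ascending order
    nonzero = np.where(w > tol)[0]        # stay inside the span of the classes
    return V[:, nonzero[:dim]]

def project_onto_constraint(U, C):
    """Express a subspace basis in the constraint subspace and re-orthonormalize."""
    Q, _ = np.linalg.qr(C.T @ U)
    return Q

def cmsm_similarity(U_in, U_ref, C):
    s = np.linalg.svd(project_onto_constraint(U_in, C).T @
                      project_onto_constraint(U_ref, C), compute_uv=False)
    return float(s[0] ** 2)
```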


Workshop on Applications of Computer Vision | 1992

Multiple object tracking system with three level continuous processes

Kazuhiro Fukui; Hiroaki Nakai; Yoshinori Kuno

Reports a system for detecting human-like moving objects in time-varying images. The authors show how it is possible to detect the image trajectories of people moving in ordinary indoor scenes. The system consists of three subprocesses: changing region detection, moving object tracking and movement interpretation. The processes are executed in parallel so that each one can recover from the others' errors. This ensures the reliable detection of the trajectories in difficult cases such as movement across complicated backgrounds. The authors have built a trial detection system using a parallel image processing system. The details of the trial system and experimental results of walking person detection are described.
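
As an illustration of the first level only, here is a minimal change-region detector based on simple frame differencing; the abstract does not specify the actual detector, so the grayscale differencing and the threshold value are assumptions for illustration.

```python
# Changing-region detection sketch: threshold the per-pixel frame difference.
import numpy as np

def changing_regions(prev_frame, curr_frame, threshold=20):
    """Binary mask of pixels whose intensity changed by more than `threshold`."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold

# Downstream levels would group the mask into regions, track the regions over
# time, and interpret the resulting trajectories; errors at one level can be
# corrected by feedback from the other levels running in parallel.
```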


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Robust lip contour extraction using separability of multi-dimensional distributions

Tomokazu Wakasugi; Masahide Nishiura; Kazuhiro Fukui

We present a lip contour extraction method using separability of color intensity distributions. Usually it is difficult to robustly extract the outer lip contour mainly because of the following two problems. First, the outer lip contour is often blurred. Secondly, the contrast between the skin and the lip region is often reduced by transformation from the color intensity to the gray scale intensity. To overcome these two problems we propose an edge detection method in which edge strength is defined as separability of two color intensity distributions. We apply the proposed method to lip contour extraction using an active contour model. We present several experimental results demonstrating the effectiveness of the proposed method.
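
A hedged sketch of the separability measure used as edge strength, generalized here to multi-channel pixel values via a Fisher-style scatter ratio; the exact multi-dimensional formulation in the paper may differ from this illustration.

```python
# Separability of two pixel-value distributions as an edge-strength measure.
import numpy as np

def separability(region1, region2):
    """Ratio of between-region scatter to total scatter for two pixel sets.

    region1, region2: arrays of shape (n_pixels, n_channels), e.g. RGB values
    sampled from the two sides of a candidate edge. Returns a value in [0, 1];
    values near 1 indicate well-separated distributions, i.e. a strong edge.
    """
    r1 = np.asarray(region1, dtype=float)
    r2 = np.asarray(region2, dtype=float)
    n1, n2 = len(r1), len(r2)
    m1, m2 = r1.mean(axis=0), r2.mean(axis=0)
    m = (n1 * m1 + n2 * m2) / (n1 + n2)
    between = n1 * np.sum((m1 - m) ** 2) + n2 * np.sum((m2 - m) ** 2)
    total = np.sum((np.vstack([r1, r2]) - m) ** 2)
    return between / total if total > 0 else 0.0
```

Because the measure is computed directly on color vectors, the edge survives even when conversion to grayscale would wash out the skin-lip contrast, which is the second problem the abstract identifies.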


Lecture Notes in Computer Science | 2004

Towards 3-Dimensional Pattern Recognition

Kenichi Maeda; Osamu Yamaguchi; Kazuhiro Fukui

3-dimensional pattern recognition requires the definition of a similarity measure between 3-dimensional patterns. We discuss how to match 3-dimensional patterns, which are represented by a set of images taken from multiple directions and approximately represented by subspaces. The proposed method is to calculate the canonical angles, in particular the third smallest angle between two subspaces. We demonstrate the viability of the proposed method by performing a pilot study of face recognition.
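
For reference, the standard definition of canonical angles that the matching builds on; the use of the third-smallest angle comes from the abstract above.

```latex
% Canonical angles between two k-dimensional subspaces with orthonormal bases U and V.
\[
  \cos\theta_i \;=\; \sigma_i\!\left(U^{\mathsf T} V\right),
  \qquad 0 \le \theta_1 \le \theta_2 \le \dots \le \theta_k \le \tfrac{\pi}{2}
\]
```

Here σ_i denotes the i-th largest singular value of UᵀV, so θ1 is the smallest angle. Rather than relying only on θ1, as the basic mutual subspace comparison does, the method highlighted above bases the similarity on the third-smallest angle θ3.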


Machine Vision and Applications | 2004

Ship identification in sequential ISAR imagery

Atsuto Maki; Kazuhiro Fukui

We have developed an online system that automatically identifies ships observed in a rapidly updating sequence of range-Doppler images produced by inverse synthetic aperture radar (ISAR). In the system, in order to cope with the noise that invariably arises from the physics of imaging, we propose to employ a multiframe image processing algorithm that stably extracts profiling as a basic feature reflecting all characteristics of a target. For ship identification, representing the extracted profiles as high-dimensional vectors, we apply vector analysis using the recently proposed constrained mutual subspace method (CMSM). The system currently works on an ordinary PC at 5 frames/s and achieves feasible identification performance. The system is verified using simulated data.
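
One plausible reading of the multiframe profile extraction, sketched under the assumption that each range-Doppler frame is collapsed to a range profile and that profiles are averaged over frames to suppress noise; the paper's actual algorithm is not spelled out in the abstract, so treat this as illustration only.

```python
# Multiframe range-profile sketch: integrate Doppler energy per range bin,
# then average over frames to stabilize the feature.
import numpy as np

def range_profile(frames):
    """frames: array of shape (n_frames, n_range_bins, n_doppler_bins)."""
    per_frame = np.abs(np.asarray(frames, dtype=float)).sum(axis=2)  # energy per range bin
    profile = per_frame.mean(axis=0)                                 # average over frames
    return profile / (np.linalg.norm(profile) + 1e-12)               # unit-norm feature vector

# The resulting profile vectors can then be compared against reference classes
# with the subspace methods described above (e.g. CMSM).
```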


European Conference on Computer Vision | 2002

Constructing Illumination Image Basis from Object Motion

Akiko Nakashima; Atsuto Maki; Kazuhiro Fukui

We propose to construct a 3D linear image basis which spans an image space of arbitrary illumination conditions, from images of a moving object observed under a static lighting condition. The key advance is to utilize the object motion which causes illumination variance on the object surface, rather than varying the lighting, and thereby simplifies the environment for acquiring the input images. Since we then need to re-align the pixels of the images so that the same view of the object can be seen, the correspondence between input images must be solved despite the illumination variance. In order to overcome the problem, we adapt the recently introduced geotensity constraint that accurately governs the relationship between four or more images of a moving object. Through experiments we demonstrate that an equivalent 3D image basis is indeed computable and available for recognition or image rendering.
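
A sketch of the final basis-construction step only, assuming the input images have already been re-aligned to a common view (the part the geotensity constraint handles) and that the surface is Lambertian without cast shadows; the function name is hypothetical.

```python
# Rank-3 illumination basis from pixel-aligned images of one view.
import numpy as np

def illumination_basis(aligned_images):
    """aligned_images: list of pixel-aligned images showing the same view
    under different effective lighting (caused here by the object's motion)."""
    X = np.asarray(aligned_images, dtype=float).reshape(len(aligned_images), -1).T
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :3]      # pixels x 3 basis spanning the illumination image space

# Any image of that view under a new light can then be approximated as a
# linear combination of the three basis images.
```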


Systems and Computers in Japan | 1995

Detection of moving objects with three‐level continuous modules

Hiroaki Nakai; Kazuhiro Fukui; Yoshinori Kuno

This paper proposes a method of dynamic image processing which can automatically detect a moving object from the image of a walking human. The system is composed of three-level functional modules: detection of the changing region, tracking of the moving object, and interpretation of the motion. Each module employs simple processing whose individual precision is not high; the system as a whole, however, achieves high reliability because of error recovery among the modules.

The proposed method is implemented on a multiprocessor system, and a moving-object detection system is constructed which can extract the locus of motion on the image in real time. Past moving-object detection methods had to impose constraints on the scene to be processed, whereas the proposed experimental system operates stably in general environments. An experiment was conducted using the experimental system to extract the loci of motion of passersby in a general store, and the method is shown to be effective in a general environment.


International Conference on Image Analysis and Processing | 2001

ISAR image analysis by subspace method: automatic extraction and identification of ship profile

Atsuto Maki; Kazuhiro Fukui; Kazunori Onoguchi; Kenichi Maeda

This paper deals with automatic identification of ships in images produced by inverse synthetic aperture radar (ISAR). The ISAR technique reconstructs a rapidly updating sequence of range-Doppler image frames of the target. Due to the physics of imaging based on the target's angular motions, however, images are invariably noisy, and not all frames contain equally useful information. The thrust of this research is to cope with these issues by introducing: (i) a multiframe algorithm to stably extract profiling as a basic feature reflecting the entire characteristics of a target; and (ii) subspace analysis for identification of the extracted profiles, especially using the recently proposed constrained mutual subspace method (CMSM). Through preliminary experiments we demonstrate the effective performance of the proposed scheme.


Eye Tracking Research & Applications | 2000

“GazeToTalk”: a nonverbal interface with meta-communication facility (Poster Session)

Tetsuro Chino; Kazuhiro Fukui; Kaoru Suzuki

We propose a new human interface (HI) system named “GazeToTalk” that is implemented by vision-based gaze detection, acoustic speech recognition (ASR), and an animated human-like agent CG with facial expressions and gestures. The “GazeToTalk” system demonstrates that eye-tracking technologies can be utilized to improve HI effectively by working with other non-verbal messages such as facial expressions and gestures.

Conventional voice interface systems have the following serious drawbacks: (1) they cannot distinguish input voice from other noise, and (2) they cannot determine the intended hearer of each utterance. A “push-to-talk” mechanism can be used to ease these problems, but it spoils the advantages of voice interfaces (e.g. contact-less operation, suitability in hand-busy situations).

In real human dialogues, besides exchanging content messages, people use non-verbal messages such as gaze, facial expressions and gestures to establish or maintain conversations, or to recover from problems that arise in the conversation.

The “GazeToTalk” system simulates this kind of “meta-communication” facility by utilizing vision-based gaze detection, ASR, and a human-like agent CG. When the user intends to input voice commands, the user gazes at the agent on the display in order to request to talk, just as in daily human-human dialogues. This gaze is recognized by the gaze detection module, and the agent shows a particular facial expression and gestures as feedback to establish “eye contact.” The system then accepts or rejects speech input from the user depending on the state of the “eye contact.”

This mechanism allows the “GazeToTalk” system to accept only intended voice input and to ignore other voices and environmental noise, without forcing any extra operation on the user. We also demonstrate an extended mechanism to treat more flexible “eye contact” variations.

The preliminary experiments suggest that, in the context of meta-communication, non-verbal messages can be utilized to improve HI in terms of naturalness, friendliness and tactfulness.
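
An illustrative sketch of the eye-contact gating described above; the class and method names are hypothetical, and the real system additionally drives the agent's feedback expressions and gestures.

```python
# Eye-contact gating sketch: speech is accepted only while gaze detection
# reports that the user is looking at the on-screen agent.
class EyeContactGate:
    def __init__(self):
        self.eye_contact = False

    def on_gaze(self, looking_at_agent: bool):
        # Update the eye-contact state from the gaze detection module; the
        # real system would also trigger the agent's feedback animation here.
        self.eye_contact = looking_at_agent

    def on_speech(self, utterance: str):
        # Pass the utterance on only during eye contact; otherwise treat it
        # as noise or as talk addressed to someone else.
        if self.eye_contact:
            return utterance      # accepted voice command
        return None               # rejected
```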

Collaboration


Dive into Kazuhiro Fukui's collaboration.

Top Co-Authors

Atsuto Maki

Royal Institute of Technology