
Publication


Featured research published by Katsuhiko Sakaue.


international conference on pattern recognition | 1996

Registration and integration of multiple range images for 3-D model construction

Takeshi Masuda; Katsuhiko Sakaue; Naokazu Yokoya

Registration and integration of measured data of real objects are becoming important in 3D modeling for computer graphics and computer-aided design. We propose a new algorithm for registering and integrating multiple range images to produce a geometric object surface model. The registration algorithm determines a set of rigid motion parameters that register a range image to a given mesh-based geometric model. The algorithm integrates the iterative closest point (ICP) algorithm with the least median of squares (LMedS) estimator. After registration, points in the input range image are classified into inliers and outliers according to the registration error between each data point and the model. The outliers are appended to the surface model so that they can be used in registration with subsequent range images. The parts classified as inliers by at least one registration are segmented out to be integrated. This process of registration and integration is iterated until all views are integrated. We successfully tested the proposed method on real range image sequences taken by a rangefinder. The method requires no preprocessing.
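A minimal NumPy sketch of one such registration step, combining a closest-point correspondence search, a least-squares rigid fit, and an LMedS-style robust inlier gate. This is our illustrative code under assumed conventions, not the authors' implementation; all function names are hypothetical.

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid motion (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp_lmeds_step(data, model):
    """One ICP iteration with a least-median-of-squares style inlier gate."""
    # closest-point correspondences (brute force, fine for a sketch)
    d2 = ((data[:, None, :] - model[None, :, :]) ** 2).sum(axis=2)
    matched = model[d2.argmin(axis=1)]
    R, t = rigid_from_correspondences(data, matched)
    resid = np.linalg.norm(data @ R.T + t - matched, axis=1)
    # robust scale from the median residual; gate at roughly 2.5 sigma
    sigma = 1.4826 * np.median(resid)
    inliers = resid <= max(2.5 * sigma, 1e-9)
    return R, t, inliers
```

Iterating this step until convergence, then carrying unmatched outliers forward, mirrors the register-then-integrate loop the abstract describes.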


international conference on pattern recognition | 2004

Head pose estimation by nonlinear manifold learning

Bisser Raytchev; Ikushi Yoda; Katsuhiko Sakaue

In this work we propose an isomap-based nonlinear alternative to the linear subspace method for manifold representation of view-varying faces. Being interested in user-independent head pose estimation, we extend the isomap model (J.B. Tenenbaum et al., 2000) so that it can map (high-dimensional) input data points that are not in the training set into the dimensionality-reduced space found by the model. From this representation, a pose parameter map relating the input face samples to view angles is learned. The proposed method is evaluated on a large database of multi-view face images in comparison with two other recently proposed subspace methods.
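A common generic way to map an unseen point into an embedding learned on a training set is to interpolate the embeddings of its nearest training neighbours. The sketch below shows that idea only; it is not the authors' isomap extension, and the function name is hypothetical.

```python
import numpy as np

def out_of_sample_embed(x_new, X_train, Y_train, k=5):
    """Map an unseen high-dimensional point into the low-dimensional space
    found on the training set, by distance-weighted averaging of the
    embeddings of its k nearest training neighbours."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)            # closer neighbours weigh more
    return (w[:, None] * Y_train[idx]).sum(axis=0) / w.sum()
```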


Computer Vision, Graphics, and Image Processing | 1983

Design and implementation of SPIDER—A transportable image processing software package

Hideyuki Tamura; Shigeyuki Sakane; Fumiaki Tomita; Naokazu Yokoya; Masahide Kaneko; Katsuhiko Sakaue

SPIDER is a general-purpose image processing software package consisting of over 400 FORTRAN IV subroutines for various image processing algorithms, plus several utility programs for managing them. The package was developed to enable extensive interchange and accumulation of programs among research groups, so high transportability of the software was emphasized above all in its design. In effect, all of the image processing subroutines are implemented to be completely free of I/O work such as file access or driving peripheral image devices. The specifications of SPIDER programs also regulate the style of comments in source programs and of documentation for the user's manual. Beyond integrating fundamental algorithms for the image processing community, SPIDER may also be useful as a research tool in other scientific disciplines. The design concepts, specifications, and contents of SPIDER are described.
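The I/O-free design principle translates directly to modern code: a routine takes in-memory arrays and returns arrays, leaving all file and device handling to the caller. A hypothetical example in that spirit:

```python
import numpy as np

def mean3x3(src):
    """A 3x3 mean filter written in the SPIDER spirit: the subroutine is
    completely free of I/O, taking an in-memory array and returning one,
    with file access and device handling left to the caller (edges wrap)."""
    acc = src.astype(float).copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                acc += np.roll(src.astype(float), (dy, dx), axis=(0, 1))
    return acc / 9.0
```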


international conference on computer vision | 2001

The Hand Mouse: GMM hand-color classification and mean shift tracking

Takeshi Kurata; Takashi Okuma; Masakatsu Kourogi; Katsuhiko Sakaue

This paper describes an algorithm to detect and track a hand in each image taken by a wearable camera. We primarily use color information; however, instead of pre-defined skin-color models, we dynamically construct hand- and background-color models by using a Gaussian mixture model (GMM) to approximate the color histogram. We use a spatial probability distribution of hand pixels both to obtain the estimated mean of hand color required by the restricted EM algorithm that estimates the GMM, and to classify hand pixels based on Bayes decision theory. Since a static distribution is inadequate for the hand-tracking stage, we translate the distribution with the hand motion based on the mean shift algorithm. Using the proposed method, we implemented the Hand Mouse, which uses the wearer's hand as a pointing device, on our wearable vision system.
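The mean shift step used to translate the distribution with the hand motion can be sketched as follows: given a per-pixel hand-probability map, repeatedly move a search window to the centroid of the probability mass it covers. This is our illustrative code, not the paper's implementation.

```python
import numpy as np

def mean_shift(prob, cx, cy, w, h, iters=20):
    """Shift a w-by-h window to the centroid of the probability mass it
    covers, repeating until the center stops moving (mean shift)."""
    for _ in range(iters):
        x0, x1 = max(0, int(cx - w / 2)), min(prob.shape[1], int(cx + w / 2))
        y0, y1 = max(0, int(cy - h / 2)), min(prob.shape[0], int(cy + h / 2))
        win = prob[y0:y1, x0:x1]
        mass = win.sum()
        if mass == 0:
            break                      # no hand pixels under the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx, ny = (xs * win).sum() / mass, (ys * win).sum() / mass
        if abs(nx - cx) < 0.5 and abs(ny - cy) < 0.5:
            return nx, ny              # converged
        cx, cy = nx, ny
    return cx, cy
```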


international symposium on wearable computers | 2001

A panorama-based method of personal positioning and orientation and its real-time applications for wearable computers

Masakatsu Kourogi; Takeshi Kurata; Katsuhiko Sakaue

In this paper, we describe an improved method of personal positioning and orientation using image registration between input video frames and panoramic images captured beforehand. In our previous work, we proposed a method of image registration based on an affine transform. However, an affine transform generally cannot register a frame to a panorama. We improved the previous method so that it can estimate projective transform parameters without severely increasing computational cost. We also made the method robust to lighting changes by using a weighted sum of the absolute differences of both brightness and its gradient between images. We confirmed that the improved method can estimate image registration parameters under conditions that defeated the previous method. Its computational cost increased by only 10-20%, and its software implementation is capable of real-time processing.
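The lighting-robust cost can be sketched as a weighted sum of two SAD terms, one on brightness and one on its gradient; the gradient term is unaffected by uniform brightness changes. This is a simplified illustration under our own weighting convention, not the paper's exact formulation.

```python
import numpy as np

def robust_sad(a, b, w=0.5):
    """Weighted sum of absolute differences of brightness and of its
    gradient; the gradient term makes the registration cost much less
    sensitive to global lighting changes."""
    a, b = a.astype(float), b.astype(float)
    gya, gxa = np.gradient(a)
    gyb, gxb = np.gradient(b)
    sad_i = np.abs(a - b).mean()
    sad_g = (np.abs(gxa - gxb) + np.abs(gya - gyb)).mean()
    return (1.0 - w) * sad_i + w * sad_g
```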


Eurasip Journal on Image and Video Processing | 2007

An Omnidirectional Stereo Vision-Based Smart Wheelchair

Yutaka Satoh; Katsuhiko Sakaue

To support safe, independent movement of the disabled and the aged, we developed an electric wheelchair that detects both potential hazards in the moving environment and the postures and gestures of the user. The wheelchair is equipped with the stereo omnidirectional system (SOS), which acquires omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multicamera SOS on an electric wheelchair, we developed a fast, high-quality image synthesizing method; a method of recovering SOS attitude changes using attitude sensors is also introduced, which allows the SOS to be used regardless of its mounting attitude. The second half of this paper introduces the prototype electric wheelchair we manufactured and the experiments conducted with it. The usability of the electric wheelchair is also discussed.


international conference on pattern recognition | 2000

Real-time camera parameter estimation from images for a mixed reality system

Takashi Okuma; Katsuhiko Sakaue; Haruo Takemura; Naokazu Yokoya

This paper describes a method of estimating the position and orientation of a camera for constructing a mixed reality (MR) system. In an MR system, 3D virtual objects should be merged into a 3D real environment at the right position in real time, and acquiring the user's viewing position and orientation is the main technical problem. The user's viewpoint can be determined by estimating the position and orientation of a camera from images taken at that viewpoint. Our method estimates the camera pose using the screen coordinates of captured color fiducial markers whose 3D positions are known. The method consists of three algorithms for perspective n-point problems and uses each algorithm selectively. The method also estimates the screen coordinates of untracked markers that are occluded or out of view. An experimental MR system based on the proposed method can seamlessly merge 3D virtual objects into a 3D real environment at the right position in real time, and allows users to look around an area in which numbers are placed.
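Predicting the screen coordinates of occluded or out-of-view markers reduces to projecting their known 3D positions through the current pose estimate. A minimal pinhole-projection sketch (our illustrative code; the function name and the simple focal-length-only camera model are assumptions):

```python
import numpy as np

def project_markers(points3d, R, t, f):
    """Project known 3-D marker positions through the current camera pose
    estimate (R, t) and focal length f; this also predicts the screen
    coordinates of markers that are occluded or outside the view."""
    Xc = points3d @ R.T + t            # world -> camera coordinates
    return f * Xc[:, :2] / Xc[:, 2:3]  # pinhole projection
```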


international workshop on advanced motion control | 2004

A natural feature-based 3D object tracking method for wearable augmented reality

Takashi Okuma; Takeshi Kurata; Katsuhiko Sakaue

In this paper, we describe a novel natural-feature-based 3D object tracking method. Our method determines the geometric relation between known 3D objects and a camera without using fiducial markers. Since it requires only a camera to determine this relation, it is suitable for wearable augmented reality (AR) systems. Our method combines two different types of tracking approaches: a bottom-up approach (BUA) and a top-down approach (TDA). We mainly use the BUA, because it produces accurate results at small computational cost. When the BUA cannot produce an accurate result, our method switches to the TDA to avoid mistracking. Experimental results demonstrate the accuracy and reliability of our method.
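The switching logic between the two approaches can be sketched as a simple fallback: trust the cheap bottom-up estimate while its residual stays small, and otherwise fall back to top-down prediction from the previous pose. All names here are hypothetical stand-ins, not the paper's API.

```python
def track(frame, prev_pose, bottom_up, top_down, max_error):
    """Hybrid tracking step: prefer the cheap, accurate bottom-up estimate;
    fall back to top-down prediction from the previous pose when the
    bottom-up residual is too large, to avoid mistracking."""
    pose, error = bottom_up(frame)
    if error <= max_error:
        return pose
    return top_down(frame, prev_pose)
```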


ieee region 10 conference | 2005

Robust Background Subtraction based on Bi-polar Radial Reach Correlation

Yutaka Satoh; Katsuhiko Sakaue

Background subtraction algorithms are widely used to segment target objects from the background in images. In particular, the simple background subtraction algorithm is used in many systems for its ease and low cost of implementation. However, because this algorithm relies only on the intensity difference, it has various problems, such as low tolerance to poor illumination and shadows and an inability to distinguish objects from the background when their intensities are similar. In an earlier study, we proposed a new statistic, radial reach correlation (RRC), for distinguishing similar and dissimilar areas when comparing background and target images at the pixel level, achieving robust background subtraction by evaluating the local texture of the images. In this study, we extend that method to ensure stable background separation even where the image texture is weak and the intensity distribution is biased.
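A much-simplified sketch of the radial-reach idea: at each pixel, compare the sign of the intensity difference toward a few radial reach points in the background against the same signs in the input; a pixel whose local texture pattern disagrees in many directions is foreground. This is our illustration of the general idea (wrap-around borders, fixed reach), not the authors' RRC or its bi-polar extension.

```python
import numpy as np

def rrc_foreground(bg, img, reach=2, min_agree=6):
    """Radial-reach-style comparison: at each pixel, compare the sign of
    the intensity difference to 8 radial reach points in the background
    and in the input; pixels whose local texture pattern disagrees in
    many directions are marked as foreground."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    agree = np.zeros(bg.shape, dtype=int)
    for dy, dx in dirs:
        shift = (-dy * reach, -dx * reach)
        sb = np.sign(np.roll(bg, shift, axis=(0, 1)) - bg)
        si = np.sign(np.roll(img, shift, axis=(0, 1)) - img)
        agree += (sb == si)
    return agree < min_agree   # True = foreground
```

Because only the signs of local differences are compared, a global brightness change leaves the decision unchanged, which is the source of the method's illumination robustness.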


international conference on multimodal interfaces | 2000

A Vision-Based Method for Recognizing Non-manual Information in Japanese Sign Language

Ming Xu; Bisser Raytchev; Katsuhiko Sakaue; Osamu Hasegawa; Atsuko Koizumi; Hirohiko Sagawa

This paper describes a vision-based method for recognizing non-manual information in Japanese Sign Language (JSL). This modality provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are arranged vertically to capture frontal and profile images of the JSL user, and head motions are classified into eleven patterns. Moment-based features and statistical motion features are adopted to represent these motion patterns, and the motion features are classified with linear discriminant analysis. Initial experimental results show that the method achieves a good recognition rate and can run in real time.
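The classification step rests on Fisher's linear discriminant, which for two classes can be sketched in a few lines: find the direction maximizing between-class over within-class scatter and threshold midway between the projected class means. This is a textbook two-class sketch, not the paper's eleven-class classifier.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: direction w maximizing
    between-class over within-class scatter, with the decision threshold
    c placed midway between the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    c = w @ (m0 + m1) / 2.0
    return w, c   # classify x as class 1 if w @ x > c
```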

Collaboration


Katsuhiko Sakaue's top co-authors:

Ikushi Yoda (National Institute of Advanced Industrial Science and Technology)
Takeshi Kurata (National Institute of Advanced Industrial Science and Technology)
Yutaka Satoh (National Institute of Advanced Industrial Science and Technology)
Kenji Iwata (National Institute of Advanced Industrial Science and Technology)
Bisser Raytchev (National Institute of Advanced Industrial Science and Technology)
Masakatsu Kourogi (National Institute of Advanced Industrial Science and Technology)