Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yutaka Satoh is active.

Publication


Featured research published by Yutaka Satoh.


Pattern Recognition | 2003

Using selective correlation coefficient for robust image registration

Shun'ichi Kaneko; Yutaka Satoh; Satoru Igarashi

A new method for robust image registration, named the selective correlation coefficient, is proposed for matching images under ill-conditioned illumination or partial occlusion. A correlation mask image is generated for selecting the pixels of an image before matching. The mask image is derived from a binary-coded increment-sign image computed from any object image and the template. The masking rate of occluded regions is theoretically expected to be 0.5, while unoccluded regions have a much lower rate. Robustness in ill-conditioned environments is achieved because the inconsistent brightness of occluded regions can be excluded by the mask operation. Furthermore, a mask-enhancement procedure is proposed to obtain more stable robustness; it increases the effectiveness of masking, raising the masking rate of occluded regions to around 0.7. The paper includes a theoretical model and analysis of the proposed method, together with experimental results on real images.
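
The core of the method is to mask out pixels whose local brightness ordering disagrees between template and image before computing a correlation score. Below is a minimal sketch under simplified assumptions (grayscale numpy arrays, increment signs taken along the horizontal scan direction only); the function names are our own and the mask-enhancement step is omitted.

```python
import numpy as np

def increment_sign(img):
    # Binary-coded increment sign along the horizontal scan direction:
    # 1 where brightness is non-decreasing toward the next pixel, else 0.
    return (img[:, 1:] >= img[:, :-1]).astype(np.uint8)

def selective_correlation(template, patch):
    # The mask selects pixels whose increment signs agree between the
    # two images; occluded pixels agree only by chance (rate ~0.5),
    # so most of them are excluded from the correlation.
    mask = increment_sign(template) == increment_sign(patch)
    t = template[:, :-1][mask].astype(np.float64)
    p = patch[:, :-1][mask].astype(np.float64)
    if t.size == 0:
        return 0.0
    t -= t.mean()
    p -= p.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0
```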


Pattern Recognition | 2011

Object detection based on a robust and accurate statistical multi-point-pair model

Xinyue Zhao; Yutaka Satoh; Hidenori Takauji; Shun'ichi Kaneko; Kenji Iwata; Ryushi Ozaki

In this paper, we propose a robust and accurate background model, called Grayscale Arranging Pairs (GAP). The model is based on the statistical reach feature (SRF), which is defined as a set of statistical pair-wise features. Using the GAP model, moving objects are successfully detected under a variety of complex environmental conditions. The main concept of the proposed method is the use of multiple point pairs that exhibit a stable statistical intensity relationship as a background model. The intensity difference between pixels of the pair is much more stable than the intensity of a single pixel, especially in varying environments. Our proposed method focuses more on the history of global spatial correlations between pixels than on the history of any given pixel or local spatial correlations. Furthermore, we clarify how to reduce the GAP modeling time and present experimental results comparing GAP with existing object detection methods, demonstrating that superior object detection with higher precision and recall rates is achieved by GAP.
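
A minimal sketch of a multi-point-pair background model in the spirit of GAP, assuming a training stack of flattened background frames; the pair-selection rule and detection vote below are simplified illustrations, not the paper's exact SRF criterion.

```python
import numpy as np

def train_pairs(frames, n_pairs=8, margin=5, min_agree=0.95):
    # frames: (T, N) stack of flattened background frames.
    # For each pixel i, keep partner pixels j whose relation
    # I[i] > I[j] + margin held in nearly all training frames.
    T, N = frames.shape
    rng = np.random.default_rng(0)
    pairs = []
    for i in range(N):
        cand = rng.integers(0, N, size=64)
        diffs = frames[:, i:i + 1] - frames[:, cand]     # (T, 64)
        stable = (diffs > margin).mean(axis=0) >= min_agree
        pairs.append(cand[stable][:n_pairs])
    return pairs

def detect(frame, pairs, margin=5, vote=0.5):
    # A pixel is marked foreground when most of its learned
    # intensity relations are broken in the current frame.
    fg = np.zeros(frame.shape[0], dtype=bool)
    for i, js in enumerate(pairs):
        if len(js) == 0:
            continue
        broken = (frame[i] - frame[js]) <= margin
        fg[i] = broken.mean() > vote
    return fg
```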


EURASIP Journal on Image and Video Processing | 2007

An Omnidirectional Stereo Vision-Based Smart Wheelchair

Yutaka Satoh; Katsuhiko Sakaue

To support the safe, independent movement of disabled and elderly people, we developed an electric wheelchair that detects both potential hazards in the surrounding environment and the postures and gestures of the user. The wheelchair is equipped with the Stereo Omnidirectional System (SOS), which acquires omnidirectional color image sequences and range data simultaneously in real time. The first half of this paper introduces the SOS and the basic technology behind it. To use the multi-camera SOS on an electric wheelchair, we developed a fast, high-quality image-synthesizing method; a method of compensating for changes in the SOS attitude using attitude sensors is also introduced, which allows the SOS to be used regardless of its mounting attitude. The second half of this paper introduces the prototype electric wheelchair that was actually manufactured and the experiments conducted with it, and discusses the wheelchair's usability.


IEEE Region 10 Conference | 2005

Robust Background Subtraction based on Bi-polar Radial Reach Correlation

Yutaka Satoh; Katsuhiko Sakaue

Background subtraction algorithms are widely used to segment target objects from the background in images. In particular, the simple background subtraction algorithm is used in many systems because it is easy and inexpensive to implement. However, because it relies only on intensity differences, it has several problems, such as low tolerance to poor illumination and shadows and an inability to distinguish objects from the background when their intensities are similar. In an earlier study, we proposed a new statistic, radial reach correlation (RRC), for distinguishing similar and dissimilar areas when comparing background and target images at the pixel level, and achieved robust background subtraction by evaluating the local texture in images. In this study, we extend this method and develop a technique that ensures stable background separation even when the image texture is weak and the intensity distribution is biased.
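
As a rough illustration of radial reach correlation: for each pixel of the background image, fix one "reach point" per radial direction (the first pixel whose brightness differs by more than a threshold) together with the sign of the difference, then check in the current frame how many of those brightness orderings survive. The sketch below is a simplified reading of RRC, not the exact published formulation; thresholds and names are illustrative.

```python
import numpy as np

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
        (1, 1), (1, 0), (1, -1), (0, -1)]

def reaches(bg, y, x, T=10, max_reach=10):
    # Reach points: in each of 8 directions, the first pixel whose
    # background brightness differs from the center by more than T,
    # stored with the sign of that difference.
    pts = []
    c = int(bg[y, x])
    H, W = bg.shape
    for dy, dx in DIRS:
        for k in range(1, max_reach + 1):
            yy, xx = y + k * dy, x + k * dx
            if not (0 <= yy < H and 0 <= xx < W):
                break
            if abs(int(bg[yy, xx]) - c) > T:
                pts.append((yy, xx, int(bg[yy, xx]) > c))
                break
    return pts

def is_foreground(bg, cur, y, x, min_agree=6):
    # Background-like pixels preserve the brightness order between
    # the center and its reach points; too few agreements means
    # the local texture changed, i.e. a foreground pixel.
    agree = sum((int(cur[yy, xx]) > int(cur[y, x])) == s
                for yy, xx, s in reaches(bg, y, x))
    return agree < min_agree
```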


Pattern Analysis and Applications | 2006

Moving object detection by mobile Stereo Omni-directional System (SOS) using spherical depth image

Sanae Shimizu; Kazuhiko Yamamoto; Caihua Wang; Yutaka Satoh; Hideki Tanahashi; Yoshinori Niwa

Moving object detection with a mobile image sensor is an important task for mobile surveillance systems operating in real environments. In this paper, we propose a novel method that effectively solves this problem by using a Stereo Omni-directional System (SOS), which obtains both color and depth images of the environment in real time with a complete spherical field of view. Taking advantage of the fact that objects never leave the SOS field of view, we develop a method that stably detects the regions of moving objects under arbitrary movement and pose changes of the SOS, using the spherical depth image sequence the SOS produces. The method first predicts the depth image for the current time from the depth image obtained at the previous time and the ego-motion of the SOS, and then detects moving objects by comparing the predicted depth image with the actual one obtained at the current time.
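
A minimal sketch of the predict-and-compare step for an equirectangular (longitude-latitude) depth map, assuming rotation-only ego-motion; translation compensation and the actual SOS geometry are omitted. `R` is assumed to map current-frame coordinates into previous-frame coordinates, and `tol` is in the depth unit.

```python
import numpy as np

def predict_depth(prev_depth, R):
    # Rotation-only prediction: remap the previous spherical depth map
    # by expressing each current viewing ray in the previous frame.
    H, W = prev_depth.shape
    v, u = np.mgrid[0:H, 0:W]
    lon = (u / W) * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v / H) * np.pi
    rays = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
    r = rays @ R.T                    # current ray -> previous frame
    lon2 = np.arctan2(r[..., 1], r[..., 0])
    lat2 = np.arcsin(np.clip(r[..., 2], -1, 1))
    u2 = ((lon2 + np.pi) / (2 * np.pi) * W).astype(int) % W
    v2 = ((np.pi / 2 - lat2) / np.pi * H).astype(int).clip(0, H - 1)
    return prev_depth[v2, u2]

def moving_mask(prev_depth, cur_depth, R, tol=0.2):
    # Pixels whose observed depth deviates from the prediction are
    # candidate moving-object regions.
    return np.abs(cur_depth - predict_depth(prev_depth, R)) > tol
```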


Asian Conference on Computer Vision | 2014

Extended Co-occurrence HOG with Dense Trajectories for Fine-Grained Activity Recognition

Hirokatsu Kataoka; Kiyoshi Hashimoto; Kenji Iwata; Yutaka Satoh; Nassir Navab; Slobodan Ilic; Yoshimitsu Aoki

In this paper we propose a novel feature descriptor, Extended Co-occurrence HOG (ECoHOG), and integrate it with dense point trajectories, demonstrating its usefulness in fine-grained activity recognition. The feature is inspired by the original Co-occurrence HOG (CoHOG), which is based on histograms of occurrences of pairs of image gradients. Instead of relying on pure occurrence counts, we accumulate the sum of gradient magnitudes of co-occurring gradient pairs. This gives greater weight to object boundaries and strengthens the difference between the moving foreground and the static background. We also couple ECoHOG with dense point trajectories extracted from video sequences using optical flow and demonstrate that the combination is extremely well suited to fine-grained activity recognition. Using our feature, we outperform state-of-the-art methods on this task and provide an extensive quantitative evaluation.
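
The difference from CoHOG fits in a few lines: where CoHOG counts co-occurring orientation pairs, ECoHOG accumulates their summed gradient magnitudes. The sketch below is a simplified single-patch version with illustrative offsets and bin counts, not the descriptor configuration used in the paper.

```python
import numpy as np

def ecohog(patch, n_bins=8, offsets=((0, 1), (1, 0), (1, 1))):
    # Magnitude-weighted co-occurrence of gradient orientations:
    # for each pixel and each offset, add the summed gradient
    # magnitudes of the pixel pair into the bin of their
    # quantized-orientation pair (CoHOG would just add 1).
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = (ori / (2 * np.pi) * n_bins).astype(int) % n_bins
    H, W = patch.shape
    feat = np.zeros((len(offsets), n_bins, n_bins))
    for k, (dy, dx) in enumerate(offsets):
        a = bins[:H - dy, :W - dx]
        b = bins[dy:, dx:]
        w = mag[:H - dy, :W - dx] + mag[dy:, dx:]
        np.add.at(feat[k], (a, b), w)
    return feat.ravel()
```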


British Machine Vision Conference | 2016

Recognition of Transitional Action for Short-Term Action Prediction using Discriminative Temporal CNN Feature

Hirokatsu Kataoka; Yudai Miyashita; Masaki Hayashi; Kenji Iwata; Yutaka Satoh

Herein, we address the class of transitional actions, a class lying between actions. Transitional actions should be useful for producing short-term action predictions while an action is still in transition. However, transitional action recognition is difficult because actions and transitional actions partially overlap each other. To deal with this issue, we propose a subtle motion descriptor (SMD) that captures the subtle differences between actions and transitional actions. The two primary contributions of this paper are as follows: (i) defining transitional actions for short-term action prediction, which permits earlier predictions than early action recognition, and (ii) using a convolutional neural network (CNN) based SMD to draw a clear distinction between actions and transitional actions. Using three different datasets, we show that our proposed approach produces better results than other state-of-the-art models. The experimental results clearly show the recognition performance of our proposed model, as well as its ability to capture temporal motion in transitional actions.
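
As a loose illustration of a CNN-based subtle motion descriptor, the sketch below differences per-frame CNN features over time and pools the result. Feature extraction itself is assumed to have happened elsewhere, and the pooling and normalization choices are ours, not necessarily the paper's.

```python
import numpy as np

def subtle_motion_descriptor(feats, step=1):
    # feats: (T, D) array of per-frame CNN features.
    # Frame-to-frame feature differences emphasize the small temporal
    # changes that separate a transitional action from the actions
    # around it; mean and max pooling summarize the clip.
    diffs = np.abs(feats[step:] - feats[:-step])      # (T - step, D)
    desc = np.concatenate([diffs.mean(axis=0), diffs.max(axis=0)])
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```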


Pattern Recognition | 2013

Robust face recognition using the GAP feature

Xinyue Zhao; Zaixing He; Shuyou Zhang; Shun'ichi Kaneko; Yutaka Satoh

In this paper, we propose a novel approach to face recognition based on Grayscale Arranging Pairs (GAP). A facial model is built from the stable point pairs of the GAP feature, and the similarity between the facial model and an input facial image is computed by checking whether the intensity relationships of these point pairs are preserved. Unlike current face recognition algorithms, GAP is a robust holistic feature that retains its local properties. By using the stable intensity relationships of multiple point pairs, the GAP feature is highly invariant and robust to illumination variations. At the same time, it describes holistic information across the entire facial image, which is closer to the human recognition mechanism. In addition, a novel weighting model that exploits the local characteristics of faces is applied in the framework, leading to higher face recognition accuracy. We compare the proposed method with four other well-known methods on the Extended Yale B and PIE face databases; the experimental results show that the proposed method achieves outstanding recognition results.
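
The similarity check reduces to asking how many enrolled intensity orderings survive in the probe image. A minimal sketch, assuming the stable point pairs have already been selected (the weighting model is omitted):

```python
import numpy as np

def pair_order_similarity(model_pairs, image):
    # model_pairs: list of (i, j) flattened-pixel index pairs whose
    # intensity order I[i] > I[j] was stable across enrollment images.
    # The score is the fraction of pairs whose order is preserved in
    # the probe; illumination shifts that preserve the order of both
    # pixels leave the score unchanged.
    flat = image.ravel().astype(np.int32)
    kept = sum(flat[i] > flat[j] for i, j in model_pairs)
    return kept / len(model_pairs) if model_pairs else 0.0
```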


Advanced Video and Signal Based Surveillance | 2013

Co-occurrence-based adaptive background model for robust object detection

Dong Liang; Shun'ichi Kaneko; Manabu Hashimoto; Kenji Iwata; Xinyue Zhao; Yutaka Satoh

An illumination-invariant background model for detecting objects in dynamic scenes is proposed. It is robust to sudden illumination fluctuations as well as bursty background motion. Unlike previous work, it distinguishes objects from a dynamic background by using the co-occurrence between a target pixel and its supporting pixels, in the form of multiple pixel pairs. Experiments on several challenging datasets demonstrate robust object detection in various environments.
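
A rough sketch of the supporting-pixel idea: each supporting pixel predicts the target's intensity via a learned offset, so a global illumination change that shifts both pixels together does not break the prediction, while a true object does. The selection criterion and voting rule below are simplified stand-ins for the published model.

```python
import numpy as np

def fit_supports(frames, i, support_idx):
    # frames: (T, N) stack of flattened background frames.
    # Learn, for each supporting pixel j, the average intensity
    # offset to target pixel i over the background frames.
    return (frames[:, i:i + 1] - frames[:, support_idx]).mean(axis=0)

def detect_pixel(frame, i, support_idx, offsets, tol=15, vote=0.5):
    # Each supporting pixel predicts the target's intensity; the
    # target is foreground when most predictions miss.
    preds = frame[support_idx] + offsets
    miss = np.abs(frame[i] - preds) > tol
    return miss.mean() > vote
```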


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2003

Event Detection for a Visual Surveillance System Using Stereo Omni-directional System

Hiroki Watanabe; Hideki Tanahashi; Yutaka Satoh; Yoshinori Niwa; Kazuhiko Yamamoto

In this paper, we propose an automatic surveillance system based on a stereo omni-directional system that detects events in which a person enters or leaves a room and in which an object appears or disappears. It is important for a video surveillance system to detect events automatically and provide useful information. A stereo omni-directional system captures color and disparity images in all directions from an observation point simultaneously and in real time. Using background and frame subtraction, our system detects foreground pixels, analyzes them, and estimates the status of human and object regions.
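
Combining the two subtractions separates moving regions from static scene changes, which is what the event logic needs: a person produces both background and frame differences, while an object that appeared or disappeared produces only a background difference. A minimal sketch, with illustrative thresholds:

```python
import numpy as np

def classify_foreground(bg, prev, cur, tb=25, tf=15):
    # Background subtraction (difference from a reference background)
    # combined with frame subtraction (difference between consecutive
    # frames): pixels differing in both are moving (a person); pixels
    # differing only from the background are static changes (an
    # object appeared or disappeared).
    bg_diff = np.abs(cur.astype(int) - bg.astype(int)) > tb
    fr_diff = np.abs(cur.astype(int) - prev.astype(int)) > tf
    moving = bg_diff & fr_diff
    static_change = bg_diff & ~fr_diff
    return moving, static_change
```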

Collaboration


Dive into Yutaka Satoh's collaborations.

Top Co-Authors

Kenji Iwata

National Institute of Advanced Industrial Science and Technology

Katsuhiko Sakaue

National Institute of Advanced Industrial Science and Technology