Publication


Featured research published by Tomio Echigo.


European Conference on Computer Vision | 2006

Gait recognition using a view transformation model in the frequency domain

Yasushi Makihara; Ryusuke Sagawa; Yasuhiro Mukaigawa; Tomio Echigo; Yasushi Yagi

Gait analyses have recently gained attention as methods of identification of individuals at a distance from a camera. However, appearance changes due to view direction changes cause difficulties for gait recognition systems. Here, we propose a method of gait recognition from various view directions using frequency-domain features and a view transformation model. We first construct a spatio-temporal silhouette volume of a walking person and then extract frequency-domain features of the volume by Fourier analysis based on gait periodicity. Next, our view transformation model is obtained with a training set of multiple persons from multiple view directions. In a recognition phase, the model transforms gallery features into the same view direction as that of an input feature, and so the features match each other. Experiments involving gait recognition from 24 view directions demonstrate the effectiveness of the proposed method.
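
As a rough illustration only (not the authors' code), the frequency-domain feature step could be sketched as follows, assuming a binary spatio-temporal silhouette volume and an already estimated gait period; function and parameter names are invented for this sketch.

```python
import numpy as np

def gait_frequency_features(silhouette_volume, gait_period, n_harmonics=3):
    """silhouette_volume: (T, H, W) binary silhouettes; gait_period: frames per gait cycle."""
    n_cycles = silhouette_volume.shape[0] // gait_period
    volume = silhouette_volume[: n_cycles * gait_period].astype(float)
    # DFT along the time axis; the gait cycle is the fundamental frequency
    spectrum = np.fft.fft(volume, axis=0)
    # Harmonic k of the gait cycle sits at frequency bin k * n_cycles
    bins = [k * n_cycles for k in range(n_harmonics + 1)]
    # Amplitude images per harmonic (the 0th is the average silhouette)
    return np.abs(spectrum[bins]) / (n_cycles * gait_period)
```

The view transformation model itself, learned from a training set of multiple persons seen from multiple directions, would then operate on these per-view amplitude features; that part is omitted here.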


Machine Vision and Applications | 1989

A camera calibration technique using sets of parallel lines

Tomio Echigo

This paper presents a new method for three-dimensional camera calibration in which the rotation parameters are decoupled from the translation parameters. First, the rotation parameters are obtained by projecting three sets of parallel lines independently of the translation parameters and the imaging distance from the lens to the image plane. The virtual line passing through the image center, which is calculated by perspective projection of a set of parallel lines, depends only on the rotation parameters. Next, the translation parameters and the imaging distance are analytically obtained. Experimental results are used to show how the camera model can be accurately reconstructed in an easily prepared environment.
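
The paper's own derivation uses a virtual line through the image center; as a loose, standard-geometry substitute for illustration only, the fact that projections of parallel line sets constrain the rotation independently of translation can be sketched as below. All names are hypothetical and this is not the paper's algorithm.

```python
import numpy as np

def vanishing_point(image_lines):
    """image_lines: homogeneous line vectors (a, b, c), each satisfying a*x + b*y + c = 0.
    Returns the least-squares common intersection as a homogeneous image point."""
    L = np.asarray(image_lines, dtype=float)
    _, _, vt = np.linalg.svd(L)
    return vt[-1]

def rotation_from_vanishing_points(v_list, K):
    """v_list: vanishing points of three mutually orthogonal sets of parallel lines.
    K: camera intrinsics. Each back-projected ray K^-1 v is parallel (up to sign) to a
    column of the rotation matrix, independently of the camera translation."""
    cols = [np.linalg.inv(K) @ v for v in v_list]
    R = np.column_stack([c / np.linalg.norm(c) for c in cols])
    U, _, Vt = np.linalg.svd(R)          # snap to the nearest orthonormal matrix
    return U @ Vt
```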


International Conference on Image Processing | 2002

Learning personalized video highlights from detailed MPEG-7 metadata

Alejandro Jaimes; Tomio Echigo; Masayoshi Teraguchi; Fumiko Satoh

We present a new framework for generating personalized video digests from detailed event metadata. In the new approach high level semantic features (e.g., number of offensive events) are extracted from an existing metadata signal using time windows (e.g., features within 16 sec. intervals). Personalized video digests are generated using a supervised learning algorithm which takes as input examples of important/unimportant events. Window-based features are extracted from the metadata and used to train the system and build a classifier that, given metadata for a new video, classifies segments into important and unimportant, according to a specific user, to generate personalized video digests. Our experimental results using soccer video suggest that extracting high level semantic information from existing metadata can be used effectively (80% precision and 85% recall using cross validation) in generating personalized video digests.
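
A hedged sketch of the window-based feature extraction, with hypothetical event types and a stand-in classifier (logistic regression) rather than the paper's learner:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EVENT_TYPES = ("offense", "defense", "foul")      # hypothetical metadata event labels

def window_features(events, video_len_sec, window_sec=16):
    """events: (timestamp_sec, event_type) pairs from the metadata.
    Returns one count-per-event-type feature vector per fixed-length time window."""
    n_windows = int(np.ceil(video_len_sec / window_sec))
    X = np.zeros((n_windows, len(EVENT_TYPES)))
    for t, etype in events:
        if etype in EVENT_TYPES:
            X[min(int(t // window_sec), n_windows - 1), EVENT_TYPES.index(etype)] += 1
    return X

# Training on a user's important/unimportant examples, then scoring a new video:
# clf = LogisticRegression().fit(window_features(train_events, train_len), labels)
# digest_windows = np.flatnonzero(clf.predict(window_features(new_events, new_len)))
```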


International Conference on Pattern Recognition | 2006

Adaptive Control of Video Display for Diagnostic Assistance by Analysis of Capsule Endoscopic Images

Vu Hai; Tomio Echigo; Ryusuke Sagawa; Keiko Yagi; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Yasushi Yagi

In this paper, we present a method for reducing diagnostic time by adaptively controlling the frame rate of a capsule endoscopic image sequence. The video sequence, which is captured over 8 hours, requires from 45 minutes to two hours of extreme concentration by examining doctors to make a diagnosis. The benefit of the method is that the sequence can be played at high speed in stable regions to save time and then slowed at rough changes, which helps ascertain suspicious findings more conveniently. To realize such a system, the capturing conditions are classified into groups corresponding to the changing states between two frames. The delay times of these frames are calculated by parametric functions, and the optimal parameter set was determined from evaluations by medical doctors. We conclude that the average diagnostic time can be reduced from 8 hours to about 30 minutes.
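
A much simplified sketch of the idea (not the paper's classifier or its parametric delay functions): measure the change between consecutive frames and map small changes to short display delays and large changes to long ones. The thresholds and delays below are arbitrary placeholders.

```python
import numpy as np

def frame_delays(frames, fast_delay=0.05, slow_delay=0.5, thresholds=(0.02, 0.08)):
    """frames: grayscale images scaled to [0, 1]. Returns a display delay (sec) per frame."""
    delays = [slow_delay]
    for prev, cur in zip(frames, frames[1:]):
        change = np.mean(np.abs(cur.astype(float) - prev.astype(float)))
        if change < thresholds[0]:
            delays.append(fast_delay)                     # stable region: play fast
        elif change < thresholds[1]:
            delays.append((fast_delay + slow_delay) / 2)  # moderate change
        else:
            delays.append(slow_delay)                     # rough change: slow down
    return delays
```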


Intelligent Robots and Systems | 2005

Calibration of lens distortion by structured-light scanning

Ryusuke Sagawa; Masaya Takatsuji; Tomio Echigo; Yasushi Yagi

This paper describes a new method to automatically calibrate the lens distortion of wide-angle lenses. We project structured-light patterns using a flat display to generate a map between the display and image coordinate systems. This approach has two advantages. First, it is easier to obtain correspondences between image and marker (display) coordinates near the edge of the camera image than with a usual marker such as a checkerboard. Second, since we can easily construct a dense map, simple linear interpolation is enough to create an undistorted image. Our method is not restricted to particular distortion parameters because it generates the map directly. We have evaluated the accuracy of our method, and the error is smaller than that obtained by parameter fitting.
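
Assuming the structured-light step has already produced, for every output pixel, the corresponding (fractional) source position in the distorted image, applying the dense map with bilinear interpolation might look like this sketch (OpenCV's cv2.remap performs the same operation on float32 maps):

```python
import numpy as np

def undistort_with_map(distorted, map_x, map_y):
    """distorted: (H, W) image; map_x, map_y: per-output-pixel source coordinates."""
    x0 = np.clip(np.floor(map_x).astype(int), 0, distorted.shape[1] - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, distorted.shape[0] - 2)
    dx, dy = map_x - x0, map_y - y0
    img = distorted.astype(float)
    # Bilinear blend of the four neighbouring source pixels
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])
```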


International Conference on Robotics and Automation | 2005

Stereovision with a Single Camera and Multiple Mirrors

El Mustapha Mouaddib; Ryusuke Sagawa; Tomio Echigo; Yasushi Yagi

Catadioptric omnidirectional stereovision can be created using several mirrors with a single camera. Such systems have interesting advantages, for instance for mobile robot navigation and environment reconstruction. Our paper aims at estimating the "quality" of such stereovision systems. What happens when the number of mirrors increases? Is it better to increase the baseline or to increase the number of mirrors? We propose criteria and a methodology to compare seven significant configurations: three existing systems and four new designs that we propose. We also present a global comparison between the best configurations.


International Conference on Biometrics | 2016

GEINet: View-invariant gait recognition using a convolutional neural network

Kohei Shiraga; Yasushi Makihara; Daigo Muramatsu; Tomio Echigo; Yasushi Yagi

This paper proposes a method of gait recognition using a convolutional neural network (CNN). Inspired by the great successes of CNNs in image recognition tasks, we feed in the most prevalent image-based gait representation, that is, the gait energy image (GEI), as an input to a CNN designed for gait recognition called GEINet. More specifically, GEINet is composed of two sequential triplets of convolution, pooling, and normalization layers, and two subsequent fully connected layers, which output a set of similarities to individual training subjects. We conducted experiments to demonstrate the effectiveness of the proposed method in terms of cross-view gait recognition in both cooperative and uncooperative settings using the OU-ISIR large population dataset. As a result, we confirmed that the proposed method significantly outperformed state-of-the-art approaches, in particular in verification scenarios.
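
A minimal PyTorch sketch of a GEINet-style network as described: two convolution/pooling/normalization triplets followed by two fully connected layers whose outputs are per-subject scores. The filter counts and kernel sizes below are placeholders, not the values reported in the paper.

```python
import torch
import torch.nn as nn

class GEINetSketch(nn.Module):
    def __init__(self, n_subjects):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 18, kernel_size=7), nn.ReLU(),
            nn.MaxPool2d(2), nn.LocalResponseNorm(5),
            nn.Conv2d(18, 45, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2), nn.LocalResponseNorm(5),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1024), nn.ReLU(),
            nn.Linear(1024, n_subjects),      # similarity scores over training subjects
        )

    def forward(self, gei):                   # gei: (B, 1, H, W) gait energy image
        return self.classifier(self.features(gei))

def gait_energy_image(silhouettes):
    """GEI: average of size-normalized binary silhouettes over the gait cycle."""
    return silhouettes.float().mean(dim=0, keepdim=True)
```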


International Conference on Computer Vision | 2007

High Dynamic Range Camera using Reflective Liquid Crystal

Hidetoshi Mannami; Ryusuke Sagawa; Yasuhiro Mukaigawa; Tomio Echigo; Yasushi Yagi

High dynamic range images (HDRIs) are needed for capturing scenes that include drastic lighting changes. This paper presents a method to improve the dynamic range of a camera by using a reflective liquid crystal. The system consists of a camera and a reflective liquid crystal placed in front of the camera. By controlling the attenuation rate of the liquid crystal, the scene radiance for each pixel is adaptively controlled. After the control, the original scene radiance is derived from the attenuation rate of the liquid crystal and the radiance obtained by the camera. A prototype system has been developed and tested for a scene that includes drastic lighting changes. The radiance of each pixel was independently controlled and the HDRIs were obtained by calculating the original scene radiance from these results.
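
Under a linear camera-response assumption, the recovery step reduces to dividing the measured intensity by the per-pixel attenuation of the liquid crystal; the feedback law below is a made-up stand-in for the paper's control, included only to show the adaptive idea.

```python
import numpy as np

def recover_radiance(measured, attenuation, eps=1e-6):
    """measured: linear camera response in [0, 1]; attenuation: LC transmittance in (0, 1]."""
    return measured.astype(float) / np.maximum(attenuation, eps)

def update_attenuation(measured, attenuation, target=0.5, gain=0.5):
    """One feedback step: darken pixels near saturation, brighten dark ones (illustrative only)."""
    new_att = attenuation * (target / np.clip(measured, 1e-3, 1.0)) ** gain
    return np.clip(new_att, 0.01, 1.0)
```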


International Conference on Image Processing | 2000

Video summarization using reinforcement learning in eigenspace

Ken Masumitsu; Tomio Echigo

We propose video summarization using reinforcement learning. The importance score of each frame in a video is calculated from the user's actions in handling similar previous frames; if such frames were watched rather than skipped, a high score is assigned. To calculate the score, instead of using raw feature vectors extracted from images, we use feature vectors projected onto an eigenspace; as a result, we can deal with the features comprehensively. We also give an algorithm that uses the reinforcement learning method to create a personalized video summary. The summarization algorithm is applied to a soccer video to confirm its effectiveness.
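
A loose sketch of the eigenspace projection, assuming per-frame feature vectors are already extracted; the reinforcement-learning score update is reduced here to a simple drift toward watched frames, which is only a placeholder for the paper's method.

```python
import numpy as np

def project_to_eigenspace(frame_features, n_components=10):
    """frame_features: (N, D) raw feature vectors. Returns (N, n_components) projections."""
    X = frame_features - frame_features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # principal axes of the features
    return X @ vt[:n_components].T

def update_importance(scores, watched, lr=0.1):
    """watched: boolean per frame, derived from how users handled similar previous frames."""
    return scores + lr * (watched.astype(float) - scores)
```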


Medical Image Computing and Computer-Assisted Intervention | 2007

Contraction detection in small bowel from an image sequence of wireless capsule endoscopy

Hai Vu; Tomio Echigo; Ryusuke Sagawa; Keiko Yagi; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Yasushi Yagi

This paper describes a method for automatic detection of contractions in the small bowel by analyzing wireless capsule endoscopic images. Based on the characteristics of contraction images, a coherent procedure that includes analysis of temporal and spatial features is proposed. For temporal features, the image sequence is examined to detect candidate contractions through the changing number of edges, and the similarities between the frames of each possible contraction are evaluated to eliminate cases of low probability. For spatial features, descriptions of the directions at edge pixels are used to determine contractions using a classification method. The experimental results show the effectiveness of our method, which detects 83% of cases. Thus, this is a feasible method for developing tools to assist in diagnostic procedures in the small bowel.
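
A rough reconstruction of the temporal screening step only (edge-count changes flag candidate contractions); the edge detector and thresholds are my own choices, and the spatial edge-direction classifier is omitted.

```python
import numpy as np
from scipy import ndimage

def edge_count(gray):
    """Approximate edge-pixel count via Sobel gradient magnitude thresholding."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    return int(np.count_nonzero(mag > 0.2 * mag.max()))

def candidate_contractions(frames, rel_change=0.4):
    """Indices where the edge count jumps sharply between consecutive frames."""
    counts = np.array([edge_count(f) for f in frames], dtype=float)
    change = np.abs(np.diff(counts)) / np.maximum(counts[:-1], 1.0)
    return np.flatnonzero(change > rel_change)
```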

Collaboration


Dive into Tomio Echigo's collaborations.

Top Co-Authors

Ryusuke Sagawa, National Institute of Advanced Industrial Science and Technology
Hai Vu, Hanoi University of Science and Technology
Keiko Yagi, Kobe Pharmaceutical University
Yasuhiro Mukaigawa, Nara Institute of Science and Technology