Yumi Iwashita
Kyushu University
Publications
Featured research published by Yumi Iwashita.
international conference on robotics and automation | 2007
Ryo Kurazume; Kaori Nakamura; Toshiyuki Okada; Yoshinobu Sato; Nobuhiko Sugano; Tsuyoshi Koyama; Yumi Iwashita; Tsutomu Hasegawa
In medical diagnostic imaging, X-ray CT scanners and MRI systems are widely used to examine the 3D shapes and internal structures of living organisms and bones. However, these apparatuses are generally very expensive and large, and they require prior arrangement before an examination, making them unsuitable for urgent fracture diagnosis in emergency treatment. This paper proposes a method to estimate a patient-specific 3D shape of a femur from only two fluoroscopic images using a parametric femoral model. First, we develop a parametric femoral model by statistical analysis of 3D femoral shapes created from CT images of 51 patients. Then, the pose and shape parameters of the parametric model are estimated from two 2D fluoroscopic images using a distance map constructed by the level set method. Experiments using synthesized images and fluoroscopic images of a phantom femur were carried out successfully, verifying the usefulness of the proposed method.
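The distance-map matching at the core of the pose/shape estimation can be illustrated in miniature: a distance map assigns every pixel its distance to the silhouette contour, so a projected model contour can be scored without explicit point correspondences. A minimal NumPy sketch on a synthetic silhouette (the paper constructs the map with the level set method; the brute-force transform and the function names here are illustrative only):

```python
import numpy as np

def distance_map(silhouette):
    """Distance from every pixel to the silhouette contour (brute force;
    the paper builds this map with the level set method instead)."""
    padded = np.pad(silhouette, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = np.argwhere((silhouette == 1) & (interior == 0))
    yy, xx = np.mgrid[0:silhouette.shape[0], 0:silhouette.shape[1]]
    pix = np.stack([yy.ravel(), xx.ravel()], axis=1)
    d = np.sqrt(((pix[:, None, :] - contour[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(silhouette.shape)

def contour_cost(dist_map, pts):
    """Mean distance of projected model contour points to the image contour."""
    return dist_map[pts[:, 0], pts[:, 1]].mean()

# Synthetic "femur" silhouette: a filled disc in a 64x64 image.
yy, xx = np.mgrid[0:64, 0:64]
silhouette = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2).astype(np.uint8)
dmap = distance_map(silhouette)

# A model contour lying on the true boundary scores low...
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
on_boundary = np.stack([32 + 15 * np.sin(theta),
                        32 + 15 * np.cos(theta)], axis=1).astype(int)
# ...while a misplaced one scores higher, driving the parameter optimization.
shifted = on_boundary + np.array([6, 6])
print(contour_cost(dmap, on_boundary) < contour_cost(dmap, shifted))  # True
```

Minimizing such a cost over the pose and shape parameters of the parametric model is what drives the 2D-3D registration.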
international conference on pattern recognition | 2014
Yumi Iwashita; Asamichi Takamine; Ryo Kurazume; M. S. Ryoo
This paper introduces the concept of first-person animal activity recognition: the problem of recognizing activities from the viewpoint of an animal (e.g., a dog). Similar to first-person activity recognition scenarios in which humans wear cameras, our approach estimates activities performed by an animal wearing a camera. This enables monitoring and understanding of natural animal behaviors even when no people are around the animal. Applications include automated logging of animal behaviors for medical/biology experiments, monitoring of pets, and investigation of wildlife patterns. In this paper, we construct a new dataset composed of first-person animal videos obtained by mounting a camera on each of four pet dogs. The new dataset consists of 10 activities containing a heavy to fair amount of ego-motion. We implemented multiple baseline approaches that recognize activities from such videos using several types of global and local motion features. The baseline approaches recognize animal ego-actions as well as human-animal interactions, and we discuss the experimental results.
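As a rough illustration of a global motion feature for such videos, per-frame differences can be pooled over a spatial grid; this toy descriptor (not the paper's actual features) separates static clips from clips with heavy ego-motion:

```python
import numpy as np

def global_motion_feature(frames, grid=4):
    """Per-cell mean absolute frame difference over a clip: a crude global
    ego-motion descriptor (illustrative stand-in, not the paper's features)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))   # (T-1, H, W)
    h, w = frames.shape[1] // grid, frames.shape[2] // grid
    feat = [diffs[:, i * h:(i + 1) * h, j * w:(j + 1) * w].mean()
            for i in range(grid) for j in range(grid)]
    return np.array(feat)

# Synthetic clips: a static clip yields a zero descriptor, a clip with
# strong frame-to-frame change ("heavy ego-motion") a large one.
rng = np.random.default_rng(0)
static = np.repeat(rng.random((1, 32, 32)), 8, axis=0)
moving = rng.random((8, 32, 32))
print(global_motion_feature(static).sum() < global_motion_feature(moving).sum())  # True
```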
intelligent robots and systems | 2008
Ryo Kurazume; Hiroyuki Yamada; Kouji Murakami; Yumi Iwashita; Tsutomu Hasegawa
This paper presents a sensor network system consisting of distributed cameras and laser range finders for multiple-object tracking. Sensory information from the cameras is processed by the level set method in real time and integrated with range data obtained by the laser range finders in a probabilistic manner, using a novel combined SIR/MCMC particle filter. Although the conventional SIR particle filter is a popular technique for object tracking, it has disadvantages in practical applications, such as low tracking performance for multiple targets due to the degeneracy problem. In this paper, a new combined particle filter, consisting of a low-resolution MCMC particle filter and a high-resolution SIR particle filter, is proposed. Simultaneous tracking experiments with multiple moving targets were carried out successfully, verifying that the combined particle filter outperforms conventional particle filters in terms of the number of particles, processing speed, and tracking performance for multiple targets.
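The SIR half of the combined filter follows the standard predict-weight-resample loop. Below is a minimal 1-D tracking sketch with a toy Gaussian likelihood standing in for the paper's camera/range likelihoods, including the effective-sample-size test commonly used against the degeneracy problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, steps = 500, 30
true_pos, estimate = 0.0, 0.0
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(steps):
    true_pos += 0.5                                         # target moves
    particles += 0.5 + rng.normal(0.0, 0.2, n_particles)    # predict (motion model)
    z = true_pos + rng.normal(0.0, 0.3)                     # noisy observation
    weights *= np.exp(-0.5 * ((z - particles) / 0.3) ** 2)  # importance weighting
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:        # effective sample size
        idx = rng.choice(n_particles, n_particles, p=weights)  # resample to
        particles = particles[idx]                             # fight degeneracy
        weights = np.full(n_particles, 1.0 / n_particles)
    estimate = np.sum(weights * particles)                  # posterior mean

print(abs(estimate - true_pos) < 1.0)  # True: tracks within observation noise
```

The paper's contribution is to pair such a high-resolution SIR filter with a low-resolution MCMC filter, which handles multiple targets more robustly than resampling alone.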
international conference on emerging security technologies | 2010
Yumi Iwashita; Ryosuke Baba; Koichi Ogawara; Ryo Kurazume
This paper presents a spatio-temporal 3D gait database and a view-independent method for identifying a person from gait. When a target changes walking direction relative to the direction recorded in the database, the correct classification rate drops because of the change in appearance. To deal with this problem, several methods based on a view transformation model, which converts walking images from one direction into virtual images from different viewpoints, have been proposed. However, the converted image may not coincide with the real one, since the target is not included in the training dataset used to obtain the transformation model. We therefore propose a view-independent person identification method that creates a database of virtual images synthesized directly from the target's 3D model. In the proposed method, we first build a spatio-temporal 3D gait database using multiple cameras, consisting of sequential 3D models of multiple walking people. Virtual images from multiple arbitrary viewpoints are then synthesized from the 3D models, and affine moment invariants are derived from the virtual images as gait features. In the identification phase, images of a target walking in an arbitrary direction are taken from one camera, and gait features are calculated. Finally, the person is identified and their walking direction is estimated. Experiments using the spatio-temporal 3D gait database show the effectiveness of the proposed method.
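The gait features used here are affine moment invariants. The simplest one, I1 = (mu20*mu02 - mu11^2) / mu00^4 (due to Flusser and Suk), can be computed directly from silhouette central moments; the sketch below (illustrative code, not from the paper) verifies its approximate invariance on a synthetic silhouette and an affinely stretched copy:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary (or grayscale) image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cy, cx = (yy * img).sum() / m00, (xx * img).sum() / m00
    return ((yy - cy) ** p * (xx - cx) ** q * img).sum()

def affine_invariant_i1(img):
    """First affine moment invariant: I1 = (mu20*mu02 - mu11^2) / mu00^4."""
    mu00 = central_moment(img, 0, 0)
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu11 = central_moment(img, 1, 1)
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4

# A disc and its affine image (an ellipse): I1 is the same for both,
# which is exactly why such features tolerate viewpoint-induced warps.
yy, xx = np.mgrid[0:200, 0:200]
disc = (((yy - 100) / 60.0) ** 2 + ((xx - 100) / 60.0) ** 2 <= 1).astype(float)
ellipse = (((yy - 100) / 30.0) ** 2 + ((xx - 100) / 90.0) ** 2 <= 1).astype(float)
print(np.isclose(affine_invariant_i1(disc), affine_invariant_i1(ellipse), rtol=0.05))  # True
```

The small tolerance only absorbs rasterization error; analytically both values equal 1/(16*pi^2).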
international conference on robotics and automation | 2009
Yumi Iwashita; Ryo Kurazume
This paper proposes a new person identification method using physiological and behavioral biometrics. Various person recognition systems have been proposed, and one of the more recently introduced human characteristics for person identification is gait. Although body shape has received little attention as a characteristic, it is closely related to gait and difficult to disassociate from it. The proposed technique therefore introduces a new hybrid biometric combining body shape (physiological) and gait (behavioral): the full spatio-temporal volume carved out by a walking person. In addition to this biometric, we extract characteristics unique to each individual as follows: we create the average image of the spatio-temporal volume, then form a new spatio-temporal volume from the differential images obtained by subtracting the average image from the original images. Affine moment invariants are derived from these biometrics and classified by a support vector machine. Using leave-one-out cross-validation, we estimate a correct classification rate of 94%.
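Leave-one-out cross-validation simply holds out each sample in turn and classifies it against all the others. A dependency-free sketch, with a 1-nearest-neighbour classifier standing in for the paper's support vector machine:

```python
import numpy as np

def leave_one_out_accuracy(features, labels):
    """Leave-one-out cross-validation with a 1-nearest-neighbour classifier
    (standing in for the paper's SVM to keep the sketch dependency-free)."""
    correct = 0
    for i in range(len(features)):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                        # hold out the test sample itself
        correct += labels[np.argmin(d)] == labels[i]
    return correct / len(features)

# Toy "gait feature" vectors: two well-separated subjects, three walks each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (3, 4)), rng.normal(5, 0.1, (3, 4))])
y = np.array([0, 0, 0, 1, 1, 1])
print(leave_one_out_accuracy(X, y))  # 1.0
```

With only a handful of samples per subject, leave-one-out makes the most of the data, which is presumably why it is used for the 94% estimate.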
international conference on robotics and automation | 2009
Ryo Kurazume; Yusuke Noda; Yukihiro Tobata; Kai Lingemann; Yumi Iwashita; Tsutomu Hasegawa
In order to construct three-dimensional shape models of large-scale architectural structures using a laser range finder, a number of range images are taken from various viewpoints. These images are aligned using post-processing procedures such as the ICP algorithm. In general, however, before the ICP algorithm is applied, the range images must be roughly aligned by a human operator so that the algorithm converges to precise positions. The present paper proposes a new modeling system using a group of multiple robots and an on-board laser range finder. Each measurement position is identified by a highly precise positioning technique called the Cooperative Positioning System (CPS), which utilizes the characteristics of the multiple-robot system. Thus, the proposed system can construct 3D shapes of large-scale architectural structures without any post-processing procedure or manual registration; ICP is applied optionally for a subsequent refinement of the model. Measurement experiments in unknown, large indoor/outdoor environments were successfully carried out using the newly developed measurement system, which consists of three mobile robots named CPS-V. The paper concludes by generating a model of Dazaifu Tenmangu, a famous cultural heritage site, for its digital archive.
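The optional ICP refinement alternates nearest-neighbour matching with a closed-form rigid alignment. A minimal 2D sketch (synthetic landmark "scans"; the small initial offset mimics the residual error left after CPS-based placement):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching, then the optimal rigid
    transform (Kabsch/SVD) aligning src to its matched points in dst."""
    matches = dst[np.argmin(((src[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
    mu_s, mu_d = src.mean(0), matches.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matches - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Reference "scan": a 5x5 grid of landmark points, 2 m apart.
g = np.arange(0, 10, 2.0)
dst = np.array([[x, y] for x in g for y in g])
theta = np.deg2rad(3.0)                          # small residual misalignment,
R0 = np.array([[np.cos(theta), -np.sin(theta)],  # as left after rough placement
               [np.sin(theta),  np.cos(theta)]])
src = dst @ R0.T + np.array([0.15, -0.1])
for _ in range(5):
    src = icp_step(src, dst)
print(np.abs(src - dst).max() < 1e-6)  # True: alignment recovered
```

Because the initial offset is already small here, every nearest-neighbour match is correct and ICP snaps to the exact alignment; with a poor initial guess it can instead settle in a local minimum, which is the problem the CPS-based placement removes.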
Pattern Recognition Letters | 2012
Yumi Iwashita; Adrian Stoica; Ryo Kurazume
We propose a novel biometric method based on shadows (shadow biometrics, SB) and introduce an SB-based person identification method for vision-based surveillance systems. Conventional biometric identification based on body movements, as in gait recognition, uses cameras that provide a good view of the entire human body. Aerial search and surveillance systems see the human body only from a top view, with a smaller cross-section and less detail than in side views, a problem further aggravated by the lower resolution of such imagery. Shadows, i.e., body projections cast by the Sun or by artificial lights at night, offer biometric information about the body that cannot be seen directly in the top view. In this paper we use SB for person identification, automatically extracting shadows from captured video images and processing them to extract gait features, which are then analyzed with spherical harmonics. We demonstrate shadow-based person identification in experiments inside a building under artificial light and outdoors under the Sun. The introduced method, based on spherical harmonics, outperforms methods based on the Fourier transform, the gait energy image, and the active energy image. Furthermore, we show that combining body and shadow areas, as seen from an oblique camera on an upper floor of a building, performs better than using body-only or shadow-only information.
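One of the baselines compared against, the gait energy image (GEI), is simply the pixel-wise mean of aligned binary silhouettes over a gait cycle: stable body parts show up bright, moving parts gray. A toy sketch:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of aligned binary silhouettes over a gait cycle
    (one of the baselines the shadow-biometrics method is compared against)."""
    return np.mean(silhouettes.astype(float), axis=0)

# Toy cycle: a torso present in every frame, legs alternating between poses.
frames = np.zeros((4, 8, 6))
frames[:, 0:4, 2:4] = 1        # torso: in all 4 frames
frames[0::2, 4:8, 1:3] = 1     # leg pose A: frames 0 and 2
frames[1::2, 4:8, 3:5] = 1     # leg pose B: frames 1 and 3
gei = gait_energy_image(frames)
print(gei[1, 2], gei[5, 1], gei[5, 3])  # 1.0 0.5 0.5
```

The same averaging applies equally well to segmented shadow silhouettes, which is what makes GEI a natural baseline for shadow-based identification.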
Pattern Recognition Letters | 2014
Yumi Iwashita; Koichi Ogawara; Ryo Kurazume
Conventional methods of gait analysis for person identification use features extracted from a sequence of camera images taken during one or more gait cycles, implicitly assuming that the walking direction does not change. However, except in very particular cases, such as walking on a circle centered on the camera or along a line passing through the camera, there is always some degree of orientation change, most pronounced when the person is close to the camera. This change in the angle between the velocity vector and the position vector with respect to the camera degrades the performance of conventional methods. To address this issue, we propose a new method that provides improved identification under such orientation change. The proposed method uses a 4D gait database, consisting of multiple 3D shape models of walking people, together with accurate adaptive virtual image synthesis. For each frame over the duration of a gait cycle, the walking direction of the subject is estimated, and a virtual image corresponding to the estimated direction is synthesized from the 4D gait database. Identification uses affine moment invariants as gait features. The efficiency of the proposed method is demonstrated through experiments on a database of 42 subjects.
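The per-frame walking-direction estimate that selects the virtual view can be illustrated with finite differences on the subject's centroid trajectory (a simplified stand-in for the paper's estimation step):

```python
import numpy as np

def walking_directions(centroids):
    """Per-frame walking direction (degrees, in the ground plane) from the
    subject's centroid trajectory, via finite differences."""
    v = np.gradient(centroids, axis=0)           # per-frame velocity estimate
    return np.degrees(np.arctan2(v[:, 1], v[:, 0]))

# A subject walking along a quarter circle: the direction changes every
# frame, which is exactly the situation the adaptive synthesis addresses.
t = np.linspace(0, np.pi / 2, 10)
path = np.stack([5 * np.cos(t), 5 * np.sin(t)], axis=1)
dirs = walking_directions(path)
print(dirs[0] > 90, dirs[-1] > 150)  # True True: rotates from ~95 toward 180
```

Each frame's estimated direction then indexes into the 4D database to synthesize the matching virtual view.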
medical image computing and computer assisted intervention | 2008
Ken'ichi Morooka; Xian Chen; Ryo Kurazume; Seiichi Uchida; Kenji Hara; Yumi Iwashita; Makoto Hashizume
This paper presents a new method for simulating the deformation of organ models using a neural network. The proposed method is based on the idea, proposed by Chen et al., that a deformed model can be estimated as a superposition of basic deformation modes. The neural network learns the relationship between external forces and the models deformed by those forces. Experimental results show that the trained network achieves real-time simulation while maintaining acceptable accuracy compared with nonlinear FEM computation.
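The mode-superposition idea means the network only has to map external forces to a small vector of mode coefficients, from which the full deformation field is reconstructed. A sketch with a linear least-squares map standing in for the neural network, and a synthetic linear relation standing in for the FEM training data:

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_modes = 50, 3
modes = rng.normal(size=(n_modes, n_nodes * 3))   # basic deformation modes

def deform(coeffs):
    """Deformed displacement field as a superposition of basic modes."""
    return coeffs @ modes

# Training pairs (force -> mode coefficients), here from a synthetic linear
# relation; the paper instead trains a neural network on FEM results.
F = rng.normal(size=(200, 6))                     # external force descriptors
A_true = rng.normal(size=(6, n_modes))
C = F @ A_true
A_hat, *_ = np.linalg.lstsq(F, C, rcond=None)     # learned force->coeff map

f_new = rng.normal(size=6)
predicted = deform(f_new @ A_hat)                 # cheap real-time estimate
reference = deform(f_new @ A_true)                # "FEM" ground truth
print(np.allclose(predicted, reference))  # True
```

The real problem is nonlinear, which is why the paper uses a neural network rather than the linear map used here; the cheap inference at run time is the same in either case.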
digital identity management | 2007
Ryo Kurazume; Yukihiro Tobata; Yumi Iwashita; Tsutomu Hasegawa
In order to construct three-dimensional shape models of large-scale architectural structures with a laser range finder, a number of range images are normally taken from various viewpoints and aligned using post-processing procedures such as the ICP algorithm. In general, however, before the ICP algorithm is applied, the range images must be roughly registered by a human operator so that the algorithm converges to precise positions. In addition, the range images must overlap one another sufficiently, which requires taking dense images from close viewpoints. On the other hand, if the pose of the laser range finder at each viewpoint can be identified precisely, local range images can be converted into the world coordinate system directly by a simple transformation. This paper proposes a new measurement system for large-scale architectural structures using a group of multiple robots and an on-board laser range finder. Each measurement position is identified by a highly precise positioning technique called the Cooperative Positioning System (CPS), which utilizes the characteristics of the multiple-robot system. The proposed system can construct 3D shapes of large-scale architectural structures without post-processing procedures such as the ICP algorithm and without dense range measurements. Measurement experiments in unknown, large indoor/outdoor environments were successfully carried out using the newly developed system, which consists of three mobile robots named CPS-V.
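The essence of CPS is that a moving robot's position is obtained directly from a stationary robot's known pose plus a relative measurement, rather than accumulated by dead reckoning. A simplified 2D sketch of one positioning step (the actual CPS uses three robots and azimuth/elevation measurements):

```python
import numpy as np

def cps_position(parent_pose, range_m, bearing):
    """Child robot position from the stationary parent's pose plus a measured
    range and bearing (2D simplification of the CPS move-and-observe cycle)."""
    x, y, theta = parent_pose
    return np.array([x + range_m * np.cos(theta + bearing),
                     y + range_m * np.sin(theta + bearing)])

# Parent robot at the origin facing +x; child observed 10 m away, 30 deg left.
child = cps_position((0.0, 0.0, 0.0), 10.0, np.deg2rad(30.0))
print(np.allclose(child, [10 * np.cos(np.pi / 6), 5.0]))  # True
```

The robots then swap roles (the child stops, the parent moves), so each laser scan's viewpoint is known in the world frame and the range images can be merged by a direct coordinate transformation.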