
Publication


Featured research published by Masatsugu Kidode.


ubiquitous computing | 2007

Ubiquitous Memories: a memory externalization system using physical objects

Tatsuyuki Kawamura; Tomohiro Fukuhara; Hideaki Takeda; Yasuyuki Kono; Masatsugu Kidode

In this paper we propose an object-triggered human memory augmentation system named “Ubiquitous Memories” that enables a user to directly associate his/her experience data with physical objects through a “touching” operation. A user conceptually encloses experiences gathered through the sense organs into a physical object simply by touching it, and can disclose and re-experience the experiences accumulated in an object by the same operation. We implemented a prototype system built around a radio frequency identification (RFID) reader, with RFID tags attached to the physical objects. We conducted two experiments. The first confirms that the “encoding specificity principle,” well known in psychology, carries over to the Ubiquitous Memories system. The second clarifies the system’s characteristics by comparing it with other memory externalization strategies. The results show that the Ubiquitous Memories system is effective for supporting the memorization and recollection of contextual events.
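The enclose/disclose association amounts to a mapping from tag IDs to experience records; a minimal sketch (class and method names are hypothetical illustrations, not the authors' implementation):

```python
# Minimal sketch of object-triggered memory association, assuming each
# physical object carries a unique RFID tag ID. All names are illustrative.

class UbiquitousMemoryStore:
    """Associates experience data (e.g. video clip paths) with RFID tag IDs."""

    def __init__(self):
        self._memories = {}  # tag_id -> list of experience records

    def enclose(self, tag_id, experience):
        """'Enclose' an experience into the touched object."""
        self._memories.setdefault(tag_id, []).append(experience)

    def disclose(self, tag_id):
        """'Disclose' all experiences accumulated in the touched object."""
        return self._memories.get(tag_id, [])

store = UbiquitousMemoryStore()
store.enclose("tag-0042", "clip_2007-03-01.mpg")
print(store.disclose("tag-0042"))  # ['clip_2007-03-01.mpg']
```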


computational intelligence in robotics and automation | 2003

Robot navigation in corridor environments using a sketch floor map

Vachirasuk Setalaphruk; Atsushi Ueno; Izuru Kume; Yasuyuki Kono; Masatsugu Kidode

This paper presents a new robot navigation system that can operate on a sketch floor map provided by a user. The sketch map resembles the floor plans displayed at building entrances and contains neither accurate metric information nor details such as obstacles. The system enables a user to give navigational instructions to a robot by interactively providing a floor map and pointing out goal positions on it. Since metric information is unavailable, navigation relies on an augmented topological map that describes the structure of the corridors extracted from the given floor map. Multiple hypotheses of the robot's location are maintained and updated during navigation to cope with sensor aliasing and landmark-matching failures caused by factors such as unknown obstacles inside the corridors.
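The multiple-hypothesis idea can be illustrated with a toy update rule on a topological map (purely a sketch under assumed node names and an assumed observation model, not the paper's system):

```python
# Toy multi-hypothesis tracker on a topological corridor map.
# Each hypothesis is a candidate node; an observation keeps only the
# hypotheses whose expected local structure matches what the robot senses.

corridor_map = {
    "A": {"junction": "T"},   # node -> expected local corridor structure
    "B": {"junction": "L"},
    "C": {"junction": "T"},
}

def update_hypotheses(hypotheses, observed_junction):
    """Discard hypotheses inconsistent with the observed junction type."""
    return [h for h in hypotheses
            if corridor_map[h]["junction"] == observed_junction]

hyps = ["A", "B", "C"]        # initially ambiguous (sensor aliasing)
hyps = update_hypotheses(hyps, "T")
print(hyps)  # ['A', 'C'] -- still ambiguous between the two T-junctions
```

Keeping several hypotheses alive, rather than committing to one, is what lets such a system survive landmark-matching failures until later observations disambiguate.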


international conference on computer vision | 2009

Complex volume and pose tracking with probabilistic dynamical models and visual hull constraints

Norimichi Ukita; Michiro Hirai; Masatsugu Kidode

We propose a method for estimating the pose of a human body using its approximate 3D volume (visual hull) obtained in real time from synchronized videos. Our method can cope with loose-fitting clothing, which hides the human body and produces non-rigid motions and critical reconstruction errors, as well as with tight-fitting clothing. To follow shape variations robustly against erratic motions and the ambiguity between a reconstructed body shape and its pose, a probabilistic dynamical model of human volumes is learned from training temporal volumes refined by error correction. The dynamical model of a body pose (joint angles) is also learned with its corresponding volume. Pose estimation is realized by comparing the volume model with an input visual hull and regressing the pose from the pose model. In our method, this is improved by a double volume comparison: (1) comparison in a low-dimensional latent space with probabilistic volume models, and (2) comparison in the observation volume space using geometric constraints between a real volume and a visual hull. Comparative experiments demonstrate that our method is both effective and faster than existing methods.


intelligent user interfaces | 2004

Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera

Norimichi Ukita; Masatsugu Kidode

We propose the Wearable Virtual Tablet (WVT), with which a user can draw a locus on a common object with a plane surface (e.g., a notebook or a magazine) using a fingertip. Our previous WVT [1], however, could not work on a plane surface with complicated texture patterns: since the WVT employs an active-infrared camera and the reflected infrared rays vary depending on the patterns on the surface, it is difficult to estimate the motions of a fingertip and a plane surface from the observed infrared image. In this paper, we propose a method to detect and track these motions without interference from colored patterns on the surface. (1) To find the region of a plane object in the observed image, the four edge lines that compose a rectangular object are easily extracted by exploiting the properties of an active-infrared camera. (2) To precisely determine the position of a fingertip, we utilize a simple finger model that corresponds to a finger edge independently of its posture. (3) The system distinguishes whether or not a fingertip touches the plane object by analyzing image intensities in the edge region of the fingertip.


international conference on pattern recognition | 2006

3D Scene Reconstruction from Reflection Images in a Spherical Mirror

Masayuki Kanbara; Norimichi Ukita; Masatsugu Kidode; Naokazu Yokoya

This paper proposes a method for reconstructing a 3D scene structure from the images reflected in a spherical mirror. In our method, the mirror is moved freely within the field of view of a camera in order to observe the surrounding scene virtually from multiple viewpoints. This observation scheme therefore yields wide-angle multi-viewpoint images of a wide area. In addition, the following characteristics of the observation enable multi-view stereo with simple calibration of the geometric configuration between the mirror and the camera: (1) the distance and direction from the camera to the mirror can be estimated directly from the position and size of the mirror in the captured image, and (2) the directions of detected points from each position of the moving mirror can also be estimated based on reflection on a spherical surface. Experimental results show the effectiveness of our 3D reconstruction method.
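Property (1) follows from pinhole projection of a sphere: the apparent radius shrinks in proportion to distance. A simplified sketch, ignoring the slight ellipticity of a sphere's projected contour (all values below are illustrative, not from the paper):

```python
# Simplified sketch: distance to a spherical mirror from its image size,
# assuming a pinhole camera and a small off-axis angle.

def sphere_distance(focal_px, sphere_radius_m, image_radius_px):
    """Distance from camera to sphere center: d ~= f * R / r."""
    return focal_px * sphere_radius_m / image_radius_px

# Example: focal length f = 800 px, real radius R = 0.05 m,
# apparent radius in the image r = 20 px
d = sphere_distance(800.0, 0.05, 20.0)
print(d)  # 2.0 (meters)
```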


international conference on computer graphics and interactive techniques | 2007

GPU-based shape from silhouettes

Sofiane Yous; Hamid Laga; Masatsugu Kidode; Kunihiro Chihara

In this paper, we present a new method for surface-based shape reconstruction from a set of silhouette images. We project the viewing cones from all viewpoints into 3D space and compute the intersections that represent the vertices of the Visual Hull (VH). We propose a method for fast traversal of the layers of the projected cones to retrieve the viewing edges that lie on the surface of the VH. By exploiting the power of Graphics Processing Units (GPUs), the proposed method achieves real-time full reconstruction of the VH rather than merely rendering a novel view of it. Experiments on several data sets, including real data, demonstrate the efficiency of the method for real-time visual hull reconstruction.
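As a conceptual baseline, the shape-from-silhouette membership test can be sketched as naive CPU voxel carving (this is not the paper's surface-based GPU method; the single-camera setup below is a contrived illustration):

```python
import numpy as np

def in_visual_hull(point, cameras, silhouettes):
    """A 3D point lies in the visual hull iff its projection falls inside
    the silhouette mask of every camera (naive CPU check)."""
    for P, sil in zip(cameras, silhouettes):
        u, v, w = P @ np.append(point, 1.0)  # 3x4 projection matrix
        x, y = int(round(u / w)), int(round(v / w))
        if not (0 <= y < sil.shape[0] and 0 <= x < sil.shape[1] and sil[y, x]):
            return False
    return True

# Toy setup: one camera whose projection ignores depth; a 4x4 silhouette
# with a single foreground pixel at (row 2, col 2).
P = np.array([[1.0, 0, 0, 2], [0, 1.0, 0, 2], [0, 0, 0, 1]])  # w is always 1
sil = np.zeros((4, 4), dtype=bool)
sil[2, 2] = True
print(in_visual_hull(np.array([0.0, 0.0, 5.0]), [P], [sil]))  # True
```

The paper's contribution replaces this per-voxel test with a traversal of the projected cone layers on the GPU, recovering the VH surface directly.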


international conference on image analysis and processing | 2003

Estimation of 3D gazed position using view lines

Ikuhisa Mitsugami; Norimichi Ukita; Masatsugu Kidode

We propose a new wearable system that can estimate the 3D position of a gazed point by measuring multiple binocular view lines. In principle, 3D measurement is possible by triangulating the binocular view lines. However, it is difficult to measure these lines accurately with an eye-tracking device, because of errors caused by (1) the difficulty of calibrating the device and (2) the limitation that a human cannot gaze very accurately at a distant point. Concerning (1), the accuracy of calibration can be improved by considering the optical properties of the camera in the device. To solve (2), we propose a stochastic algorithm that determines a gazed 3D position by integrating the view lines observed at multiple head positions. We validated the effectiveness of the proposed algorithm experimentally.
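The triangulation of two (generally skew) view lines reduces to finding the midpoint of their closest-approach segment; a self-contained sketch with illustrative eye positions:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1+t*d1 and p2+s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b                    # lines assumed not parallel
    t = (b * (d2 @ w) - (d1 @ w)) / denom  # parameter along line 1
    s = ((d2 @ w) - b * (d1 @ w)) / denom  # parameter along line 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Two eyes 6 cm apart, both gazing at a point ~1 m straight ahead
gaze = triangulate(np.array([-0.03, 0.0, 0.0]), np.array([ 0.03, 0.0, 1.0]),
                   np.array([ 0.03, 0.0, 0.0]), np.array([-0.03, 0.0, 1.0]))
print(gaze)  # ~ [0, 0, 1]
```

With noisy view lines the closest-approach midpoint jitters badly for distant targets, which is precisely why the paper integrates observations from multiple head positions stochastically instead of trusting a single triangulation.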


human-computer interaction with mobile devices and services | 2003

I’m Here!: A Wearable Object Remembrance Support System

Takahiro Ueoka; Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode

In this paper we propose a wearable vision interface system named “I’m Here!” to support a user’s remembrance of object locations in everyday life. The system enables users to retrieve information from a video database that records the latest scenes of target objects held by the user, observed from the user’s viewpoint. We propose an object recognition method to associate the video database with the names of the objects observed in the video. Offline experiments demonstrate that the system recognizes the objects accurately enough to be useful.


Advanced Robotics | 2013

Reinforcement learning of a motor skill for wearing a T-shirt using topology coordinates

Takamitsu Matsubara; Daisuke Shinohara; Masatsugu Kidode

This article focuses on learning motor skills for anthropomorphic robots that must interact with non-rigid materials to perform tasks such as wearing clothes, turning socks inside out, and applying bandages. We propose a novel reinforcement learning framework for learning motor skills that involve non-rigid materials. Our framework focuses on the topological relationship between the configuration of the robot and the non-rigid material, based on the consideration that most details of the material (e.g. wrinkles) are unimportant for performing the motor tasks. This focus allows us to define task performance and provide reward signals based on a low-dimensional variable, i.e. topology coordinates, in a real environment using reliable sensors. We constructed an experimental setting with an anthropomorphic dual-arm robot and a tailor-made T-shirt. To demonstrate the feasibility of our framework, the robot performed a T-shirt wearing task whose goal was to put both of its arms into the corresponding sleeves; it acquired the sequential movements needed to do so.


Proceedings of SPIE, the International Society for Optical Engineering | 2008

A simple and robust method to screen cataracts using specular reflection appearance

Retno Supriyanti; Hitoshi Habe; Masatsugu Kidode; Satoru Nagata

The high prevalence of cataracts remains a serious public health problem and a leading cause of blindness, especially in developing countries with limited health facilities. In this paper we propose a new screening method for cataract diagnosis using easy-to-use, low-cost imaging equipment such as commercially available digital cameras. The difficulty with this sort of equipment is that the quality of the observed images is not sufficiently controlled; illumination, for example, is uncontrolled. A sign of cataracts is a whitish color in the pupil, which is usually black, but it is difficult to automatically analyze color information under uncontrolled illumination conditions. To cope with this problem, we analyze specular reflection in the pupil region. When illumination light hits the pupil, it produces a specular reflection on the front surface of the lens, and light passing through the lens may be reflected again at its rear surface. A specular reflection always appears brighter than the surrounding area and is independent of the illumination condition, so this characteristic enables us to screen for serious cataracts robustly by analyzing the reflections observed in an eye image. In this paper, we demonstrate the validity of our method through theoretical discussion and experimental results. By following the simple guidelines presented here, anyone should be able to screen for cataracts.
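The brightness cue can be illustrated with a toy thresholding rule on a grayscale pupil patch (the margin value and the synthetic image are arbitrary assumptions for illustration, not the authors' calibrated criterion):

```python
import numpy as np

def specular_mask(pupil_gray, margin=60):
    """Pixels much brighter than the patch's median are treated as specular."""
    return pupil_gray > (np.median(pupil_gray) + margin)

# Toy 'pupil' patch: uniformly dark pixels with one bright specular spot
pupil = np.full((8, 8), 20, dtype=np.uint8)
pupil[3:5, 3:5] = 250
mask = specular_mask(pupil)
print(mask.sum())  # 4 bright pixels detected
```

Using a median-relative threshold rather than a fixed one reflects the abstract's point: the specular highlight is bright relative to its surroundings regardless of the overall illumination level.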

Collaboration


Dive into Masatsugu Kidode's collaborations.

Top Co-Authors

Yasuyuki Kono
Nara Institute of Science and Technology

Norimichi Ukita
Nara Institute of Science and Technology

Tatsuyuki Kawamura
Nara Institute of Science and Technology

Hitoshi Habe
Nara Institute of Science and Technology

Takahiro Ueoka
Nara Institute of Science and Technology

Ikuhisa Mitsugami
Nara Institute of Science and Technology

Sofiane Yous
Nara Institute of Science and Technology

Akihiro Kobayashi
Nara Institute of Science and Technology

Hideaki Takeda
National Institute of Informatics

Takamitsu Matsubara
Nara Institute of Science and Technology