Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Masamitsu Kurisu is active.

Publications


Featured research published by Masamitsu Kurisu.


Intelligent Robots and Systems | 2003

Constructing a 3-D map of rubble by teleoperated mobile robots with a motion canceling camera system

Yasuyoshi Yokokohji; Masamitsu Kurisu; S. Takao; Y. Kudo; K. Hayashi; Tsuneo Yoshikawa

To search for and rescue victims trapped in rubble effectively, a 3-D map of the rubble is required. As part of the national rescue robot project, we are investigating a method for constructing a 3-D map of rubble with teleoperated mobile robots. We are also planning to build an intuitive user interface for teleoperating the robots and navigating a virtualized rubble model built from the obtained 3-D map. In this paper, some preliminary research results are introduced. We carried out design studies of laser range finders that can be mounted on a mobile robot to acquire range data of the surrounding rubble. We then formulated a 3-D SLAM (Simultaneous Localization and Map Building) algorithm and conducted simulation studies. Lastly, we proposed a novel motion-canceling camera system and confirmed its validity experimentally.
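
The abstract does not spell out the 3-D SLAM formulation; as generic context only, the following is a minimal 2D EKF localization sketch showing the predict/update structure that SLAM-style estimators build on. The unicycle motion model, the range-bearing observation of a known landmark, and all names here are illustrative assumptions, not the authors' formulation.

```python
# Minimal 2D EKF predict/update sketch (illustrative only; the paper's 3D SLAM
# formulation is not reproduced here). State: robot pose (x, y, theta).
import numpy as np

def predict(mu, Sigma, v, w, dt, Q):
    """Propagate the pose with a unicycle motion model; Q is process noise."""
    x, y, th = mu
    mu_pred = np.array([x + v * dt * np.cos(th),
                        y + v * dt * np.sin(th),
                        th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return mu_pred, F @ Sigma @ F.T + Q

def update_range_bearing(mu, Sigma, z, landmark, R):
    """Correct the pose with a range-bearing observation of a known landmark."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing error
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    return mu + K @ innov, (np.eye(3) - K @ H) @ Sigma

# Example usage with assumed noise levels and landmark position
mu, Sigma = np.zeros(3), np.eye(3) * 0.01
mu, Sigma = predict(mu, Sigma, v=0.5, w=0.1, dt=0.1, Q=np.eye(3) * 1e-4)
mu, Sigma = update_range_bearing(mu, Sigma, np.array([2.0, 0.3]),
                                 landmark=(2.0, 0.5), R=np.eye(2) * 1e-2)
```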


Intelligent Robots and Systems | 2007

Calibration of laser range finder with a genetic algorithm

Masamitsu Kurisu; Hiroki Muroi; Yasuyoshi Yokokohji

To search for and rescue victims trapped in rubble effectively, a 3D map of the rubble is required. As part of the national rescue robot project, we are investigating a method for constructing a 3D map of rubble with teleoperated mobile robots. In previous work, we developed the second prototype of a laser range finder for rubble measurement, consisting of a ring laser beam module and two omni-vision cameras. A third prototype, more compact than the second, has now been developed. Our range finder can measure the omnidirectional distance around itself in a single shot while capturing an omnidirectional image at the same time. The measurement accuracy of the sensor depends on the adjustment of several parameters. Until now these parameters were adjusted manually, which requires considerable time and effort, and manual adjustment makes it difficult to achieve fine measurement accuracy. We therefore developed a calibration system that adjusts the parameters using a genetic algorithm. In this paper, the third prototype laser range finder and the calibration system are described, along with the calibration method for the range finder parameters. The calibration system makes it possible to adjust the parameters easily, and calibration results obtained with the constructed system demonstrate its effectiveness.
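
The paper's calibration procedure is not given in detail in the abstract; the sketch below only illustrates the general idea of tuning sensor parameters with a genetic algorithm. The `residual` objective is a hypothetical placeholder for the range finder's measurement error against a known reference target, and the GA operators (blend crossover, Gaussian mutation, elitism) are assumptions, not the authors' implementation.

```python
# Illustrative GA-based parameter search (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def residual(params):
    # Hypothetical objective: squared error of a synthetic sensor model
    # against assumed "true" parameters; a real system would compare the
    # range finder's output with a reference target of known geometry.
    target = np.array([0.10, 1.50, 0.05])
    return float(np.sum((params - target) ** 2))

def calibrate_ga(dim=3, pop_size=40, gens=200, bounds=(-2.0, 2.0),
                 mut_sigma=0.05, elite=2):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([residual(p) for p in pop])
        order = np.argsort(fit)                    # lower residual is better
        parents = pop[order[:pop_size // 2]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=dim)              # blend crossover
            child = w * a + (1.0 - w) * b
            child += rng.normal(0.0, mut_sigma, size=dim)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([pop[order[:elite]], children])    # keep the elite
    fit = np.array([residual(p) for p in pop])
    return pop[np.argmin(fit)]

print(calibrate_ga())  # converges toward the assumed target parameters
```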


International Conference on Mechatronics and Automation | 2005

Development of a laser range finder for 3D map-building in rubble

Masamitsu Kurisu; Yasuyoshi Yokokohji; Y. Oosato

To search for and rescue victims trapped in rubble effectively, a 3D map of the rubble is required. As part of the national rescue robot project, we are investigating a method for constructing a 3D map of rubble with teleoperated mobile robots. In previous work, we developed the first prototype of a laser range finder for rubble measurement, consisting of a ring laser beam module and an omnivision camera. However, the first prototype can take measurements only in a dark place, because it is difficult to extract the reflected laser image from pictures captured in a bright place. On the other hand, images with clear brightness contrast are required to extract the environmental features used for robot position estimation in the SLAM framework, and texture images captured in a bright place are required for constructing the virtualized rubble. In this paper, we describe the second prototype laser range finder, developed to resolve this dilemma. Although its concept is similar to that of the first prototype, it has an infrared laser module and two cameras for capturing the reflected laser image and the texture image separately, and a hot mirror is used to separate the infrared laser light from visible light. We evaluate the measurement accuracy of the second prototype based on the measurement error analysis described in previous work. Using the second laser range finder, 3D maps of rubble were actually built with reasonable accuracy.


Intelligent Robots and Systems | 2003

Development of a standard robotic dummy for the purpose of evaluating rescue equipment and skill

Yasuhiro Masutani; Koichi Osuka; Masamitsu Kurisu; Tomoharu Doi; Tadahiro Kaneda; Xin-Zhi Zheng; Hiroshi Sugimoto; Teruaki Azuma

We propose the concept of, and need for, a dummy (human model) that can simulate a disaster victim for the purpose of evaluating rescue equipment, including robots, and the rescue skills of humans and robots. By contrasting it with conventional dummies in various fields, the concrete requirements are clarified, and the plan for a research and development project based on this concept is described. In the latter part of this paper, the first prototype under development is introduced. It is based on a human model (doll) used for training in bathing assistance, and is equipped with 8-channel telemetric sensors, a wireless camera, a voice recorder, a heater vest, and a carbon dioxide gas tank.


International Conference on Mechatronics and Automation | 2007

Development of a Laser Range Finder for 3D Map-Building in Rubble; Installation in a Rescue Robot

Masamitsu Kurisu; Hiroki Muroi; Yasuyoshi Yokokohji; Hiroyuki Kuwahara

To search for and rescue victims trapped in rubble effectively, a 3D map of the rubble is required. As part of the national rescue robot project, we are investigating a method for constructing a 3D map of rubble with teleoperated mobile robots. In this paper, we develop a laser range finder for 3D map-building in rubble. The range finder consists of a ring laser beam module and an omnivision camera. The ring laser beam is generated by a conical mirror and is radiated toward the interior wall of the rubble around the mobile robot on which the laser range finder is mounted. The omnivision camera with a hyperbolic mirror captures the image of the ring laser reflected on the rubble, and cross-section range data are obtained based on the triangulation principle. By continuing this measurement as the mobile robot moves through the rubble, a 3D map is obtained. We constructed a geometric model of the laser range finder for error analysis and obtained optimal dimensions for the laser range finder. Based on this analysis, we prototyped a range finder. Experimental results show that the actual measurement errors match the theoretical values well. Using the prototyped laser range finder, a 3D map of rubble was actually built with reasonable accuracy.
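
As a rough illustration of the triangulation step, the sketch below intersects the camera viewing ray with the plane of the ring laser. It assumes a single effective viewpoint for the omnidirectional camera and a horizontal laser plane offset by a fixed baseline; the authors' actual sensor model (conical and hyperbolic mirrors with calibrated geometry) is more involved.

```python
# Simplified triangulation sketch for a ring-laser + omnidirectional camera
# (illustrative geometry, not the authors' exact sensor model).
# Assumptions: the camera has a single effective viewpoint at the origin and
# the ring laser lies in a horizontal plane at height `baseline` above it.
import math

def triangulate(azimuth_rad, elevation_rad, baseline):
    """Return the 3D point where the viewing ray meets the laser plane z = baseline."""
    if abs(math.tan(elevation_rad)) < 1e-9:
        raise ValueError("ray parallel to the laser plane; no intersection")
    radial = baseline / math.tan(elevation_rad)   # horizontal distance to the wall
    return (radial * math.cos(azimuth_rad),
            radial * math.sin(azimuth_rad),
            baseline)

# Example: a reflection seen 10 degrees above the horizon with a 0.2 m baseline
print(triangulate(math.radians(45.0), math.radians(10.0), 0.2))
```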


Advanced Robotics | 2005

Development of a laser range finder for three-dimensional map building in rubble

Masamitsu Kurisu; Yasuyoshi Yokokohji; Yusuke Shiokawa; Takayuki Samejima

In order to search and rescue victims in rubble effectively, a three-dimensional (3D) map of the rubble is required. As a part of the national project on rescue robot systems, we are investigating a method for constructing a 3D map of rubble by teleoperated mobile robots. In this paper, we developed a laser range finder for 3D map building in rubble. The developed range finder consists of a ring laser beam module and an omnivision camera. The ring laser beam is generated by using a conical mirror and it is radiated toward the interior wall of the rubble around a mobile robot on which the laser range finder is mounted. The omnivision camera with a hyperbolic mirror can capture the reflected image of the ring laser on the rubble. Based on the triangulation principle, cross-section range data are obtained. Continuing this measurement as the mobile robot moves inside the rubble, a 3D map is obtained. We constructed a geometric model of the laser range finder for error analysis and obtained an optimal dimension of the laser range finder. Based on this analysis, we actually prototyped a range finder. Experimental results show that the actual measurement errors are well matched to the theoretical values. Using the prototyped laser range finder, a 3D map of rubble was actually built with reasonable accuracy.


23rd International Symposium on Automation and Robotics in Construction | 2006

Tracing Control for a Tracked Vehicle Based on a Virtual Wheeled Mobile Robot

Masamitsu Kurisu; Kazutaka Takahashi; Toyohiro Konishi; Shigeru Sarata

Although a tracked vehicle enables stable movement on rough terrain, no fixed kinematic model such as that of a wheeled mobile robot exists for its rotating motion, so it is difficult to realize autonomous driving control of a tracked vehicle. In this paper, a new tracking control method for a tracked vehicle is described. The proposed method is built on a virtual wheeled mobile robot: an instantaneous motion of the vehicle with slip can be regarded as the motion of a mobile robot with independently driven wheels, and the virtual mobile robot model is derived from the kinematic constraints. The virtual desired trajectory for the virtual robot is obtained from the given desired trajectory for the original tracked vehicle under the assumption that the virtual model is fixed. A control rule is then derived by applying a differential feedback control method for wheeled mobile robots, and it guarantees that the virtual mobile robot follows the virtual desired trajectory. We also describe an online prediction method for the virtual robot that allows the fixed-model assumption to be discarded. The paper is organized as follows. In the next section, the kinematic model of the vehicle with slip is discussed: the instantaneous motion of the vehicle with slippage is regarded as the motion of a mobile robot with independently driven wheels, and the virtual wheeled mobile robot is derived from the vehicle motion as the kinematic model. In the third section, the control method for the tracked vehicle with slip is proposed. The kinematic restrictions clarify that the vehicle with slippage can no longer track the given trajectory strictly, so the desired posture of the vehicle is slightly altered so that the position of the vehicle's origin can follow the trajectory. The virtual desired trajectory for the virtual robot is then derived from the altered trajectory under the assumption that the virtual model is fixed, and a control rule for the tracked vehicle is derived by applying a differential feedback control method for wheeled mobile robots; the control method guarantees that the virtual mobile robot follows the virtual desired trajectory. In the fourth section, the prediction method for the virtual robot is described so as to discard the assumption made in the previous section. The parameters describing the virtual mobile robot are the position of its origin with respect to the original vehicle and the tread length between the two virtual wheels. These parameters change when the ground properties or the contact condition between the crawler and the ground changes. However, when the motion of the vehicle shifts from counterclockwise to clockwise rotation, or vice versa, the parameters become discontinuous in this representation. To avoid this, the parameters are projected onto a unit sphere and converted to a polar-coordinate representation; the converted parameters are continuous and linear. The parameters are predicted using a linear time-series model, and the control rule of the third section is applied to the predicted parameters. The final section draws conclusions and presents future work.
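
The paper's control rule is not reproduced in the abstract; as generic context, a widely used kinematic tracking law for a differential-drive (unicycle) model is sketched below. The error transformation into the robot frame and the gains kx, ky, kth are standard textbook choices, not the authors' specific rule for the virtual wheeled robot.

```python
# Kinematic trajectory-tracking law for a differential-drive (unicycle) model,
# shown only as generic context. Gains kx, ky, kth are assumed positive.
import math

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Return (v, w) that drives `pose` = (x, y, th) toward `ref_pose`."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # Tracking error expressed in the robot frame
    ex =  math.cos(th) * (xr - x) + math.sin(th) * (yr - y)
    ey = -math.sin(th) * (xr - x) + math.cos(th) * (yr - y)
    eth = math.atan2(math.sin(thr - th), math.cos(thr - th))  # wrapped heading error
    v = v_ref * math.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * math.sin(eth))
    return v, w
```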


International Conference on Control, Automation and Systems | 2014

Study on Online 3D Environment Construction for Teleoperation

Masamitsu Kurisu; Ryo Kodama

In remote operation of a robot, it is important to convey the situation around the robot to the operator clearly and quickly. This paper presents online 3D environment map construction using a remotely operated robot. The environment data are acquired with only a stereo camera mounted on the robot, and the 3D environment map is constructed incrementally from these data by online processing. Specifically, the map is provided as colored polygon meshes generated from environment point clouds. For online construction of the map, the system has to execute the divided processing modules in parallel in order to distribute the processing load. This paper explains the system architecture for executing the processing modules in parallel and the details of the hardware on which the constructed system is implemented. Implementation of the system on a small mobile robot is also described, and the feasibility of our system is discussed based on experimental results of 3D map construction in a mock environment.
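
To illustrate the idea of splitting the map-building work into modules that run in parallel, here is a minimal staged pipeline connected by queues. The stage functions are hypothetical placeholders standing in for stereo capture, point-cloud generation, and colored mesh construction; the authors' actual architecture and hardware are not reproduced.

```python
# Sketch of a staged pipeline with decoupled workers and bounded queues,
# illustrating how divided processing modules can run in parallel.
import queue
import threading

def run_stage(work, inbox, outbox):
    """Pull items from `inbox`, process them, and push results to `outbox`."""
    while True:
        item = inbox.get()
        if item is None:                 # sentinel: shut the stage down
            if outbox is not None:
                outbox.put(None)
            return
        result = work(item)
        if outbox is not None:
            outbox.put(result)

# Hypothetical stage functions standing in for point-cloud generation and
# colored polygon-mesh construction.
def make_point_cloud(stereo_frame):
    return {"cloud": stereo_frame}

def make_mesh(cloud):
    print("meshed", cloud)
    return cloud

frames_q, clouds_q = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [
    threading.Thread(target=run_stage, args=(make_point_cloud, frames_q, clouds_q)),
    threading.Thread(target=run_stage, args=(make_mesh, clouds_q, None)),
]
for t in threads:
    t.start()
for frame in range(3):                   # stand-in for incoming stereo frames
    frames_q.put(frame)
frames_q.put(None)                       # propagate shutdown through the pipeline
for t in threads:
    t.join()
```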


International Conference on Mechatronics and Automation | 2011

A study on teleoperation system for a hexapod robot — Development of a prototype platform

Masamitsu Kurisu

This paper presents a prototype platform of a teleoperation system for a hexapod robot, developed to investigate operation modes and display information for carrying out various tasks. The hexapod robot has six identical limbs with 4 DOF each, and each limb functions as both a leg and an arm. When such a multi-legged robot is operated remotely, two or more operation modes are required, because the function of each leg has to change according to the task. Since the robot's configuration differs from that of the operator, the information provided on the display device is also important so that the operator can intuitively grasp the robot's situation and the operation mode. The operation system is constructed with three target operations in mind: walking around, lifting an object using two legs, and walking while handling an object. The operation modes implemented for these three operations are introduced, and control methods for the walking mode and the lifting mode are described. In the lifting mode, the posture of the robot is controlled so that it does not overturn when it lifts a box using two legs according to commands from the operator. Simple experiments show the effectiveness of the control method.
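
The posture controller itself is not detailed in the abstract; as related generic context, a common static-stability check for legged robots verifies that the projected center of mass stays inside the support polygon of the stance feet, as sketched below. The polygon ordering, margin, and numeric values are assumptions for illustration only.

```python
# Generic static-stability check for a legged robot: the projected center of
# mass must stay inside the support polygon formed by the stance feet.
# The feet are assumed to be given in counterclockwise order in the ground plane.
def com_inside_support_polygon(com_xy, stance_feet_xy, margin=0.0):
    """Return True if `com_xy` lies inside the polygon with the given margin."""
    n = len(stance_feet_xy)
    for i in range(n):
        ax, ay = stance_feet_xy[i]
        bx, by = stance_feet_xy[(i + 1) % n]
        ex, ey = bx - ax, by - ay
        edge_len = (ex * ex + ey * ey) ** 0.5
        # Signed distance of the CoM from edge a->b (positive = inside for CCW order)
        cross = ex * (com_xy[1] - ay) - ey * (com_xy[0] - ax)
        if cross / edge_len < margin:
            return False
    return True

# Example: four stance feet of a hexapod, CoM slightly off-center (values assumed)
feet = [(0.3, 0.2), (-0.3, 0.2), (-0.3, -0.2), (0.3, -0.2)]
print(com_inside_support_polygon((0.05, 0.0), feet, margin=0.03))
```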


International Conference on Control, Automation and Systems | 2015

Position determination of a popup menu on operation screens of a teleoperation system using a low cost head tracker

Ryosuke Sugai; Masamitsu Kurisu

We develop a multi-purpose cockpit system that enables an operator to control various robots remotely. The system consists of multiple screens, a multi-purpose haptic device, and a touch screen interface. The screens show the state of the robot, the image captured by a camera on the robot, and a virtual 3D environment constructed by the remotely controlled robot. The multi-purpose haptic device is a control stick that enables the operator to control the robot in various operation modes by changing its configuration. Although the operation mode can be changed with the touch screen interface, it is desirable to select the mode from a popup menu that appears on the screen. In this paper, a method for determining the position of the popup menu using a low-cost head tracker is introduced. The operator's fixation point on the screen is estimated using the head tracker. The popup menu appears at the location the operator is looking at and can be operated with switches added to the grip of the haptic device. The usability of the implemented functions is confirmed in simple experiments.
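
As a rough geometric illustration of estimating the fixation point from a head tracker, the sketch below intersects a head-direction ray with the screen plane. The coordinate conventions (screen at z = 0, metric screen size) and the use of head orientation as a proxy for gaze are assumptions, not the authors' estimator.

```python
# Sketch of estimating the on-screen fixation point by intersecting the
# head-direction ray with the screen plane (assumed to be z = 0, with the
# screen origin at its lower-left corner and metric width/height).
import numpy as np

def fixation_on_screen(head_pos, gaze_dir, screen_w, screen_h):
    """Return (x, y) on the screen plane, or None if the ray misses the screen."""
    head_pos, gaze_dir = np.asarray(head_pos, float), np.asarray(gaze_dir, float)
    if abs(gaze_dir[2]) < 1e-9:
        return None                       # ray parallel to the screen plane
    t = -head_pos[2] / gaze_dir[2]        # ray parameter at the plane z = 0
    if t <= 0:
        return None                       # screen is behind the viewer
    hit = head_pos + t * gaze_dir
    if 0.0 <= hit[0] <= screen_w and 0.0 <= hit[1] <= screen_h:
        return hit[0], hit[1]
    return None

# Example: head 0.6 m in front of a 0.5 m x 0.3 m screen, looking slightly left and down
print(fixation_on_screen([0.3, 0.2, 0.6], [-0.1, -0.1, -1.0], 0.5, 0.3))
```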

Collaboration


Dive into Masamitsu Kurisu's collaborations.

Top Co-Authors

Masayuki Okugawa, Aichi Institute of Technology
Tetsuya Kinugasa, Osaka Prefecture University
Ryuto Iwado, Okayama University of Science
Naoki Miyamoto, Okayama University of Science