Youhei Kakiuchi
University of Tokyo
Publications
Featured research published by Youhei Kakiuchi.
intelligent robots and systems | 2010
Youhei Kakiuchi; Ryohei Ueda; Kazuya Kobayashi; Kei Okada; Masayuki Inaba
We propose a strategy for a robot to operate in an environment with movable obstacles using only onboard sensors, with no previous knowledge of the objects in that environment. Movable obstacles are detected using active sensing and a color range sensor, and when an obstacle is moved, the perception of the environment is reconstructed.
international conference on robotics and automation | 1998
Fumio Kanehiro; Ikuo Mizuuchi; Kotaro Koyasako; Youhei Kakiuchi; Masayuki Inaba; Hirochika Inoue
In this paper, a second-generation remote-brained humanoid robot developed for research on whole-body action is presented. Humanoid robots are important as a platform for integrating techniques and algorithms acquired from research on manipulators, legged robots, and related fields, and they raise new problems such as how to acquire, memorize, select, and carry out various motions that use the whole body efficiently. Research on these problems requires a good robot body, with a whole body and sufficient performance for walking and for getting up after falling down, together with a powerful brain that can be evolved through the body. As a solution to these demands, the remote-brained approach was proposed and several humanoid robots were developed. Using these robots, several studies were carried out, for example a brain framework called BeNet and action acquisition using genetic algorithms and neural networks. These used a simple wireless interface between the robot brain and its body, which allowed research to concentrate on high-level problems; however, this interface limited the actions the robot could perform simultaneously. In this paper, the old interface is taken to the next step. The new interface provides multiple actuator control methods that are switched on demand, and an on-body microprocessor network that controls actuators, measures sensors, and interacts with the brain. Finally, a new humanoid robot is developed on this interface.
international conference on robotics and automation | 2012
Shunichi Nozawa; Youhei Kakiuchi; Kei Okada; Masayuki Inaba
Pushing heavy and large objects in a plane requires generating correct operational forces that compensate for unpredictable ground-object friction forces. This is a challenge because the reaction forces from the heavy object can easily cause a humanoid robot to slip at its feet or lose balance and fall down. Although previous research has addressed humanoid robot balancing problems to prevent falling down while pushing an object, there has been little discussion about the problem of avoiding slipping due to the reaction forces from the object. We extend a full-body balancing controller by simultaneously controlling the reaction forces of both hands using dual-arm force control. The main contribution of this paper is a method to calculate dual-arm reference forces considering the moments around the vertical axis of the humanoid robot and objects. This method involves estimating friction forces based on force measurements and controlling reaction forces to follow the reference forces. We show experimental results on the HRP-2 humanoid robot pushing a 90[kg] wheelchair.
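The central idea, reference forces for two hands chosen so that the moment about the robot's vertical axis is balanced, can be illustrated with a small statics sketch. The function name, the geometry, and the single-axis formulation below are illustrative assumptions, not the authors' actual controller:

```python
def dual_arm_reference_forces(f_total, m_z, y_left, y_right):
    """Split a desired total push force f_total (along +x) between two
    hands at lateral offsets y_left and y_right (along y) so that the
    resulting moment about the vertical axis equals m_z.

    Statics:  f_l + f_r                   = f_total
              -(y_left*f_l + y_right*f_r) = m_z
    """
    if y_left == y_right:
        raise ValueError("hands must be laterally separated")
    f_l = (-m_z - y_right * f_total) / (y_left - y_right)
    f_r = f_total - f_l
    return f_l, f_r

# Symmetric hand placement and no desired yaw moment: the load splits evenly.
f_l, f_r = dual_arm_reference_forces(100.0, 0.0, 0.2, -0.2)
```

A nonzero `m_z` (e.g. to counter an estimated friction moment from the object) simply shifts load from one hand to the other while keeping the total push force unchanged.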
intelligent robots and systems | 2012
Shunichi Nozawa; Iori Kumagai; Youhei Kakiuchi; Kei Okada; Masayuki Inaba
Manipulation of structured objects connected to the environment by a kinematic chain involves two problems: (a) the objects have movable directions and unmovable directions, and an undesired reaction force in the unmovable directions prevents a robot from manipulating them successfully; (b) the reaction forces from the objects can fluctuate during manipulation. Related work has enabled robots to manipulate such objects by integrating position control in the movable directions and force control in the unmovable directions at the hands. In the case of a humanoid robot, however, excessive reaction forces in the movable directions can cause the robot to slip or fall down. In this paper, we propose a control system that regulates the reaction forces at the hands and successively updates the reference forces based on the measured reaction forces. For problem (a), we apply force control in both the movable and unmovable directions in order to maintain full-body balance while achieving the manipulation. For problem (b), updating the reference forces enables the humanoid robot to adapt to fluctuations in the reaction forces. We show experimental results on the robot manipulating four doors and a drawer.
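One plausible reading of "successively updating reference forces based on reaction forces" is a first-order update that moves the reference a small step toward each new measurement; this is a sketch under that assumption, not the paper's actual update law:

```python
def update_reference_force(f_ref, f_measured, alpha=0.1):
    """First-order update: move the reference force a fraction alpha
    toward the latest measured reaction force, so the controller adapts
    to fluctuating reaction forces instead of fighting them."""
    return f_ref + alpha * (f_measured - f_ref)

# A door whose reaction force settles around 30 N: the reference tracks it.
f_ref = 0.0
for f_meas in [10.0, 25.0, 30.0, 30.0, 30.0] * 20:
    f_ref = update_reference_force(f_ref, f_meas)
```

The gain `alpha` trades responsiveness against sensitivity to force-sensor noise; a small value keeps the reference smooth while still following slow changes in the door's resistance.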
intelligent robots and systems | 2010
Shunichi Nozawa; Ryohei Ueda; Youhei Kakiuchi; Kei Okada; Masayuki Inaba
In this paper we propose a new method for a humanoid robot to manipulate heavy objects. In this method, the manipulation strategy is determined based on on-line estimation of the operational force. We integrate these functions with a real-time controller that controls the external force and maintains full-body balance. The key feature of our work is that, because the full-body control system switches the manipulation strategy based on the operational force estimated on-line, it enables a humanoid robot to manipulate heavy objects as well as light ones. The effectiveness of the whole system is confirmed in experiments in which a humanoid robot manipulates objects of up to 12[kg] while estimating each object's weight.
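The estimate-then-switch structure can be sketched as follows. The filter, the gravity constant, and the 5 kg threshold are illustrative assumptions; the paper does not specify these values:

```python
G = 9.8  # m/s^2

def estimate_weight(force_samples, alpha=0.2):
    """Low-pass filter vertical force measurements (N) into a weight
    estimate (kg), smoothing out sensor noise during manipulation."""
    est = 0.0
    for f in force_samples:
        est += alpha * (f - est)
    return est / G

def choose_strategy(weight_kg, threshold_kg=5.0):
    """Switch the manipulation strategy based on the estimated weight:
    light objects can be lifted, heavy ones are pushed/dragged."""
    return "lift" if weight_kg < threshold_kg else "push"

# A 12 kg object produces ~117.6 N of vertical force at the hands.
weight = estimate_weight([12.0 * G] * 50)
strategy = choose_strategy(weight)
```

Because the estimate is updated on-line, the strategy can change mid-task if the measured operational force turns out larger or smaller than initially assumed.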
international conference on robotics and automation | 2011
Youhei Kakiuchi; Ryohei Ueda; Kei Okada; Masayuki Inaba
A humanoid robot working in a household environment with people needs to localize obstacles and manipulable objects and continuously update their locations. Achieving such a system requires a robust perception method that can efficiently update a model of the frequently changing environment.
international conference on robotics and automation | 2012
Atsushi Tsuda; Youhei Kakiuchi; Shunichi Nozawa; Ryohei Ueda; Kei Okada; Masayuki Inaba
Humanoid robots working in a household environment need 3D geometric shape models of objects in order to recognize and manage them properly. In this paper, we enable humanoid robots to create such models by themselves through dual-arm re-grasping (Fig. 1). When robots create models by themselves, they must know how and where they can grasp objects, how their hands occlude object surfaces, and when they have seen every surface of an object. In addition, to observe efficiently with fewer failures, it is important to reduce the number of re-grasps. When the shape of an object is unknown, it is of course difficult to obtain a sequence of grasp positions that fulfills these conditions. This problem of determining a sequence of grasp positions can be expressed as a graph search problem. To solve it, we propose a heuristic method for selecting the next grasp position, which can be used for creating object models while 3D shape information is updated on-line. To evaluate it, we compare the re-grasping sequence produced by this method with the optimal sequence obtained by a breadth-first search that uses the full 3D shape information. We also propose an observation system with dual-arm re-grasping that takes these points into account when humanoid robots perform observation in the real world. Finally, we show experimental results of constructing 3D shape models in the real world using the heuristic method and the observation system.
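The grasp-sequence search can be illustrated on a toy model. Here an object is reduced to six faces, and a dual-arm grasp along one axis is assumed to occlude exactly the two faces it holds; the face names, occlusion model, and greedy criterion are illustrative assumptions, not the paper's actual formulation:

```python
from collections import deque

FACES = frozenset(["x+", "x-", "y+", "y-", "z+", "z-"])
# Assumption: a dual-arm grasp along an axis occludes the two faces it holds.
OCCLUDES = {"x": {"x+", "x-"}, "y": {"y+", "y-"}, "z": {"z+", "z-"}}

def visible(grasp):
    return FACES - OCCLUDES[grasp]

def bfs_min_sequence(start):
    """Optimal baseline: fewest grasps whose visible faces cover the object."""
    queue = deque([(start, visible(start), (start,))])
    seen_states = set()
    while queue:
        grasp, observed, seq = queue.popleft()
        if observed == FACES:
            return seq
        if (grasp, observed) in seen_states:
            continue
        seen_states.add((grasp, observed))
        for nxt in OCCLUDES:
            if nxt != grasp:
                queue.append((nxt, observed | visible(nxt), seq + (nxt,)))
    return None

def greedy_sequence(start):
    """Heuristic: re-grasp to whichever grasp reveals the most unseen faces."""
    seq, observed, grasp = [start], visible(start), start
    while observed != FACES:
        grasp = max((g for g in OCCLUDES if g != grasp),
                    key=lambda g: len(visible(g) - observed))
        seq.append(grasp)
        observed |= visible(grasp)
    return tuple(seq)
```

The point of the comparison in the paper is that the greedy selection needs no complete shape model up front, whereas the BFS baseline assumes the full 3D shape is known.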
ieee-ras international conference on humanoid robots | 2016
Yuki Asano; Toyotaka Kozuki; Soichi Ookubo; Masaya Kawamura; Shinsuke Nakashima; T. Katayama; Iori Yanokura; Toshinori Hirose; Kento Kawaharazuka; Shogo Makino; Youhei Kakiuchi; Kei Okada; Masayuki Inaba
We have been developing human-mimetic musculoskeletal humanoids from the viewpoint of a human-inspired design approach. Kengoro is our latest musculoskeletal humanoid, designed to achieve physically interactive actions in the real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and human-like proportions, and we adopted sensor-driver integrated muscle modules for improved muscle control. To demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.
ieee-ras international conference on humanoid robots | 2012
Iori Kumagai; Kazuya Kobayashi; Shunichi Nozawa; Youhei Kakiuchi; Tomoaki Yoshikai; Kei Okada; Masayuki Inaba
Recognizing environmental contact over the whole body of a humanoid robot is very advantageous for working with people in human environments. In tasks with environmental contacts, detecting pushing, shearing, and twisting over the robot's whole body is important as an interface with the environment, so that the robot knows its current state and what to do next. In this paper, we describe a full-body soft tactile sensor suit for a humanoid robot and an algorithm to detect pushing, shearing, and twisting at each sensor unit. The sensors are small multi-axis sensors with a urethane structure, and they can be placed densely on the body of a humanoid robot. We arranged 347 multi-axis soft tactile sensors on a humanoid robot, imitating the human tactile sense, to detect contact states. We then calculate a deformation vector for each multi-axis soft tactile sensor and detect the three contact states using the deformation moment and the average of the deformation vectors over the contact surface formed by the soft tactile sensors. Finally, we confirmed the validity of the full-body tactile suit and the contact-state detector through experiments of sitting on a wheelchair and passing an object between a human and the robot.
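The average-and-moment classification can be sketched on synthetic sensor data. The thresholding rule and the planar-patch geometry below are illustrative assumptions; the paper's actual detector operates on the real sensor suit:

```python
def classify_contact(positions, deformations):
    """Classify a contact patch as 'push', 'shear', or 'twist' from
    per-sensor deformation vectors.

    positions:    (x, y) of each sensor in the contact-surface plane
    deformations: (dx, dy, dz) deformation vector per sensor,
                  with z taken as the surface normal
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    # Average deformation: tangential (x, y) and normal (z) components.
    mx = sum(d[0] for d in deformations) / n
    my = sum(d[1] for d in deformations) / n
    mz = sum(d[2] for d in deformations) / n
    # Deformation moment about the surface normal through the centroid:
    # large when the vectors circulate around the patch (a twist).
    moment = sum((p[0] - cx) * d[1] - (p[1] - cy) * d[0]
                 for p, d in zip(positions, deformations)) / n
    tangential = (mx * mx + my * my) ** 0.5
    if abs(moment) > max(tangential, abs(mz)):
        return "twist"
    return "push" if abs(mz) > tangential else "shear"
```

Intuitively: a push compresses all sensors along the normal, a shear drags them in a common tangential direction, and a twist leaves the average near zero but produces a large circulation (moment) about the patch centroid.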
advanced robotics and its social impacts | 2011
Haseru Chen; Youhei Kakiuchi; Manabu Saito; Kei Okada; Masayuki Inaba
This paper presents a user interface for manipulating structured furniture and electric equipment based on a view-based multi-touch gesture interface and a demonstrational mechanism for action candidates. The contributions of this paper are summarized as follows: 1) we define multi-touch gestures for push, pull, and rotate manipulation by the robot; 2) we propose a demonstrational feedback mechanism for daily-environment manipulation; and 3) in order to obtain the 3D point that corresponds to a user-touched point on the interface, we show a method to estimate 3D points from screen points in the robot's view images. A prototype system has been implemented using the iPad browser; we evaluated it in our office and kitchen environments and show some preliminary results of a usability assessment.
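Mapping a touched screen point to a 3D point can be sketched with the standard pinhole camera model, given a depth value at that pixel. This is a minimal sketch, and the intrinsic parameters used below are typical illustrative values, not the robot's calibration:

```python
def screen_point_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a touched pixel (u, v) with depth (m) at that pixel
    into a 3D point in the camera frame, using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def project_3d_to_screen(x, y, z, fx, fy, cx, cy):
    """Forward pinhole projection, used here to verify the round trip."""
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative VGA intrinsics (focal length 525 px, principal point at center).
p = screen_point_to_3d(400.0, 300.0, 1.5, 525.0, 525.0, 319.5, 239.5)
```

In practice the depth at the touched pixel would come from the robot's range sensor, and the resulting camera-frame point would be transformed into the robot or world frame before being sent to the manipulation planner.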