
Publication


Featured research published by Kotaro Nagahama.


IEEE/SICE International Symposium on System Integration | 2014

Bottom dressing by a life-sized humanoid robot provided failure detection and recovery functions

Kimitoshi Yamazaki; Ryosuke Oya; Kotaro Nagahama; Kei Okada; Masayuki Inaba

This paper describes dressing assistance by an autonomous robot. We especially focus on a dressing action that is particularly problematic for disabled people: pulling a bottom up along the legs. To avoid injuring the subject's legs, the robot should recognize the state of the manipulated clothing. Therefore, while handling the clothing, the robot is supplied with both visual and force sensory information. Based on them, dressing failure is detected and recovery from the failure is planned automatically. The effectiveness of the proposed approach is implemented and validated on a life-sized humanoid robot.


Intelligent Robots and Systems | 2015

Reasoning-based vision recognition for agricultural humanoid robot toward tomato harvesting

Xiangyu Chen; Krishneel Chaudhary; Yoshimaru Tanaka; Kotaro Nagahama; Hiroaki Yaguchi; Kei Okada; Masayuki Inaba

We present a vision cognition framework for a tomato-harvesting humanoid robot based on geometrical and physical reasoning. Inspired by natural human harvesting behaviour, our goal is to build a humanoid robot that picks tomatoes autonomously or with minimal human effort. The proposed vision approach fuses calibrated observation data from two RGB-D sensors installed on the head and the hand of the humanoid. We observe natural human harvesting behaviour and equip our robot with similar grippers so that it follows the same picking process for a specific fruit. In the vision approach, we mainly focus on modelling the fruits on one branch and then estimating the pedicel direction of each fruit. Through point-cloud model segmentation, the primitive shape model of each fruit is obtained, and we exploit the simple fact that crops on one branch should remain stable with respect to gravity and the interaction forces from neighbouring crops. Under this assumption, a probabilistic model is created and the picking order in the branch is assigned from the evaluated geometrical structure. In the experiments, we tested harvesting of real tomatoes on actual branches and evaluated the harvesting success rate.
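The gravity-stability assumption above can be illustrated with a toy picking-order heuristic. This is a minimal pure-Python sketch, not the paper's probabilistic model: the `supports_below` rule and the radius value are assumptions made for illustration, greedily picking the fruit whose removal disturbs the fewest neighbours hanging below it.

```python
import math

def picking_order(fruits, radius=0.06):
    """Toy picking-order heuristic (hypothetical; the paper builds a
    probabilistic model over the branch's geometric structure).
    fruits: list of (x, y, z) centroids in metres, z pointing up.
    Greedily pick the fruit with the fewest remaining neighbours
    hanging below it within `radius` horizontally, since those
    neighbours would be disturbed by its removal; break ties by
    picking the lowest fruit first."""
    order = []
    remaining = set(range(len(fruits)))
    while remaining:
        def supports_below(i):
            xi, yi, zi = fruits[i]
            return sum(1 for j in remaining if j != i
                       and math.hypot(fruits[j][0] - xi, fruits[j][1] - yi) < radius
                       and fruits[j][2] < zi)
        nxt = min(remaining, key=lambda i: (supports_below(i), fruits[i][2]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

For two fruits stacked in one column plus an isolated fruit, the lower fruit of the column is picked first, then the upper one, then the isolated fruit.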


Paladyn: Journal of Behavioral Robotics | 2013

Cooking Behavior with Handling General Cooking Tools based on a System Integration for a Life-sized Humanoid Robot

Yoshiaki Watanabe; Kotaro Nagahama; Kimitoshi Yamazaki; Kei Okada; Masayuki Inaba

This paper describes a system integration for a life-sized robot working in a kitchen. In cooking tasks, various tools and foods are present, and the cooking table may have a reflective surface with blots and scratches; recognition functions should be robust to the noise these conditions produce. In addition, cooking behaviors impose motion sequences that use the robot's whole body. For instance, while cutting a vegetable, the robot has to hold the vegetable with one hand even as the other hand moves a knife for the cutting. Such motion requires considering the full articulation of the robot simultaneously. That is, difficulties arise in both recognition and motion generation. In this paper, we propose recognition functions that detect kitchen tools such as containers and cutting boards. These functions are improved to overcome the influence of reflective surfaces, and a shape model combined with task knowledge is also proposed. We also point out the importance of using the torso joints during dual-arm manipulation; our approach enables the robot to maintain the manipulability of both arms and the viewing field of the head. Based on these components, we introduce an integrated system incorporating the recognition and motion generation modules. The effectiveness of the system was proven through several cooking applications.


International Journal of Advanced Robotic Systems | 2016

Bottom Dressing by a Dual-arm Robot Using a Clothing State Estimation Based on Dynamic Shape Changes

Kimitoshi Yamazaki; Ryosuke Oya; Kotaro Nagahama; Kei Okada; Masayuki Inaba

This paper describes an autonomous robot's method of dressing a subject in clothing. Our target task is to dress a person in the sitting pose. We especially focus on the action whereby a robot automatically pulls a pair of trousers up the subject's legs, an action frequently needed in dressing assistance. To avoid injuring the subject's legs, the robot should be able to recognize the state of the manipulated clothing. Therefore, while handling the clothing, the robot is supplied with both visual and tactile sensory information. A dressing failure is detected through visual sensing of the optical flows extracted from the clothing's movements. The effectiveness of the proposed approach is implemented and validated on a life-sized humanoid robot.
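The optical-flow failure cue described above can be sketched as a simple stall test. This is a minimal illustration, not the paper's detector: the function name, the `stall_ratio` threshold, and the comparison against gripper speed are all assumptions; the idea is that if the clothing's image motion falls well below the gripper's own motion, the garment is likely snagged on the leg.

```python
import math

def dressing_failed(flow_vectors, gripper_speed, stall_ratio=0.3):
    """Toy failure check (hypothetical threshold): compare the mean
    optical-flow magnitude inside the clothing region against the
    gripper's commanded speed; a large gap suggests the trousers have
    stopped moving with the gripper, i.e. a snag.
    flow_vectors: list of (dx, dy) flow samples from the clothing region.
    """
    if not flow_vectors or gripper_speed <= 0:
        return False
    mean_mag = sum(math.hypot(dx, dy) for dx, dy in flow_vectors) / len(flow_vectors)
    return mean_mag < stall_ratio * gripper_speed
```

A real system would of course gate this test on the gripper actually moving and smooth the flow over several frames before declaring failure.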


IEEE-RAS International Conference on Humanoid Robots | 2015

A design of 4-legged semi humanoid robot Aero for disaster response task

Hiroaki Yaguchi; Kazuhiro Sasabuchi; Wesley P. Chan; Kotaro Nagahama; Takuya Saiki; Yasuto Shiigi; Masayuki Inaba

In this paper, we introduce the design of the 4-legged semi-humanoid robot platform Aero. The design concepts of Aero are light weight, simple structure, and robustness, and its objective as a robot platform is safe cooperation with humans. We devised the motor placements by utilizing smart actuators that use ball splines to actuate cross-linked joints, and to avoid crushing objects, the motors are low-power and can reduce their output when overloaded. The software was developed within a very short period via model-based development using the Euslisp robot model. We evaluate this robot platform in a disaster response challenge competition to show its usability.


International Conference on Mechatronics and Automation | 2011

End point tracking for a moving object with several attention regions by composite vision system

Kotaro Nagahama; Tomohiro Nishino; Mitsuharu Kojima; Kimitoshi Yamazaki; Kei Okada; Masayuki Inaba

This paper describes an approach to multi-target tracking for gaze control, used to observe the motions of end points on a moving object. In order to track several moving parts in image streams, three different types of tracker, observing temporal, spatial, and appearance changes, are combined. We also developed a composite vision system on which two wide-angle cameras and two zoom-enabled cameras are mounted. We tested the gaze control system and the head system by observing a human working in a daily environment; the results showed the effectiveness of our approach.
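One simple way to combine several trackers, as the abstract describes, is confidence-weighted fusion of their position estimates. This is a hedged sketch only: the paper's actual combination scheme for the temporal, spatial, and appearance trackers is not given here, and the weighting rule below is an assumption for illustration.

```python
def fuse_trackers(estimates):
    """Fuse position estimates from several trackers by a
    confidence-weighted mean (hypothetical fusion rule).
    estimates: list of ((x, y), confidence) pairs, e.g. one each from
    a temporal, a spatial, and an appearance tracker.
    Returns the fused (x, y), or None if every tracker reports zero
    confidence (target lost)."""
    total = sum(c for _, c in estimates)
    if total == 0:
        return None
    x = sum(p[0] * c for p, c in estimates) / total
    y = sum(p[1] * c for p, c in estimates) / total
    return (x, y)
```

Returning None on total loss gives the gaze controller a clean signal to fall back to the wide-angle cameras for re-detection.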


Intelligent Robots and Systems | 2016

Development of an autonomous tomato harvesting robot with rotational plucking gripper

Hiroaki Yaguchi; Kotaro Nagahama; Takaomi Hasegawa; Masayuki Inaba

In this paper, we present the design and development of an autonomous tomato harvesting robot. The robot has a stereo camera that can measure depth at short range under direct sunlight, and a plucking gripper built on an infinitely rotating joint. We evaluate the robot through harvesting in the tomato robot competition and in a real farm. In the competition, the robot harvested tomatoes from tomato clusters and tomato trees; the harvesting speed was about 80 s/fruit and the success rate was about 60%. In the real farm, we evaluated the robot on tomato trees in a semi-outdoor environment to show its effectiveness and robustness under direct sunlight. Based on the results with real tomatoes, we improved the robot motion, and the harvesting speed finally reached 23 s/fruit. However, the gripper may grasp multiple fruits in a very cluttered cluster, and the calyx may break when the stem angle deviates strongly from the rotation axis. To avoid these situations, grasp-state estimation for the gripper and simultaneous recognition of fruit and stem positions are the next problems for improving the harvesting success rate.


IEEE-RAS International Conference on Humanoid Robots | 2012

Hierarchical estimation of multiple objects from proximity relationships arising from tool manipulation

Kotaro Nagahama; Kimitoshi Yamazaki; Kei Okada; Masayuki Inaba

In this paper, we propose a novel method for a humanoid robot to estimate a tool's function by observing a person using the tool. The method integrates two types of evaluation: the relational hierarchy between a tool and objects, and their accompanying movements. This enables the robot to estimate not only moving or conveying functions but also cutting or stinging functions. To estimate the relational hierarchy, overlapping regions of multiple objects are explicitly tracked based on their viewable regions. We tested this system in basic experiments in which a robot tracked a tool and an object and estimated the functions of the tool in a kitchen environment.
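The two cues the abstract combines, region overlap and accompanying movement, can be illustrated with a toy rule. This is a hypothetical sketch, not the paper's estimator: the overlap threshold, the box representation, and the function labels below are assumptions made for illustration.

```python
def overlap_ratio(a, b):
    """Fraction of box a covered by box b.
    a, b: axis-aligned boxes (x0, y0, x1, y1) in image coordinates."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (w * h) / area_a

def estimate_function(tool_box, obj_box, obj_moved):
    """Toy rule (hypothetical threshold): a tool whose region overlaps
    an object that then moves along with it suggests a moving/conveying
    function; overlap without accompanying movement suggests a
    cutting/stinging function; no overlap means no interaction."""
    if overlap_ratio(tool_box, obj_box) < 0.1:
        return "none"
    return "convey" if obj_moved else "cut_or_sting"
```

The paper tracks the overlapping regions explicitly over time; this sketch reduces that hierarchy to a single-frame overlap test to keep the idea visible.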


International Conference on Advanced Intelligent Mechatronics | 2016

Learning-based object abstraction method from simple instructions for human support robot HSR

Kotaro Nagahama; Hiroaki Yaguchi; Hirohito Hattori; Kiyohiro Sogen; Takashi Yamamoto; Masayuki Inaba

This study proposes a simple remote-controlled daily assistive robot for physically challenged individuals. Specifically, we present a method for target object selection using a single click on a graphical user interface. From this input, the robot automatically estimates the region of the unknown target object in order to plan grasping and fetching it. The challenging task is to correctly estimate the region of the object of interest. The proposed system uses the following framework. First, the robot automatically estimates the object region based on the user input. Second, the user can intervene by interactively drawing and erasing parts of the estimated region, while the system sequentially updates its estimation method based only on the user's corrections. The advantage of this system is that only limited inputs are required from the user, a feature that is valuable for handicapped users. Moreover, we introduce (1) graph cuts over “HyperPixels” with three-dimensional information, which lets the system exploit the rich features around the user-specified region for robust segmentation, (2) interactive correction of the automatically estimated object region while the system learns good graph parameters for the correct estimation, and (3) recall and reuse of the learned parameters based on a database of features around the clicked point.
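The click-then-correct loop described above can be illustrated with a much simpler segmenter. This sketch stands in for the paper's graph-cut estimation, which it does not reproduce: plain 4-connected region growing from the clicked pixel, with user-erased pixels treated as hard background constraints; the function name and tolerance are assumptions.

```python
def grow_region(grid, seed, tol=10, forbidden=frozenset()):
    """Toy stand-in for graph-cut object-region estimation:
    4-connected region growing from the clicked pixel `seed`,
    accepting pixels whose intensity is within `tol` of the seed.
    `forbidden` holds pixels the user has interactively erased,
    which act as hard background constraints.
    grid: 2D list of intensities; seed: (row, col)."""
    h, w = len(grid), len(grid[0])
    base = grid[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or (r, c) in forbidden:
            continue
        if not (0 <= r < h and 0 <= c < w):
            continue
        if abs(grid[r][c] - base) > tol:
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

Re-running the segmenter with the user's erased pixels in `forbidden` mirrors the paper's interactive-correction step, though the real system also re-learns graph parameters from those corrections.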


IEEE/SICE International Symposium on System Integration | 2016

Retrieving unknown objects using robot in-the-loop based interactive segmentation

Krishneel Chaudhary; Chiwun Au; Wesley P. Chan; Kotaro Nagahama; Hiroaki Yaguchi; Kei Okada; Masayuki Inaba

For a robot to operate efficiently in a human-centered environment, it should be able to interact with and learn unknown objects autonomously. Such capabilities enable a robot to enrich its internal knowledge of the environment without human assistance. However, a crucial limitation of robots is their inability to comprehend representations of novel objects without priors; human effort is required to provide the prerequisites for learning novel objects. In this paper, we exploit the visual and manipulative capabilities of a mobile robot to interact with an unknown cluttered scene on a support plane and retrieve objects requested by humans. The object to retrieve may support other unknown objects, which the robot has to identify and carefully remove. The boundaries of objects in the cluttered scene are estimated using 3D geometrical relationships between surface normals. Using this estimate, the robot interacts with the objects through a graspless action (push), and visual changes to the scene are used to revise the initial hypothesis for the final object region estimation. The estimated region is then used for grasping and removing the supported objects in order to retrieve the target object. The presented approach is model-free and requires no prior object knowledge.
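The surface-normal boundary cue mentioned above reduces to an angle test between neighbouring normals. This is a minimal sketch under stated assumptions: the 30-degree threshold and the function name are hypothetical, and a real pipeline would apply this test across neighbours in a point cloud rather than to a single pair.

```python
import math

def is_boundary(n1, n2, angle_deg=30.0):
    """Toy boundary test (threshold is an assumption): neighbouring
    surface normals that diverge by more than `angle_deg` mark a
    likely object boundary in the cluttered scene.
    n1, n2: 3D normal vectors (need not be unit length)."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle > angle_deg
```

Two coplanar points share a normal and fall inside one object hypothesis; a sharp normal change, as at the edge between two stacked boxes, is flagged as a boundary to be verified by the push interaction.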
