Publications


Featured research published by Natsuki Yamanobe.


International Conference on Robotics and Automation | 2012

Pick and place planning for dual-arm manipulators

Kensuke Harada; Torea Foissotte; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Akira Nakamura; Yoshihiro Kawai

This paper proposes a method for planning the pick-and-place motion of an object by dual-arm manipulators. Our planner is composed of an offline phase and an online phase. The offline phase generates a set of regions on the object and the environment surfaces and calculates several parameters needed in the online phase. In the online phase, the planner selects a grasping pose of the robot and a putting posture of the object by searching the regions calculated in the offline phase. With the proposed method, we can also plan the trajectory of the robot and the regrasping strategy of the dual arm. The putting posture of the object can be planned by considering the stability of the object placed on the environment. The effectiveness of the proposed method is confirmed by simulation and experimental results using the dual-arm robot NX-HIRO.
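The offline/online split above lends itself to a small illustration. The following is a minimal Python sketch, not the authors' implementation, of the online step: searching pairs of precomputed grasp and placement regions and returning the first pair that passes placeholder reachability and stability checks (the region fields and thresholds are assumptions).

```python
# Minimal sketch (not the authors' implementation) of the online phase:
# search pairs of precomputed regions and return the first feasible pair.
import itertools

def is_reachable(grasp_region, place_region):
    # Placeholder kinematic check; a real planner would call an IK solver
    # and a collision checker here.
    return abs(grasp_region["height"] - place_region["height"]) < 0.5  # metres

def placement_stable(place_region):
    # Placeholder stability test: the support region must be large enough.
    return place_region["area"] > 0.002  # m^2, assumed threshold

def select_pick_and_place(grasp_regions, place_regions):
    """Return the first (grasp, placement) region pair that is feasible."""
    for g, p in itertools.product(grasp_regions, place_regions):
        if placement_stable(p) and is_reachable(g, p):
            return g, p
    return None

grasp_regions = [{"id": "top-face", "height": 0.75, "area": 0.001}]
place_regions = [{"id": "shelf", "height": 0.90, "area": 0.004},
                 {"id": "floor", "height": 0.00, "area": 0.5}]
print(select_pick_and_place(grasp_regions, place_regions))
```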


Intelligent Robots and Systems | 2010

Picking up an indicated object in a complex environment

Kazuyuki Nagata; Takashi Miyasaka; Dragomir N. Nenchev; Natsuki Yamanobe; Kenichi Maruyama; Satoshi Kawabata; Yoshihiro Kawai

This paper presents a grasping system for picking up an indicated object in a complex real-world environment using a parallel jaw gripper. The proposed grasping scheme comprises three main steps: 1) a user indicates a target object and provides the system with a task instruction on how to grasp it; 2) the system acquires geometric information about the target object and constructs a 3D model of the environment around the target by stereo vision, using the information obtained from the task instruction; and 3) the system finds a grasp point based on grasp evaluation using the acquired information. As an example of the scheme, we examine picking up a cylindrical object by grasping it at the brim. An important and advantageous feature of this scheme is that the user can easily instruct the robot on how to perform the object-picking task through simple clicking operations, and the robot can execute the task without exact models of the target object and the environment being available in advance.


International Conference on Robotics and Automation | 2013

Probabilistic approach for object bin picking approximated by cylinders

Kensuke Harada; Kazuyuki Nagata; Tokuo Tsuji; Natsuki Yamanobe; Akira Nakamura; Yoshihiro Kawai

This paper proposes a bin-picking method that does not assume a precise geometrical model of the objects. We consider the case where the shapes of the objects are not uniform but can each be approximated by a cylinder. From the point cloud of a single object, we extract probabilistic properties of the deviation between the object and a cylinder, and apply these properties to the pick-and-place motion planner for an object stacked on a table. Using the probabilistic properties, we can also realize a contact state in which a finger maintains contact with the target object while avoiding contact with other objects. We further approximate the region occupied by the fingers by a rectangular parallelepiped. The pick-and-place motion is planned by using a set of regions in combination with the probabilistic properties. Finally, the effectiveness of the proposed method is confirmed by numerical examples and experimental results.
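As a rough illustration of the "probabilistic properties" idea, the sketch below fits a cylinder of fixed axis to a synthetic point cloud and reports the mean and standard deviation of the radial deviation. The fixed z-axis, the synthetic data, and the statistics chosen here are assumptions for brevity, not the paper's estimation procedure.

```python
# Illustrative sketch only: simple statistics of how a point cloud deviates
# from a fitted cylinder (axis assumed parallel to z for brevity).
import numpy as np

def cylinder_deviation_stats(points):
    """points: (N, 3) array. Returns (radius, mean deviation, std deviation)."""
    center = points[:, :2].mean(axis=0)           # axis assumed parallel to z
    radial = np.linalg.norm(points[:, :2] - center, axis=1)
    radius = radial.mean()                         # nominal cylinder radius
    deviation = radial - radius                    # per-point departure from the cylinder
    return radius, deviation.mean(), deviation.std()

# Synthetic noisy cylinder: radius 0.03 m with 2 mm surface noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
r = 0.03 + rng.normal(0, 0.002, 500)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), rng.uniform(0, 0.1, 500)])
print(cylinder_deviation_stats(pts))
```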


Robotics and Biomimetics | 2011

Grasp planning for parallel grippers with flexibility on its grasping surface

Kensuke Harada; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Kenichi Maruyama; Akira Nakamura; Yoshihiro Kawai

This paper proposes a method for planning a grasping posture for a parallel gripper attached at the tip of a robot manipulator. In order to robustly grasp several objects with various shapes, we consider a gripper with flexible sheets attached to the finger surfaces. We show that, by constructing a set of triangular meshes from the grasped object's polygon model, we can plan the grasping posture while taking the flexibility of the grasping surface into consideration. We also show that several parameters used in planning the grasping posture can be defined for each mesh set. The effectiveness of the proposed method is verified by numerical examples and experimental results.


Intelligent Robots and Systems | 2005

Optimization of damping control parameters for cycle time reduction in clutch assembly

Natsuki Yamanobe; Hiromitsu Fujii; Yusuke Maeda; Tamio Arai; Atsushi Watanabe; Tetsuaki Kato; Takashi Sato; Kokoro Hatanaka

Parameter tuning of force control is important for successful robotic assembly and for ensuring high-efficiency operations. In this paper, we present a method for designing damping control parameters for general assembly operations by considering the cycle time. In this method, optimal parameters are obtained through iterative simulations of the assembly operation, because it is difficult to estimate the cycle time analytically. We apply the method to clutch assembly, which is a complicated insertion of a splined axis into movable toothed plates, and demonstrate how the operation can be sped up using the obtained parameters.
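The simulation-in-the-loop tuning described above can be pictured with the toy sketch below; simulate_cycle_time is a hypothetical surrogate for the assembly simulator, and the candidate gains and thresholds are assumptions, not the authors' model.

```python
def simulate_cycle_time(damping):
    """Hypothetical surrogate simulator: too little damping makes the
    insertion fail (jamming), while heavier damping slows the motion."""
    if damping < 0.05:
        return float("inf")          # failed insertion
    return 1.0 + 5.0 * damping       # seconds; slower as damping grows

def tune_damping(candidates):
    """Pick the candidate gain with the smallest simulated cycle time."""
    return min(candidates, key=simulate_cycle_time)

candidates = [0.02, 0.05, 0.1, 0.2, 0.4]
best = tune_damping(candidates)
print(f"best damping gain: {best}, cycle time: {simulate_cycle_time(best):.2f} s")
```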


International Journal of Mechatronics and Automation | 2013

Towards snap sensing

Juan Rojas; Kensuke Harada; Hiromu Onda; Natsuki Yamanobe; Eiichi Yoshida; Kazuyuki Nagata; Yoshihiro Kawai

Automating snap assemblies is highly desirable but challenging due to their varied geometrical configurations and elastic components. A key aspect of automating snap assemblies is robot state estimation and corrective motion generation, here defined as snap sensing. While progress is being made, there are as yet no robust systems that allow for snap sensing. To this end, we have integrated a system that consists of a control strategy and a control framework that generalise to cantilever snaps of varying geometrical complexity. We have also integrated a robot state verification method (RCBHT) that encodes FT data to yield high-level intuitive behaviours and perform output verification. Optimisation procedures and Bayesian filtering have been included in the RCBHT to increase robustness and granularity. The system provides belief states for higher-level behaviours, allowing probabilistic state estimation and outcome verification. In this work, preliminary assembly failure characterisation has been conducted and provides insights into assembly failure dynamics. The results, though still in simulation, are promising: the framework has effectively executed cantilever snap assemblies and performed robust robot state estimation with parts of varying complexity on two different robotic systems.
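Since the abstract mentions Bayesian filtering over belief states, here is a minimal discrete Bayes filter over hypothetical assembly outcomes. The state labels, behaviour labels, and observation probabilities are illustrative assumptions, not values from the paper or the RCBHT.

```python
# Discrete Bayes filter over assumed assembly-outcome states.
STATES = ["success", "jammed", "misaligned"]

# P(observed behaviour label | true state) -- assumed values for illustration.
OBS_MODEL = {
    "push-converge": {"success": 0.7, "jammed": 0.1, "misaligned": 0.2},
    "large-force":   {"success": 0.1, "jammed": 0.7, "misaligned": 0.2},
    "side-slip":     {"success": 0.2, "jammed": 0.2, "misaligned": 0.6},
}

def bayes_update(belief, observation):
    """Multiply the prior by the observation likelihood and renormalise."""
    posterior = {s: belief[s] * OBS_MODEL[observation][s] for s in STATES}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}
for obs in ["push-converge", "large-force", "large-force"]:
    belief = bayes_update(belief, obs)
print(belief)   # probability mass concentrates on "jammed"
```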


Robotics and Biomimetics | 2010

Grasp planning for everyday objects based on primitive shape representation for parallel jaw grippers

Natsuki Yamanobe; Kazuyuki Nagata

Grasping various objects is a key function required for service robots. In this paper, seven kinds of shape primitives used to abstract objects to be grasped are proposed for efficient grasp planning. These shape primitives are defined in order to provide an appropriate set of grasping configurations for parallel jaw grippers before planning. Each shape primitive has basic grasping configurations, each of which is parameterized. First, a target object is modeled using one or more shape primitives. Then, possible grasping configurations are pruned based on information about the task environment and the robot hand to be used. After sampling candidate grasps, the most suitable grasping position and orientation is chosen by evaluating and ranking the quality of the candidates. We show an experimental result of applying this shape-primitive-based grasp planning method to a mobile manipulator.
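A compressed sketch of the prune, sample, and rank flow for a single box primitive follows. The primitive definition, gripper stroke, and quality metric are assumptions for illustration, not the paper's exact parameterisation.

```python
# Sketch of prune-then-rank grasp selection for one assumed box primitive.
MAX_OPENING = 0.08  # assumed parallel-jaw gripper stroke in metres

def candidate_grasps(primitive):
    """Enumerate parameterised grasps for a box primitive (illustrative)."""
    w, d, h = primitive["size"]
    return [
        {"approach": "top",  "opening": w},
        {"approach": "top",  "opening": d},
        {"approach": "side", "opening": h},
    ]

def feasible(grasp):
    return grasp["opening"] <= MAX_OPENING   # prune by gripper stroke

def quality(grasp):
    # Assumed metric: prefer grasps that leave the most jaw clearance.
    return MAX_OPENING - grasp["opening"]

box = {"type": "box", "size": (0.06, 0.12, 0.05)}
ranked = sorted(filter(feasible, candidate_grasps(box)), key=quality, reverse=True)
print(ranked)
```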


International Conference on Robotics and Automation | 2014

Stability of Soft-Finger Grasp under Gravity

Kensuke Harada; Tokuo Tsuji; Soichiro Uto; Natsuki Yamanobe; Kazuyuki Nagata; Kosei Kitagaki

We discuss grasp stability under gravity where each finger makes soft-finger contact with an object. By clustering the polygon models of a finger and an object, the contact area between them is obtained as the common area between an object cluster and a finger cluster. Then, by assuming a Winkler elastic foundation, the pressure distribution within the contact area is obtained. Using this pressure distribution, we show that we can judge grasp stability under soft-finger contact. We further define a quality measure of a soft-finger grasp by assuming that a gravitational force is applied to the object but its direction is unknown. To demonstrate the effectiveness of the proposed approach, we show several numerical examples.
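The Winkler elastic foundation assumption means each patch of the contact area behaves as an independent spring, so pressure is stiffness times local compression. The short sketch below computes such a pressure distribution and the resulting normal force; the stiffness, patch size, and compression profile are assumed values, not the paper's.

```python
# Sketch of the Winkler (elastic foundation) pressure model with assumed values.
import numpy as np

K = 1.0e7          # assumed foundation stiffness per unit depth (Pa / m)
PATCH_AREA = 1e-6  # assumed area of each discretised contact patch (m^2)

def winkler_pressure(compression):
    """compression: per-patch deformation depth (m). Returns pressure (Pa)."""
    return K * np.clip(compression, 0.0, None)   # no tension in the foundation

# Synthetic dome-shaped compression profile over a 10x10 patch grid.
x = np.linspace(-1, 1, 10)
xx, yy = np.meshgrid(x, x)
compression = 1e-4 * np.clip(1 - (xx**2 + yy**2), 0, None)

pressure = winkler_pressure(compression)
normal_force = pressure.sum() * PATCH_AREA       # integrate pressure over patches
print(f"total normal force: {normal_force:.3f} N")
```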


Robotics and Autonomous Systems | 2014

Validating an object placement planner for robotic pick-and-place tasks

Kensuke Harada; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Hiromu Onda

This paper proposes an object placement planner for a grasped object during pick-and-place tasks. The proposed planner automatically determines the pose of an object stably placed near a user-assigned point on an environment surface. The proposed method first constructs a polygon model of the surrounding environment, and then clusters the polygon models of both the environment and the object, where each cluster is approximated by a planar region. The placement of the object can be determined by selecting a pair of clusters, one from the object and one from the environment. We further impose several conditions to determine the pose of the object placed on the environment. We show that we can determine the position/orientation of the placed object for several cases, such as hanging a mug cup on a bar. The effectiveness of the proposed approach is confirmed through several numerical examples.
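The cluster-pairing step can be pictured as below: each planar cluster carries a centroid, a normal, and an area, and a pair is accepted when the object face opposes the (upward) environment face near the user-assigned point. The data structures and thresholds are illustrative assumptions, not the paper's implementation.

```python
# Sketch of selecting an object/environment planar-cluster pair for placement.
import numpy as np

def compatible(obj_cluster, env_cluster, user_point, max_dist=0.15):
    """Accept a pair when the faces oppose each other, the environment cluster
    is near the user-assigned point, and it is large enough to support the object."""
    facing = np.dot(obj_cluster["normal"], env_cluster["normal"]) < -0.95
    near = np.linalg.norm(env_cluster["centroid"] - user_point) < max_dist
    big_enough = env_cluster["area"] >= obj_cluster["area"]
    return facing and near and big_enough

obj_clusters = [{"normal": np.array([0, 0, -1.0]), "centroid": np.zeros(3), "area": 0.004}]
env_clusters = [{"normal": np.array([0, 0, 1.0]), "centroid": np.array([0.5, 0.1, 0.7]), "area": 0.02}]
user_point = np.array([0.45, 0.1, 0.7])

pairs = [(o, e) for o in obj_clusters for e in env_clusters if compatible(o, e, user_point)]
print(f"{len(pairs)} candidate placement(s) found")
```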


Intelligent Robots and Systems | 2012

Object placement planner for robotic pick and place tasks

Kensuke Harada; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Hiromu Onda; Takashi Yoshimi; Yoshihiro Kawai

This paper proposes an object placement planner for a grasped object during pick-and-place tasks. The proposed planner automatically determines the pose of an object stably placed near a user-assigned point on an environment surface. The proposed method first constructs a polygon model of the surrounding environment, and then clusters the polygon models of both the environment and the object, where each cluster is approximated by a planar region. The placement of the object can be determined by selecting a pair of clusters, one from the object and one from the environment. We further impose several conditions to determine the pose of the object placed on the environment. We show that we can determine the position/orientation of the placed object for several cases, such as hanging a mug cup on a bar. The effectiveness of the proposed approach is confirmed through several numerical examples.

Collaboration


Dive into Natsuki Yamanobe's collaborations.

Top Co-Authors

Kazuyuki Nagata (National Institute of Advanced Industrial Science and Technology)
Yoshihiro Kawai (National Institute of Advanced Industrial Science and Technology)
Akira Nakamura (National Institute of Advanced Industrial Science and Technology)
Hiromu Onda (National Institute of Advanced Industrial Science and Technology)
Tokuo Tsuji (Systems Research Institute)
Yujin Wakita (National Institute of Advanced Industrial Science and Technology)
Juan Rojas (Sun Yat-sen University)