Network


Latest external collaborations at the country level.

Hotspot


Research topics where Kuniyuki Takahashi is active.

Publication


Featured research published by Kuniyuki Takahashi.


intelligent robots and systems | 2015

Neural network based model for visual-motor integration learning of robot's drawing behavior: Association of a drawing motion from a drawn image

Kazuma Sasaki; Hadi Tjandra; Kuniaki Noda; Kuniyuki Takahashi; Tetsuya Ogata

In this study, we propose a neural-network-based model for learning a robot's drawing sequences in an unsupervised manner. We focus on the ability to learn visual-motor relationships, which can work as a reusable memory for associating a drawing motion with a drawn picture image. Assuming that a humanoid robot can draw a shape on a pen tablet, the proposed model learns drawing sequences, each comprising drawing motions and drawn picture image frames. To learn raw pixel data without any hand-designed features, we utilized a deep neural network for compressing high-dimensional picture images and a continuous-time recurrent neural network for integrating motion and picture images. To confirm the ability of the proposed model, we performed an experiment on learning 15 sequences comprising three types of shapes. The model successfully learned all the sequences and could associate a drawing motion from an untrained picture image with similar success to a trained one. We also show that the proposed model self-organizes its behavior according to the types of shapes.
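
The pipeline couples an image-compressing network with a recurrent network over the compressed features. Below is a minimal PyTorch sketch of that structure; the layer sizes, names, and the plain RNN standing in for the paper's continuous-time RNN are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    """Compresses raw pixel frames to a small feature vector (stand-in
    for the paper's deep network; all sizes are illustrative)."""
    def __init__(self, n_pixels=64 * 64, n_feat=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 500), nn.Tanh(),
            nn.Linear(500, n_feat), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.Linear(n_feat, 500), nn.Tanh(),
            nn.Linear(500, n_pixels), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class VisuoMotorRNN(nn.Module):
    """Predicts the next (image feature, joint angle) pair from the
    current one; a plain RNN stands in for the CTRNN."""
    def __init__(self, n_feat=20, n_motor=8, n_hidden=100):
        super().__init__()
        self.rnn = nn.RNN(n_feat + n_motor, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_feat + n_motor)

    def forward(self, seq):                 # seq: (batch, T, n_feat + n_motor)
        h, _ = self.rnn(seq)
        return self.out(h)                  # prediction of step t+1 at step t

# Training outline: compress each frame, concatenate with motor angles,
# and train the RNN to predict one step ahead (teacher forcing).
ae, rnn = ImageAutoencoder(), VisuoMotorRNN()
frames = torch.rand(15, 50, 64 * 64)        # 15 drawing sequences, 50 frames
motors = torch.rand(15, 50, 8)
with torch.no_grad():
    _, feats = ae(frames)
seq = torch.cat([feats, motors], dim=-1)
pred = rnn(seq[:, :-1])
loss = nn.functional.mse_loss(pred, seq[:, 1:])
```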


Mathematical Problems in Engineering | 2015

Tool-Body Assimilation Model Based on Body Babbling and Neurodynamical System

Kuniyuki Takahashi; Tetsuya Ogata; Hadi Tjandra; Yuki Yamaguchi; Shigeki Sugano

We propose a new method of tool use based on a tool-body assimilation model built on body babbling and a neurodynamical system. Almost all existing studies of robot tool use require predetermined motions and tool features, so the motion patterns are limited and the robots cannot use novel tools. Other studies fully search all available parameters for novel tools, but this leads to massive amounts of calculation. To solve these problems, we took the following approach: we used a humanoid robot model to generate random motions based on human body babbling. These rich motion experiences were used to train recurrent and deep neural networks for modeling a body image. Tool features were self-organized in parametric bias, modulating the body image according to the tool in use. Finally, we designed a neural network for the robot to generate motion only from the target image. Experiments were conducted with multiple tools for manipulating a cylindrical target object. The results show that the tool-body assimilation model is capable of motion generation.
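
The parametric bias is the core mechanism here: a small vector, learned per tool alongside the shared network weights, that modulates the recurrent body model. A hedged PyTorch sketch of one way an RNN with parametric bias could look (dimensions and training details are assumptions):

```python
import torch
import torch.nn as nn

class RNNWithPB(nn.Module):
    """Recurrent body model whose dynamics are modulated by a small
    parametric-bias (PB) vector; PB values are learned per tool rather
    than hand-designed (all sizes are illustrative assumptions)."""
    def __init__(self, n_io=10, n_pb=2, n_hidden=50):
        super().__init__()
        self.cell = nn.RNNCell(n_io + n_pb, n_hidden)
        self.out = nn.Linear(n_hidden, n_io)

    def forward(self, seq, pb):             # seq: (T, batch, n_io)
        h = seq.new_zeros(seq.size(1), self.cell.hidden_size)
        preds = []
        for x in seq:
            h = self.cell(torch.cat([x, pb], dim=-1), h)
            preds.append(self.out(h))
        return torch.stack(preds)

model = RNNWithPB()
pb = nn.Parameter(torch.zeros(4, 2))        # one PB vector per tool/sequence
opt = torch.optim.Adam(list(model.parameters()) + [pb], lr=1e-3)

seq = torch.rand(30, 4, 10)                 # 4 babbling sequences, 30 steps
for _ in range(100):
    opt.zero_grad()
    pred = model(seq[:-1], pb)
    loss = nn.functional.mse_loss(pred, seq[1:])
    loss.backward()
    opt.step()
# For an unseen tool, freeze the network weights and optimize a fresh
# PB vector against a short observed sequence: the converged PB then
# encodes that tool's function.
```

The design point is that the same weights serve every tool; only the low-dimensional PB changes, which is what lets tool features self-organize instead of being specified in advance.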


international conference on artificial neural networks | 2014

Tool-Body Assimilation Model Based on Body Babbling and a Neuro-Dynamical System for Motion Generation

Kuniyuki Takahashi; Tetsuya Ogata; Hadi Tjandra; Shingo Murata; Hiroaki Arie; Shigeki Sugano

We propose a model for robots to use tools without predetermined parameters, based on a human cognitive model. Almost all existing studies of robots using tools require predetermined motions and tool features, so the motion patterns are limited and the robots cannot use new tools. Other studies use a full search for new tools; however, this entails an enormous number of calculations. We built a model for tool use based on the phenomenon of tool-body assimilation, using the following approach: we used a humanoid robot model to generate random motions based on human body babbling. These rich motion experiences were then used to train a recurrent neural network for modeling a body image. Tool features were self-organized in the parametric bias, modulating the body image according to the tool in use. Finally, we designed a neural network for the robot to generate motion only from the target image.


Robotics and Autonomous Systems | 2017

Tool-body assimilation model considering grasping motion through deep learning

Kuniyuki Takahashi; Kitae Kim; Tetsuya Ogata; Shigeki Sugano

We propose a tool-body assimilation model that considers grasping during motor babbling for tool use. A robot with tool-use skills can be useful in human-robot symbiosis because such skills expand its task-performing abilities. Past studies of tool-body assimilation mainly focused on acquiring the functions of the tools and demonstrated the robot starting its motions with a tool pre-attached to its hand, which implies that the robot could not decide whether and where to grasp the tool. In real-life environments, robots need to consider possible tool-grasping positions and then grasp the tool. To address these issues, the robot performs motor babbling both with and without grasping the tools, to learn the robot's body model and the tool functions. In addition, the robot grasps various parts of the tools to learn the different tool functions afforded by different grasping positions. The motion experiences are learned using deep learning. In model evaluation, the robot performs an object-manipulation task without tools and with several tools of different shapes. The robot generates motions after being shown the initial state and a target image, deciding whether and where to grasp the tool. The robot is thus capable of generating the correct motion and grasp decision when given the initial state and a target image.
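
A key output of the model is the decision of whether and where to grasp, made from the initial state and the target image. The following sketch isolates that decision as a small PyTorch head over image encodings; in the paper this is learned jointly inside one deep model, so the separate head, sizes, and names here are purely illustrative:

```python
import torch
import torch.nn as nn

class GraspDecisionHead(nn.Module):
    """Maps encoded initial and target images to (a) the probability
    that a tool should be grasped and (b) a grasp position along the
    tool. Shown as a separate head for clarity only."""
    def __init__(self, n_feat=20):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(2 * n_feat, 64), nn.ReLU())
        self.grasp_logit = nn.Linear(64, 1)   # grasp vs. no grasp
        self.grasp_pos = nn.Linear(64, 1)     # normalized position on tool

    def forward(self, z_init, z_target):
        h = self.trunk(torch.cat([z_init, z_target], dim=-1))
        return torch.sigmoid(self.grasp_logit(h)), torch.sigmoid(self.grasp_pos(h))

head = GraspDecisionHead()
p_grasp, pos = head(torch.rand(1, 20), torch.rand(1, 20))
if p_grasp.item() > 0.5:
    print(f"grasp the tool at normalized position {pos.item():.2f}")
else:
    print("solve the task without a tool")
```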


intelligent robots and systems | 2015

Effective motion learning for a flexible-joint robot using motor babbling

Kuniyuki Takahashi; Tetsuya Ogata; Hiroki Yamada; Hadi Tjandra; Shigeki Sugano

We propose a method for realizing effective dynamic motion learning in a flexible-joint robot using motor babbling. Flexible-joint robots have recently attracted attention because of their adaptiveness, safety, and, in particular, dynamic motions, but robots that require such dynamic motion are difficult to control. In past studies, attractors and oscillators were designed in advance as motion primitives for an assumed task; however, such methods adapt poorly to unintended environmental changes. To overcome this problem, we use a recurrent neural network (RNN) that does not require predetermined parameters. First, the robot learns simple motions via motor babbling, acquiring its body dynamics with the RNN. Motor babbling is the process of movement by which infants acquire their own body dynamics in their early days. Next, the robot learns additional motions required for a target task using the acquired body dynamics. For acquiring these body dynamics, the robot uses motor babbling with its redundant flexible joints to learn motion primitives; this redundancy implies that there are numerous possible motion patterns. When learning a task, the motion primitives are then simply modified to fit that task. We further classify babbling motions into two types: passive motion, which involves inertia without any torque input, and active motion, which involves a torque input. The robot acquires its body dynamics from the passive motion and a means of torque generation from the active motion. As a result, we demonstrate the importance of prior learning via motor babbling before learning a task, and show that task learning is made more efficient by dividing babbling into these two types of motion patterns.
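
The passive/active split can be pictured as two kinds of torque commands sent during babbling. A minimal numpy sketch, assuming a sum-of-sinusoids active profile and an impulse-then-coast passive profile (both shapes are our assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def babbling_torques(n_joints=3, n_steps=200, active=True):
    """Torque commands for one babbling episode.

    Active babbling applies a smooth random torque profile; passive
    babbling applies only an initial impulse and then zero torque, so
    the recorded motion reflects the flexible joints' intrinsic
    dynamics. (Profile shapes and magnitudes are illustrative.)
    """
    tau = np.zeros((n_steps, n_joints))
    if active:
        t = np.linspace(0.0, 2 * np.pi, n_steps)
        for j in range(n_joints):
            for _ in range(3):   # sum of a few random sinusoids per joint
                tau[:, j] += rng.uniform(-1, 1) * np.sin(
                    rng.uniform(0.5, 3.0) * t + rng.uniform(0, 2 * np.pi))
    else:
        tau[0] = rng.uniform(-1, 1, n_joints)   # single initial impulse
    return tau

# Collect both motion types; each episode would be played on the robot
# (or a simulator) and the resulting joint trajectories recorded for
# training the RNN body model.
episodes = [babbling_torques(active=a) for a in (True, False) * 5]
```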


robotics and biomimetics | 2011

Handling and grasp control with additional grasping point for dexterous manipulation of cylindrical tool

Taisuke Sugaiwa; Kuniyuki Takahashi; Hiroyuki Kano; Hiroyasu Iwata; Shigeki Sugano

This study aims to construct a handling and grasp control method for a multi-fingered robot hand with soft skin for the precision operation of cylindrical tools. We believe the key to improving the accuracy of tool operation is to use an additional grasping point to reduce the tool's posture fluctuation caused by the external force exerted at the tool nib. We focus on two factors that cause the tool's posture fluctuation: the rotation of the tool around the axis between the two fingertip contacts, and the deflection of the soft skin. We propose an effective allocation of the additional grasping point from the viewpoint of reducing the tool's posture fluctuation. Our control architecture is a customized combination of resolved motion rate control and hybrid control, which maintains stable grasping while moving the tool along the desired trajectory of the tool nib. Physical tests using an actual multi-fingered robot hand validate that the proposed control method improves the accuracy of the tool nib trajectory.
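
Resolved motion rate control maps a desired tool-nib velocity to joint velocities through the Jacobian pseudoinverse, q_dot = pinv(J) @ x_dot. A minimal numpy sketch of that part using a toy two-link planar Jacobian (the paper's hand model is far richer, and its hybrid force control is not shown):

```python
import numpy as np

def resolved_motion_rate(jacobian, desired_tip_velocity):
    """Joint velocities that realize a desired tool-nib velocity via
    the Moore-Penrose pseudoinverse of the Jacobian."""
    return np.linalg.pinv(jacobian) @ desired_tip_velocity

# Toy 2-link planar example (link lengths and angles are illustrative).
l1, l2, q1, q2 = 0.1, 0.1, 0.3, 0.5
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])
x_dot = np.array([0.01, 0.0])        # desired nib velocity [m/s]
q_dot = resolved_motion_rate(J, x_dot)
```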


Advanced Robotics | 2018

Effective input order of dynamics learning tree

Chyon Hae Kim; Shohei Hama; Ryo Hirai; Kuniyuki Takahashi; Hiroki Yamada; Tetsuya Ogata; Shigeki Sugano

In this paper, we discuss the learning performance of the dynamics learning tree (DLT), focusing mainly on its implementation on robot arms, and propose an input-order-designing method for DLT. DLT has been applied to the modeling of boats, vehicles, and humanoid robots; however, the relationship between the input order and the performance of DLT has not been investigated. With the proposed method, a developer can design an effective input order intuitively. The method was validated in model learning tasks on a simulated robot manipulator, a real robot manipulator, and a simulated vehicle. The first and second manipulators were equipped with flexible arm and finger joints, respectively, which introduced uncertainty into the trajectories of manipulated objects. In all cases, the proposed method improved the performance of DLT.
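
DLT's exact construction follows the authors' earlier work and is not reproduced here; the toy sketch below only illustrates why a fixed input ordering matters, using a simple grid partition that quantizes dimensions in the designer's chosen order and falls back to coarser prefixes of that order when data is sparse (everything here is an illustrative assumption):

```python
import numpy as np

class OrderedGridTree:
    """Toy stand-in for a dynamics learning tree: inputs in [0, 1) are
    quantized dimension by dimension in a fixed order, and each cell
    stores the mean observed output. Meant only to show why input
    order matters; the actual DLT construction differs."""
    def __init__(self, input_order, bins=4):
        self.order = input_order            # e.g. [2, 0, 1]
        self.bins = bins
        self.cells = {}                     # cell key -> (sum, count)

    def _key(self, x):
        q = np.clip((np.asarray(x) * self.bins).astype(int), 0, self.bins - 1)
        return tuple(q[i] for i in self.order)

    def train(self, x, y):
        key = self._key(x)
        s, c = self.cells.get(key, (0.0, 0))
        self.cells[key] = (s + y, c + 1)

    def predict(self, x):
        key = self._key(x)
        # Fall back to coarser prefixes of the ordered key when the
        # exact cell is empty; this is where a good order helps, since
        # early dimensions dominate the coarse estimates.
        for depth in range(len(key), 0, -1):
            matches = [(s, c) for k, (s, c) in self.cells.items()
                       if k[:depth] == key[:depth]]
            if matches:
                return sum(m[0] for m in matches) / sum(m[1] for m in matches)
        return 0.0

tree = OrderedGridTree(input_order=[2, 0, 1])   # most informative dim first
tree.train([0.1, 0.5, 0.9], y=1.0)
print(tree.predict([0.1, 0.5, 0.9]))
```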


international conference on neural information processing | 2015

Efficient Motor Babbling Using Variance Predictions from a Recurrent Neural Network

Kuniyuki Takahashi; Kanata Suzuki; Tetsuya Ogata; Hadi Tjandra; Shigeki Sugano

We propose an exploratory form of motor babbling that uses variance predictions from a recurrent neural network to acquire the body dynamics of a robot with flexible joints. With conventional methods, experiments on real robots are difficult to conduct because of the large number of motor babbling motions required. In motor babbling, some motions are easy to predict and others difficult: the prediction variance is large for difficult-to-predict motions and small for easy-to-predict ones. We use a stochastic continuous-timescale recurrent neural network to predict the accuracy and variance of motions, so that the robot can explore motions based on that variance. To evaluate the proposed method, experiments were conducted in which the robot learns crank-turning and door opening/closing tasks after exploring its body dynamics. The results show that the proposed method is capable of efficient motion generation for the given motion tasks.
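
The mechanism can be pictured as a network that outputs a predicted mean and variance for the next state, trained with a Gaussian negative log-likelihood, with the next babbling motion chosen where predicted variance is highest. A minimal PyTorch sketch (a plain RNN stands in for the S-CTRNN; sizes and the candidate-selection step are assumptions):

```python
import torch
import torch.nn as nn

class VariancePredictingRNN(nn.Module):
    """RNN predicting both the mean and the log-variance of the next
    state (stand-in for the paper's stochastic CTRNN)."""
    def __init__(self, n_state=8, n_hidden=60):
        super().__init__()
        self.rnn = nn.RNN(n_state, n_hidden, batch_first=True)
        self.mean = nn.Linear(n_hidden, n_state)
        self.logvar = nn.Linear(n_hidden, n_state)

    def forward(self, seq):
        h, _ = self.rnn(seq)
        return self.mean(h), self.logvar(h)

model = VariancePredictingRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.rand(6, 40, 8)                   # babbling sequences so far

for _ in range(50):                          # Gaussian NLL training
    mu, logvar = model(seq[:, :-1])
    target = seq[:, 1:]
    nll = (logvar + (target - mu) ** 2 / logvar.exp()).mean()
    opt.zero_grad(); nll.backward(); opt.step()

# Exploration: among candidate babbling motions, execute the one whose
# predicted variance is largest, i.e. the one the model currently
# predicts worst.
candidates = torch.rand(20, 40, 8)
with torch.no_grad():
    _, logvar = model(candidates)
next_motion = candidates[logvar.mean(dim=(1, 2)).argmax()]
```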


Advanced Robotics | 2017

Dynamic motion learning for multi-DOF flexible-joint robots using active–passive motor babbling through deep learning

Kuniyuki Takahashi; Tetsuya Ogata; Jun Nakanishi; Gordon Cheng; Shigeki Sugano

This paper proposes a learning strategy for robots with multi-degree-of-freedom flexible joints to achieve dynamic motion tasks. Despite the potential benefits of flexible-joint robots, such as exploitation of intrinsic dynamics and passive adaptation to environmental changes through mechanical compliance, controlling such robots is challenging because of the increased complexity of their dynamics. To achieve dynamic movements, we introduce a two-phase framework for learning the body dynamics of the robot with a recurrent neural network, motivated by a deep learning strategy: a pre-training phase with motor babbling and a fine-tuning phase with additional learning of the target tasks. In the pre-training phase, we consider active and passive exploratory motions for efficient acquisition of body dynamics; in the fine-tuning phase, the learned body dynamics are adjusted to specific tasks. We demonstrate the effectiveness of the proposed methodology on dynamic tasks involving constrained movement requiring interaction with the environment, using both a simulated robot model and an actual PR2 robot, each with a compliantly actuated seven-degree-of-freedom arm. The results show a reduction in the number of training iterations required for task learning and generalization to untrained situations.
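
The two phases amount to training one body model twice: first on plentiful babbling data, then briefly on scarce task data. A minimal PyTorch sketch of that schedule (model, sizes, epoch counts, and learning rates are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BodyModelRNN(nn.Module):
    """One-step predictor of the robot state, reused across both
    phases (sizes are illustrative)."""
    def __init__(self, n_state=14, n_hidden=80):
        super().__init__()
        self.rnn = nn.RNN(n_state, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_state)

    def forward(self, seq):
        h, _ = self.rnn(seq)
        return self.out(h)

def fit(model, seqs, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        pred = model(seqs[:, :-1])
        loss = nn.functional.mse_loss(pred, seqs[:, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

model = BodyModelRNN()
babbling = torch.rand(20, 100, 14)    # plentiful active + passive babbling
task = torch.rand(3, 100, 14)         # far fewer task demonstrations

fit(model, babbling, epochs=300, lr=1e-3)   # phase 1: pre-training
fit(model, task, epochs=50, lr=3e-4)        # phase 2: fine-tuning
```

The payoff claimed in the paper is that the pre-trained dynamics let the task phase converge in far fewer iterations than training from scratch.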


international conference on artificial neural networks | 2016

Body model transition by tool grasping during motor babbling using deep learning and RNN

Kuniyuki Takahashi; Hadi Tjandra; Tetsuya Ogata; Shigeki Sugano

We propose a method of tool use that considers the transition of the body model from not grasping to grasping a tool within a single model. In our previous research, we proposed a tool-body assimilation model in which a robot autonomously learns tool functions through experiences of motor babbling, using a deep neural network (DNN) and a recurrent neural network (RNN). However, the robot started its motion already holding the tool. In real-life situations, the robot must decide between grasping (handling) and not grasping (manipulating) a tool. To achieve this, the robot performs motor babbling without the tool pre-attached to its hand, executing each motion twice: once handling the tool and once manipulating without grasping it. To evaluate the model, we had the robot generate motions when shown the initial and target states. As a result, the robot could generate the correct motions together with grasping decisions.
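
One way to picture a single model spanning both conditions is an RNN that receives the grasp state as an extra input dimension. The sketch below uses an explicit flag for clarity; this encoding is our assumption, since the paper's DNN+RNN infers the condition from the sensory stream itself:

```python
import torch
import torch.nn as nn

# A single RNN covers both conditions by receiving the grasp state as
# an extra input dimension, so one network can represent the body
# model transitioning from "hand only" to "hand plus tool".
# (The explicit flag is an illustrative assumption.)
n_state, n_hidden = 10, 60
rnn = nn.RNN(n_state + 1, n_hidden, batch_first=True)
out = nn.Linear(n_hidden, n_state)

states = torch.rand(8, 50, n_state)          # 8 babbling sequences
grasp = torch.randint(0, 2, (8, 1, 1)).float().expand(8, 50, 1)
seq = torch.cat([states, grasp], dim=-1)     # same motions, two conditions

h, _ = rnn(seq[:, :-1])
pred = out(h)                                # predict the next state
loss = nn.functional.mse_loss(pred, states[:, 1:])
```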

Collaboration


An overview of Kuniyuki Takahashi's collaborations.
