Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Takamitsu Matsubara is active.

Publication


Featured research published by Takamitsu Matsubara.


The International Journal of Robotics Research | 2008

Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot

Gen Endo; Jun Morimoto; Takamitsu Matsubara; Jun Nakanishi; Gordon Cheng

In this paper, we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with the CPG by reducing the dimensionality of the state space used for learning. We demonstrate in numerical simulations that an appropriate feedback controller can be acquired within a few thousand trials, and that the controller obtained in simulation achieves stable walking with a physical robot in the real world. Walking velocity and stability are evaluated in both numerical simulations and hardware experiments. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for the hardware robot, which improves the controller within 200 iterations.
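
The abstract does not spell out the update rule itself; the following is a minimal, hedged sketch of the general idea only: a likelihood-ratio (REINFORCE-style) policy gradient tuning the gains of a sensory feedback policy that modulates a phase-oscillator CPG. The dynamics, reward, and all constants are toy stand-ins and not the paper's humanoid model or exact algorithm.

```python
import numpy as np

# Toy sketch: REINFORCE-style update of linear feedback gains that modulate
# a phase-oscillator CPG. Dynamics and reward are invented for illustration.
rng = np.random.default_rng(0)

def rollout(K, T=200, dt=0.01, noise=0.05):
    """Simulate a 1-DoF 'body' driven by a CPG whose phase is modulated by a
    stochastic linear feedback policy with gains K. Returns the episode
    return and the accumulated log-likelihood gradient."""
    phi, x, v = 0.0, 0.0, 0.0            # CPG phase, body position, velocity
    omega = 2 * np.pi                     # intrinsic CPG frequency
    ret, grad = 0.0, np.zeros_like(K)
    for _ in range(T):
        s = np.array([x, v])              # sensory state fed back to the CPG
        mean_u = K @ s                    # mean phase modulation
        u = mean_u + noise * rng.standard_normal()
        grad += (u - mean_u) / noise**2 * s   # d log pi / dK for a Gaussian policy
        phi += (omega + u) * dt
        a = np.sin(phi) - 0.5 * v         # CPG-driven torque plus damping
        v += a * dt
        x += v * dt
        ret += -(x - np.sin(phi))**2      # toy reward: track the CPG output
    return ret, grad

K = np.zeros(2)
baseline = 0.0
for _ in range(500):
    ret, grad = rollout(K)
    baseline = 0.9 * baseline + 0.1 * ret
    K += 1e-4 * (ret - baseline) * grad   # likelihood-ratio policy gradient step
print("learned gains:", K)
```

In the paper, the state fed back to the CPG is deliberately kept low-dimensional so that episodic gradient estimates of this kind remain tractable.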


Robotics and Autonomous Systems | 2006

Learning CPG-based biped locomotion with a policy gradient method

Takamitsu Matsubara; Jun Morimoto; Jun Nakanishi; Masa-aki Sato; Kenji Doya

Recently, CPG-based controllers have been widely explored to achieve robust biped locomotion. However, this approach has difficulty in tuning the open parameters of the controller. In this paper, we present a learning framework for CPG-based biped locomotion with a policy gradient method. We demonstrate in numerical simulations that appropriate sensory feedback in the CPG-based control architecture can be acquired with the proposed method within a thousand trials. We analyze the linear stability of a periodic orbit of the acquired biped walking using a return map. Furthermore, we apply the controllers learned in numerical simulations to our physical 5-link robot in order to empirically evaluate the effectiveness of the proposed framework. Experimental results suggest the robustness of the acquired controllers against environmental changes and variations in the mass properties of the robot.
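
As an aside on the stability analysis mentioned above, here is a minimal sketch of checking a periodic orbit's local stability through a numerically constructed return (Poincare) map. A Van der Pol oscillator stands in for the walking dynamics, which are not reproduced here; all constants are illustrative.

```python
import numpy as np

def step(x, dt=1e-3, mu=1.0):
    """One Euler step of a Van der Pol oscillator (stand-in limit-cycle system)."""
    x1, x2 = x
    return np.array([x1 + x2 * dt,
                     x2 + (mu * (1 - x1**2) * x2 - x1) * dt])

def return_map(x1_0, dt=1e-3):
    """From (x1_0, 0) on the section x2 = 0, integrate until the trajectory
    next crosses the section upward and return the new x1 coordinate."""
    prev = step(np.array([x1_0, 0.0]), dt)   # take one step to leave the section
    while True:
        nxt = step(prev, dt)
        if prev[1] < 0.0 <= nxt[1]:          # crossed x2 = 0 going upward
            return nxt[0]
        prev = nxt

# Iterate the map to land on the periodic orbit, then estimate its slope.
x1 = -2.0
for _ in range(20):
    x1 = return_map(x1)
h = 1e-3
slope = (return_map(x1 + h) - return_map(x1 - h)) / (2 * h)
print(f"fixed point x1* = {x1:.4f}, return-map slope = {slope:.4f}")
print("locally stable periodic orbit" if abs(slope) < 1 else "unstable periodic orbit")
```

A return-map slope (eigenvalue) of magnitude below one at the fixed point indicates that small perturbations of the gait decay from one step cycle to the next.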


IEEE Transactions on Biomedical Engineering | 2013

Bilinear Modeling of EMG Signals to Extract User-Independent Features for Multiuser Myoelectric Interface

Takamitsu Matsubara; Jun Morimoto

In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user-dependent and 2) motion-dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features, and a motion classifier can be constructed on the extracted feature space to build the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions; the bilinear EMG model with the estimated user-dependent factor can then extract user-independent features from the novel user's data. We applied the proposed method to a recognition task of five hand gestures for robotic hand control, using four-channel EMG signals measured from the subjects' forearms. Our method achieved 73% accuracy, which a two-sample t-test at the 1% significance level showed to be significantly different from the accuracy of standard non-multiuser interfaces.
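
To make the factorization idea concrete, the following is a minimal sketch, on synthetic data, of a Tenenbaum/Freeman-style bilinear ("style times content") separation into user-dependent and motion-dependent factors, including least-squares adaptation of a novel user's factor from a few observations. It illustrates the modeling idea only and is not the paper's estimation algorithm; all dimensions and data are invented.

```python
import numpy as np

# Synthetic EMG-like features: users are the "style" factor, motions the "content".
rng = np.random.default_rng(1)
n_users, n_motions, n_feat = 8, 5, 16
true_users = rng.standard_normal((n_users, 3))                 # user-dependent factors
true_motions = rng.standard_normal((3, n_motions * n_feat))    # motion-dependent factors
Y = true_users @ true_motions + 0.01 * rng.standard_normal((n_users, n_motions * n_feat))

# Fit: an SVD of the (user x motion-feature) matrix yields both factor sets.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
k = 3
user_factors = U[:, :k] * S[:k]        # one row per known user
motion_factors = Vt[:k]                 # user-independent feature basis

# Adaptation to a novel user from a few calibration motions only.
new_user = rng.standard_normal(3)
y_new = new_user @ true_motions         # this user's full data (only part is observed)
observed = slice(0, 2 * n_feat)         # pretend only 2 motions were recorded
a_hat, *_ = np.linalg.lstsq(motion_factors[:, observed].T, y_new[observed], rcond=None)

# With the estimated user factor, the motion-dependent basis explains the rest.
print("reconstruction error for unseen motions:",
      np.linalg.norm(a_hat @ motion_factors - y_new))
```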


Intelligent Robots and Systems | 2011

XoR: Hybrid drive exoskeleton robot that can balance

Sang-Ho Hyon; Jun Morimoto; Takamitsu Matsubara; Tomoyuki Noda; Mitsuo Kawato

We propose a novel exoskeleton robot prototype aimed at a brain-machine interface and at rehabilitation of postural control for elderly people, people with spinal cord injury, stroke patients, and others with similar needs. By arranging pneumatic muscles together with electric motors in an optimal way, one can achieve both weight reduction and torque controllability. Its anthropomorphic design and torque controllability enable users to implement and test various rehabilitation/compensation programs consistent with human motor control and learning mechanisms. The hybrid drive itself is not new, but its specialized application to a lightweight exoskeleton is novel. This paper reports the design and development of the robot, particularly addressing the hybrid drive for load-bearing tasks such as standing and postural maintenance. The experimental data as well as the attached videos demonstrate the effectiveness of the proposed system.


IEEE-RAS International Conference on Humanoid Robots | 2011

Reinforcement learning of clothing assistance with a dual-arm robot

Tomoya Tamei; Takamitsu Matsubara; Akshara Rai; Tomohiro Shibata

This study addresses robotic clothing assistance, which is still an open problem in robotics even though it is one of the basic and important assistance activities in the daily lives of elderly and disabled people. Clothing assistance is challenging because the robot must interact both with non-rigid clothes, which are generally represented in a high-dimensional space, and with the assisted person, whose posture can vary during the assistance. The robot is therefore required to manage two difficulties to perform the task: 1) handling of non-rigid materials and 2) adaptation of the assisting movements to the assisted person's posture. To overcome these difficulties, we propose to use reinforcement learning with the cloth's state represented in low-dimensional topology coordinates and with the reward defined in these coordinates. With our experimental system for T-shirt clothing assistance, which includes an anthropomorphic dual-arm robot and a soft mannequin, we demonstrate that the robot quickly learns a suitable arm motion for putting the mannequin's head into a T-shirt.
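
As a rough illustration of the "reward defined in low-dimensional coordinates" idea, the sketch below runs a simple finite-difference policy search over the via-points of an arm trajectory, with the reward defined on a hand-crafted two-dimensional cloth state that merely stands in for the paper's topology coordinates. Everything here (dynamics, target, constants) is synthetic and not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def cloth_state(via_points):
    """Pretend low-dimensional cloth coordinates induced by the arm motion."""
    return np.array([np.sum(via_points), np.sum(np.diff(via_points)**2)])

def reward(via_points):
    target = np.array([2.0, 0.5])            # desired final cloth coordinates
    return -np.sum((cloth_state(via_points) - target)**2)

theta = 0.1 * rng.standard_normal(4)          # via-point parameters of the policy
eps, lr = 0.05, 0.01
for _ in range(500):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):               # finite-difference gradient estimate
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (reward(theta + d) - reward(theta - d)) / (2 * eps)
    theta += lr * grad                        # gradient ascent on the reward
print("learned via-points:", theta, "reward:", reward(theta))
```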


International Conference on Robotics and Automation | 2005

Learning Sensory Feedback to CPG with Policy Gradient for Biped Locomotion

Takamitsu Matsubara; Jun Morimoto; Jun Nakanishi; Masa-aki Sato; Kenji Doya

This paper proposes a learning framework for a CPG-based biped locomotion controller using a policy gradient method. Our goal in this study is to develop an efficient learning algorithm by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller in the CPG-based controller can be acquired using the proposed method within a few thousand trials by numerical simulations. Furthermore, we implement the learned controller on the physical biped robot to experimentally show that the learned controller successfully works in the real environment.


International Conference on Neural Information Processing | 2010

Learning parametric dynamic movement primitives from multiple demonstrations

Takamitsu Matsubara; Sang-Ho Hyon; Jun Morimoto

This paper proposes a novel approach to learning highly scalable Control Policies (CPs) for basic movement skills from multiple demonstrations. In contrast to conventional studies based on a single demonstration, i.e., Dynamic Movement Primitives (DMPs) [1], our approach efficiently encodes multiple demonstrations by shaping a parametric attractor landscape in a set of differential equations. This allows the learned CPs to synthesize novel movements with novel motion styles by specifying the linear coefficients of the bases as parameter vectors, without losing useful properties of DMPs such as stability and robustness against perturbations. For both discrete and rhythmic movement skills, we present a unified procedure for learning a parametric attractor landscape from multiple demonstrations. The feasibility and the greatly extended scalability of the DMPs are demonstrated on an actual dual-arm robot.
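
A minimal one-dimensional sketch of the parametric-attractor idea follows: forcing-term weights are fitted to several demonstrations, a low-dimensional style basis is extracted from the stacked weight vectors, and a parameter vector selects or interpolates movements at synthesis time. Gains, basis functions, and demonstrations are toy choices, assumed here for illustration and not taken from the paper.

```python
import numpy as np

# Discrete DMP setup (1-D, duration 1 s); all constants are illustrative.
T, dt = 200, 0.005
alpha, beta, alpha_x = 25.0, 6.25, 4.0
t = np.arange(T) * dt
x = np.exp(-alpha_x * t)                            # canonical system (phase variable)
centers = np.exp(-alpha_x * np.linspace(0, 1, 10))  # RBF centers in phase space
widths = 10.0 / centers**2
Psi = np.exp(-widths * (x[:, None] - centers)**2)
Phi = Psi * x[:, None] / Psi.sum(axis=1, keepdims=True)   # T x 10 feature matrix

def fit_weights(y, g):
    """Fit forcing-term weights to one demonstration by least squares."""
    yd = np.gradient(y, dt)
    ydd = np.gradient(yd, dt)
    f_target = ydd - alpha * (beta * (g - y) - yd)
    w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)
    return w

# Three demonstrations of the same reaching movement in different "styles".
g = 1.0
demos = [g * (1 - np.exp(-k * t) * (1 + k * t)) for k in (6.0, 10.0, 14.0)]
W = np.stack([fit_weights(y, g) for y in demos])    # one weight row per demonstration

# Parametric attractor landscape: mean weights plus principal style directions.
w_mean = W.mean(axis=0)
U, S, Vt = np.linalg.svd(W - w_mean, full_matrices=False)
style_basis = Vt[:2]                                 # 2-D style space

def rollout(style):
    """Integrate the DMP with weights selected by a 2-D style parameter."""
    w = w_mean + style @ style_basis
    y, yd, traj = 0.0, 0.0, []
    for k in range(T):
        ydd = alpha * (beta * (g - y) - yd) + Phi[k] @ w
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

styles = (W - w_mean) @ style_basis.T                # each demo's style coordinates
new_style = 0.5 * (styles[0] + styles[2])            # interpolate between two styles
print("final position for the interpolated style:", rollout(new_style)[-1])
```

Because the attractor toward the goal g is untouched, the interpolated movement keeps the convergence property of a DMP while its shape varies with the style parameter.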


International Conference on Robotics and Automation | 2010

Optimal Feedback Control for anthropomorphic manipulators

Djordje Mitrovic; Sho Nagashima; Stefan Klanke; Takamitsu Matsubara; Sethu Vijayakumar

We study target-reaching tasks of redundant anthropomorphic manipulators under the premise of minimal energy consumption and compliance during motion. We formulate this motor control problem in the framework of Optimal Feedback Control (OFC) by introducing a specific cost function that accounts for the physical constraints of the controlled plant. Using an approximate computational optimal control method, we can optimally control a high-dimensional anthropomorphic robot without having to specify an explicit inverse kinematics, inverse dynamics, or feedback control law. We highlight the benefits of this biologically plausible motor control strategy over traditional (open-loop) optimal controllers: the presented approach proves to be significantly more energy-efficient and compliant while remaining accurate with respect to the task at hand. These properties are crucial for the control of mobile anthropomorphic robots that are designed to interact safely in human environments. To the best of our knowledge, this is the first OFC implementation on a high-dimensional (redundant) manipulator.
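
The paper applies an approximate computational optimal control method to a nonlinear arm model; the sketch below only illustrates the underlying trade-off on a linear double integrator, where the cost weights balance end-point accuracy against control effort and the result is a time-varying feedback law rather than a pre-planned trajectory. Plant, horizon, and weights are assumptions made for this example.

```python
import numpy as np

# Finite-horizon discrete LQR on a double integrator: accuracy vs. effort.
dt, T = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])     # position/velocity dynamics
B = np.array([[0.0], [dt]])
Q_final = np.diag([100.0, 1.0])            # accuracy: penalize end-point error
R = np.array([[0.1]])                      # "energy": penalize control effort

# Backward Riccati recursion for the time-varying feedback gains.
S = Q_final.copy()
K = [None] * T
for k in reversed(range(T)):
    K[k] = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = A.T @ S @ (A - B @ K[k])

# Forward pass: reach the target under the optimal feedback law.
target = np.array([1.0, 0.0])
x = np.array([0.0, 0.0])
effort = 0.0
for k in range(T):
    u = -K[k] @ (x - target)               # feedback, not a precomputed trajectory
    effort += float(u[0])**2 * dt
    x = A @ x + B @ u
print("final state:", x, "control effort:", effort)
```

Raising R relative to Q_final yields more compliant, lower-effort movements at the cost of terminal accuracy, which is the trade-off the abstract refers to.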


Intelligent Robots and Systems | 2010

Learning Stylistic Dynamic Movement Primitives from multiple demonstrations

Takamitsu Matsubara; Sang-Ho Hyon; Jun Morimoto

In this paper, we propose a novel concept of movement primitives called Stylistic Dynamic Movement Primitives (SDMPs) for motor learning and control in humanoid robotics. In SDMPs, the diversity of styles in human motion observed through multiple demonstrations can be compactly encoded in a single movement primitive, which allows the style of motion sequences generated from the primitive to be manipulated by a control variable called a style parameter. Focusing on discrete movements, we present a model of SDMPs as an extension of the Dynamic Movement Primitives (DMPs) proposed by Ijspeert et al. [1]. We also describe a novel procedure for learning SDMPs from multiple demonstrations that include a diversity of motion styles. We present two practical applications of SDMPs: stylistic table tennis swings and obstacle avoidance with an anthropomorphic manipulator.


Intelligent Robots and Systems | 2014

Object manifold learning with action features for active tactile object recognition

Daisuke Tanaka; Takamitsu Matsubara; Kentaro Ichien; Kenji Sugimoto

In this paper, we consider an object recognition problem based on tactile information obtained with a robot hand. The robot performs an exploratory action on the object to obtain tactile information; however, poorly designed actions may not be sufficiently informative. In contrast, if we could collect sample data by sequentially performing informative actions, i.e., active learning, the required time would be drastically reduced. To this end, we propose a novel approach to active tactile object recognition. Our approach combines an active learning scheme with a nonlinear dimensionality reduction method. We first extract the object manifold, in which each coordinate represents an object, from tactile sensor data and action features using Gaussian Process Latent Variable Models. At the same time, a probabilistic model of the observed data related to the action and the object is learned. Then, with the learned model, optimally informative exploratory actions can be computed sequentially and performed to efficiently collect data for recognition. Experimental results with synthetic data and a real robot verify the effectiveness of the proposed method.
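
As a simplified illustration of the sequential action-selection loop, the sketch below replaces the GPLVM object manifold with a plain Gaussian observation model per (action, object) pair, scores candidate actions by how well their predicted readings separate the remaining hypotheses, and updates a Bayesian belief over object identity. The model, scoring rule, and all numbers are assumptions for this example, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(4)
n_objects, n_actions = 4, 6
noise = 0.3
# "Learned" model: mean tactile reading for each action applied to each object.
means = rng.standard_normal((n_actions, n_objects))

def expected_informativeness(a, belief):
    """Score an action by the belief-weighted spread of its predicted readings
    (a cheap stand-in for expected information gain)."""
    m = means[a]
    mbar = belief @ m
    return belief @ (m - mbar)**2

true_object = 2
belief = np.full(n_objects, 1.0 / n_objects)
for step in range(5):
    a = max(range(n_actions), key=lambda a: expected_informativeness(a, belief))
    reading = means[a, true_object] + noise * rng.standard_normal()
    likelihood = np.exp(-0.5 * ((reading - means[a]) / noise)**2)
    belief = belief * likelihood            # Bayesian update over object identity
    belief /= belief.sum()
    print(f"step {step}: action {a}, belief {np.round(belief, 2)}")
print("recognized object:", int(np.argmax(belief)))
```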

Collaboration


Dive into Takamitsu Matsubara's collaborations.

Top Co-Authors

Jun Morimoto
Nara Institute of Science and Technology

Kenji Sugimoto
Nara Institute of Science and Technology

Yunduan Cui
Nara Institute of Science and Technology

Masatsugu Kidode
Nara Institute of Science and Technology

Daisuke Tanaka
Nara Institute of Science and Technology