
Publication


Featured research published by Gordon Cheng.


Robotics and Autonomous Systems | 2004

Learning from demonstration and adaptation of biped locomotion

Jun Nakanishi; Jun Morimoto; Gen Endo; Gordon Cheng; Stefan Schaal; Mitsuo Kawato

In this paper, we report on our research on learning biped locomotion from human demonstration. Our ultimate goal is to establish a design principle for a controller that achieves natural, human-like locomotion. We propose dynamical movement primitives as a central pattern generator (CPG) of a biped robot, an approach we have previously proposed for learning and encoding complex human movements. Demonstrated trajectories are learned through the movement primitives by locally weighted regression, and the frequency of the learned trajectories is adjusted automatically by a novel frequency-adaptation algorithm based on phase resetting and entrainment of oscillators. Numerical simulations demonstrate the effectiveness of the proposed locomotion controller.
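
The frequency-adaptation idea can be pictured with a simple phase oscillator whose frequency is nudged at each contact event so that the event lands at a desired phase. This is an illustrative toy under assumed dynamics, not the paper's actual algorithm; the gain, time step, and event model are invented for the example.

```python
import math

def adapt_frequency(omega, phase_at_event, phase_target, gain=0.5):
    # wrapped phase error via sine keeps the update smooth and bounded
    return omega + gain * math.sin(phase_target - phase_at_event)

dt, T = 0.001, 1.0              # true gait period T is unknown to the oscillator
omega = 2 * math.pi / 1.4       # oscillator starts at the wrong frequency
phase, t, next_event = 0.0, 0.0, T
for _ in range(20000):          # 20 s of simulated time
    phase = (phase + omega * dt) % (2 * math.pi)
    t += dt
    if t >= next_event:         # a foot-contact event is observed
        omega = adapt_frequency(omega, phase, 0.0)
        phase = 0.0             # phase resetting at contact
        next_event += T
print(round(omega, 3))          # entrains toward 2*pi/T
```

By the end of the run the oscillator frequency has entrained to the demonstrated event period, which is the behavior the adaptation algorithm is designed to produce.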


IEEE Transactions on Robotics | 2007

Full-Body Compliant Human–Humanoid Interaction: Balancing in the Presence of Unknown External Forces

Sang-Ho Hyon; Joshua G. Hale; Gordon Cheng

This paper proposes an effective framework for human-humanoid robot physical interaction. Its key component is a new control technique for full-body balancing in the presence of external forces, which is presented and then validated empirically. We have adopted an integrated system approach to develop humanoid robots. Herein, we describe the importance of replicating human-like capabilities and responses during human-robot interaction in this context. Our balancing controller provides gravity compensation, making the robot passive and thereby facilitating safe physical interactions. The method operates by setting an appropriate ground reaction force and transforming these forces into full-body joint torques. It handles an arbitrary number of force interaction points on the robot. It does not require force measurement at the contact points of interest. It requires neither inverse kinematics nor inverse dynamics. It can adapt to uneven ground surfaces. It operates as a force control process and can therefore accommodate simultaneous control processes using force-, velocity-, or position-based control. Forces are distributed over supporting contact points in an optimal manner. Joint redundancy is resolved by damping injection in the context of passivity. We present various force interaction experiments using our full-sized bipedal humanoid platform, including compliant balance even when affected by unknown external forces, which demonstrates the effectiveness of the method.
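
The core transformation described, from desired ground reaction forces to joint torques without inverse kinematics or dynamics, reduces to multiplying each contact force by the transpose of that contact point's Jacobian. A minimal planar sketch, assuming a toy 2-link leg; the link lengths, angles, and force values are invented, and the optimal force distribution and damping injection are omitted:

```python
import numpy as np

def contact_torques(jacobians, forces):
    """Map desired contact forces to joint torques: tau = sum_i J_i^T f_i.
    This is only the core mapping; the full method also distributes forces
    over contacts optimally and adds damping (not shown)."""
    return sum(J.T @ f for J, f in zip(jacobians, forces))

# Toy example: a planar 2-link leg with a foot contact at the end-effector.
l1 = l2 = 0.5
q = np.array([0.3, -0.6])
s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],     # 2R planar Jacobian
              [ l1 * c1 + l2 * c12,  l2 * c12]])
f_ground = np.array([0.0, 60.0])   # desired vertical ground reaction force
tau = contact_torques([J], [f_ground])
print(tau)                         # one torque per joint
```

Because the mapping is linear in the forces, additional contact points simply contribute further `J.T @ f` terms, which is how the method handles an arbitrary number of interaction points.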


Robotics and Autonomous Systems | 2004

Discovering Optimal Imitation Strategies

Aude Billard; Yann Epars; Sylvain Calinon; Stefan Schaal; Gordon Cheng

This paper develops a general policy for learning the relevant features of an imitation task. We restrict our study to imitation of manipulative tasks or gestures. The imitation process is modeled as a hierarchical optimization system, which minimizes the discrepancy between two multi-dimensional datasets. To classify across manipulation strategies, we apply a probabilistic analysis to data in Cartesian and joint spaces. We determine a general metric that optimizes the policy of task reproduction, following strategy determination. The model successfully discovers strategies in six different imitative tasks and controls task reproduction by a full body humanoid robot.
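
One way to picture a discrepancy-minimizing metric of this kind: variables that vary little across demonstrations are the ones the task constrains, so they receive the most weight when reproducing the task. The sketch below is a simplified inverse-variance weighting with invented data, not the authors' probabilistic model:

```python
import numpy as np

def strategy_relevance(demos):
    """Toy relevance measure: low variance across demonstrations means the
    task constrains that variable, so weight it heavily on reproduction."""
    return 1.0 / (np.var(demos, axis=0) + 1e-6)

def reproduction_cost(candidate, demos, weights):
    """Weighted squared discrepancy between a candidate reproduction and
    the mean demonstrated values."""
    mean = demos.mean(axis=0)
    return float(np.sum(weights * (candidate - mean) ** 2))

# Three demos: the first variable is tightly constrained, the second is not.
demos = np.array([[1.00, 0.2],
                  [1.01, 0.8],
                  [0.99, 0.5]])
w = strategy_relevance(demos)
print(w[0] > w[1])   # the constrained dimension dominates the metric
```

Minimizing such a cost naturally reproduces the constrained aspects of the demonstration faithfully while leaving the unconstrained ones free, which is the intuition behind selecting an imitation strategy.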


Advanced Robotics | 2007

CB: a humanoid research platform for exploring neuroscience

Gordon Cheng; Sang-Ho Hyon; Jun Morimoto; Ales Ude; Joshua G. Hale; Glenn Colvin; Wayco Scroggin; Stephen C. Jacobsen

This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real-world contexts, as humans do. In so doing, we focus on utilizing a system that is closer to humans in sensing, kinematic configuration and performance. We present the real-time network-based architecture for the control of all 50 d.o.f. The controller provides full position/velocity/force sensing and control at 1 kHz, allowing us flexibility in deriving various forms of control. A dynamic simulator is also presented; the simulator acts as a realistic testbed for our controllers and as a common interface to our humanoid robots. A contact model developed to allow better validation of our controllers prior to final testing on the physical robot is also presented. Three aspects of the system are highlighted in this paper: (i) physical power for walking; (ii) full-body compliant control for physical interactions; and (iii) perception and control, namely visual ocular-motor responses.


IEEE Sensors Journal | 2013

Directions Toward Effective Utilization of Tactile Skin: A Review

Ravinder Dahiya; Philipp Mittendorfer; Maurizio Valle; Gordon Cheng; Vladimir J. Lumelsky

A wide variety of tactile (touch) sensors exist today for robotics and related applications. They make use of various transduction methods, smart materials and engineered structures, complex electronics, and sophisticated data processing. While highly useful in themselves, effective utilization of tactile sensors in robotics applications has been slow to come and largely remains elusive today. This paper surveys the state of the art and the research issues in this area, with the emphasis on effective utilization of tactile sensors in robotic systems. One specific issue with the use of tactile sensing in robotics is that the sensors have to be spread along the robot body the way the human skin is, thus dictating varied 3-D spatio-temporal requirements, decentralized and distributed control, and handling of multiple simultaneous tactile contacts. Satisfying these requirements poses challenges to making the tactile sensor modality a reality. Overcoming these challenges requires dealing with issues such as sensor placement, electronic/mechanical hardware, methods to access and acquire signals, automatic calibration techniques, and algorithms to process and interpret sensing data in real time. We survey this field from a system perspective, recognizing the fact that system performance tends to depend on how its various components are put together. It is hoped that the survey will be of use to practitioners designing tactile sensing hardware (whole-body or large-patch sensor coverage), and to researchers working on cognitive robotics involving tactile sensing.


The International Journal of Robotics Research | 2008

Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot

Gen Endo; Jun Morimoto; Takamitsu Matsubara; Jun Nakanishi; Gordon Cheng

In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with CPG by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world. Numerical simulations and hardware experiments evaluate the walking velocity and stability. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.
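
The policy-gradient ingredient can be illustrated with a tiny REINFORCE-style loop: perturb a scalar feedback gain, score each rollout, and ascend the estimated gradient of expected return. The reward function here is an invented stand-in for walking stability, not the paper's CPG simulation, and the constants are arbitrary:

```python
import math
import random

def rollout_return(gain):
    """Toy surrogate for walking performance, best at gain = 1.5.
    (Stands in for running the CPG-driven walker in simulation.)"""
    return math.exp(-(gain - 1.5) ** 2)

random.seed(1)
theta, sigma, alpha, batch = 0.0, 0.3, 0.05, 50
for _ in range(200):
    grad = 0.0
    for _ in range(batch):                 # sample perturbed policies
        noise = random.gauss(0.0, sigma)
        R = rollout_return(theta + noise)
        grad += (noise / sigma ** 2) * R   # REINFORCE gradient estimate
    theta += alpha * grad / batch          # ascend expected return
print(abs(theta - 1.5) < 0.3)              # gain converges near the optimum
```

Averaging the gradient over a batch of rollouts keeps the estimate usable despite its variance; the paper's contribution of reducing the learning state space serves a similar purpose of making the gradient estimation tractable on hardware.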


IEEE Transactions on Robotics | 2011

Humanoid Multimodal Tactile-Sensing Modules

Philipp Mittendorfer; Gordon Cheng

In this paper, we present a new generation of active tactile modules (i.e., HEX-O-SKIN), which are developed in order to approach multimodal whole-body-touch sensation for humanoid robots. To better perform like humans, humanoid robots need a variety of sensory modalities in order to interact with their environment. This calls for certain robustness and fault tolerance as well as an intelligent solution to connect the different sensory modalities to the robot. Each HEX-O-SKIN is a small hexagonal printed circuit board equipped with multiple discrete sensors for temperature, acceleration, and proximity. With these sensors, we emulate the human sense of temperature, vibration, and light touch. Off-the-shelf sensors were utilized to speed up our development cycle; however, in general, we can easily extend our design with new discrete sensors, thereby making it flexible for further exploration. A local controller on each HEX-O-SKIN preprocesses the sensor signals and actively routes data through a network of modules toward the closest PC connection. Local processing decreases the necessary network and high-level processing bandwidth, while local analog-to-digital conversion and digital data transfers are less sensitive to electromagnetic interference. With an active data-routing scheme, it is also possible to reroute data around broken connections, yielding robustness throughout the global structure while minimizing wiring. To support our approach, multiple HEX-O-SKIN are embedded into a rapid-prototyped elastomer skin material and redundantly connected to neighboring modules by just four ports. The wiring complexity is shifted to each HEX-O-SKIN such that the power and data connection between two modules is reduced to four noncrossing wires. Thus, only a very simple robot-specific base frame is needed to support and wire the HEX-O-SKIN to a robot. The potential of our multimodal sensor modules is demonstrated experimentally on a robot platform.
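
The active rerouting idea, treating the skin as a module graph and finding a path to the closest PC connection that avoids broken links, can be sketched with a breadth-first search. The module graph and gateway below are invented for illustration; this is not the HEX-O-SKIN firmware's actual protocol.

```python
from collections import deque

def route_to_gateway(links, start, gateway):
    """Breadth-first route from a skin module to the PC gateway.
    `links` is an undirected adjacency dict; broken connections are
    simply absent from it."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == gateway:
            path = []                      # walk parents back to start
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in links.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                            # gateway unreachable

# A small patch of hexagonal modules; module 0 is wired to the PC.
links = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(route_to_gateway(links, 4, 0))       # -> [4, 3, 1, 0]
# Link 3-1 breaks: remove it and route around the failure.
links[3].remove(1)
links[1].remove(3)
print(route_to_gateway(links, 4, 0))       # -> [4, 3, 2, 0]
```

The redundant neighbor connections are what make the second call succeed: as long as the graph stays connected, a broken wire only changes the route, not the reachability.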


International Conference on Robotics and Automation | 2005

Experimental Studies of a Neural Oscillator for Biped Locomotion with QRIO

Gen Endo; Jun Nakanishi; Jun Morimoto; Gordon Cheng

Recently, there has been growing interest in biologically inspired biped locomotion control with Central Pattern Generators (CPGs). However, few experimental attempts on real hardware 3D humanoid robots have yet been made. Our goal in this paper is to present our achievement of 3D biped locomotion using a neural oscillator applied to a humanoid robot, QRIO. We employ a reduced number of neural oscillators as the CPG model, along with a task-space Cartesian coordinate system, and exploit the entrainment property to establish a stable walking gait. We verify robustness against lateral perturbation through numerical simulation of stepping motion in place in the lateral plane. We then implemented the controller on QRIO, which successfully coped with an unknown 3 mm bump by autonomously adjusting its stepping period. Sagittal motion produced by a neural oscillator is then introduced and overlapped with the lateral motion generator, realizing 3D biped locomotion on the QRIO humanoid robot.
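
The neural oscillators referenced here are commonly modeled as a Matsuoka two-neuron network: mutually inhibiting neurons with self-adaptation whose output alternates rhythmically. Below is a generic Euler-integrated sketch with textbook-style parameters; it is not QRIO's controller, and all constants are illustrative.

```python
def step(state, s=1.0, beta=2.5, w=2.5, tau=0.25, T=0.5, dt=0.005):
    """One Euler step of a two-neuron Matsuoka oscillator: mutual
    inhibition (w) plus adaptation (beta, T) produces alternation."""
    u1, v1, u2, v2 = state
    y1, y2 = max(u1, 0.0), max(u2, 0.0)         # rectified firing rates
    du1 = (-u1 - beta * v1 - w * y2 + s) / tau  # tonic input s drives both
    dv1 = (-v1 + y1) / T                        # fatigue builds while firing
    du2 = (-u2 - beta * v2 - w * y1 + s) / tau
    dv2 = (-v2 + y2) / T
    return (u1 + dt * du1, v1 + dt * dv1, u2 + dt * du2, v2 + dt * dv2)

state = (0.1, 0.0, 0.0, 0.0)      # small asymmetry kicks off the rhythm
outputs = []
for _ in range(6000):             # 30 s of simulated time
    state = step(state)
    outputs.append(max(state[0], 0.0) - max(state[2], 0.0))

# Count alternations of the output sign (with a dead band around zero).
sign, flips = 0, 0
for y in outputs:
    s_now = 1 if y > 0.05 else (-1 if y < -0.05 else 0)
    if s_now != 0 and s_now != sign:
        if sign != 0:
            flips += 1
        sign = s_now
print(flips > 4)                  # the output alternates: a rhythmic signal
```

The alternating output is what gets mapped to a stepping motion, and the entrainment property mentioned in the abstract refers to such an oscillator locking its rhythm to feedback from the robot's own dynamics.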


International Conference on Robotics and Automation | 2004

An empirical exploration of a neural oscillator for biped locomotion control

Gen Endo; Jun Morimoto; Jun Nakanishi; Gordon Cheng

Humanoid research has made remarkable progress during the past 10 years. However, most humanoids currently use the target ZMP (zero moment point) control algorithm for bipedal locomotion, which requires precise modeling and actuation with high control gains. In contrast, humans do not rely on such precise modeling and actuation. Our aim is to examine biologically inspired algorithms for bipedal locomotion that resemble human-like locomotion. This paper describes an empirical study of a neural oscillator for the control of biped locomotion. We propose a new neural oscillator arrangement applied to a compass-like biped robot. Dynamic simulations and experiments with a real biped robot were carried out, and the controller performed steady walking for over 50 steps. Gait variations resulting in improved energy efficiency were made possible through the adjustment of only a single neural activity parameter. The adaptability and robustness of our approach are shown by allowing the robot to walk over terrain with surfaces of different frictional properties. Initial results suggesting an optimal amplitude for dealing with perturbation are also presented.


Robotics and Autonomous Systems | 2004

Learning tasks from observation and practice

Darrin C. Bentivegna; Christopher G. Atkeson; Gordon Cheng

This paper presents a framework that gives robots the ability to initially learn a task behavior from observing others. The framework includes a method for the robots to increase their performance while operating in the task environment. We demonstrate this approach on air hockey and the marble maze task. Our robots initially learn to perform the tasks using learning from observation, and then increase their performance through practice.
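
The two-stage idea, learning from observation first and then improving through practice, can be caricatured in a few lines: store observed state-action pairs, act like the nearest demonstrated state, and let practice refine the stored actions from an error signal. This is a toy nearest-neighbor sketch with an invented 1-D task, not the paper's primitive-based framework:

```python
class ObservationPolicy:
    """Minimal learning-from-observation sketch: imitate the nearest
    demonstrated state, then refine stored actions from practice."""
    def __init__(self):
        self.memory = []                    # observed [state, action] pairs

    def observe(self, state, action):
        self.memory.append([state, action])

    def act(self, state):
        i = min(range(len(self.memory)),
                key=lambda j: abs(self.memory[j][0] - state))
        return i, self.memory[i][1]         # index and imitated action

    def practice(self, index, error, rate=0.5):
        self.memory[index][1] += rate * error   # refine from experience

# Demonstrations of a task whose true optimal action is 2 * state.
policy = ObservationPolicy()
for s in [0.0, 1.0, 2.0]:
    policy.observe(s, 2.0 * s + 0.4)        # slightly wrong demonstrations

# Practice phase: nudge stored actions toward better outcomes.
for _ in range(30):
    for s in [0.0, 1.0, 2.0]:
        i, a = policy.act(s)
        policy.practice(i, 2.0 * s - a)     # error signal from the task
print(round(policy.act(1.0)[1], 2))         # -> 2.0
```

Observation gives the policy a good starting point immediately, and practice removes the residual demonstration error, mirroring the division of labor the framework describes.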

Collaboration


Dive into Gordon Cheng's collaborations.

Top Co-Authors

Ales Ude
University of Southern California

Mitsuo Kawato
Nara Institute of Science and Technology