Darrin C. Bentivegna
Georgia Institute of Technology
Publications
Featured research published by Darrin C. Bentivegna.
International Conference on Robotics and Automation | 2006
Jun Morimoto; Gen Endo; Jun Nakanishi; Sang-Ho Hyon; Gordon Cheng; Darrin C. Bentivegna; Christopher G. Atkeson
We show that a humanoid robot can step and walk using simple sinusoidal desired joint trajectories with their phase adjusted by a coupled oscillator model. We use the center of pressure location and velocity to detect the phase of the lateral robot dynamics. This phase information is used to modulate the desired joint trajectories. We applied the proposed control approach to our newly developed human-sized humanoid robot and a small humanoid robot developed by Sony, enabling them to generate successful stepping and walking patterns.
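The abstract does not give the controller equations, so the following is only a minimal sketch of the phase-modulation idea: an oscillator whose phase is entrained to the phase of the lateral dynamics estimated from the center-of-pressure state, and which drives sinusoidal desired joint angles. The function names, gains, and sine-coupling law are assumptions, not the paper's controller.

```python
import numpy as np

# Minimal sketch (assumed gains and coupling law, not the paper's controller).

def cop_phase(cop_pos, cop_vel, omega):
    """Estimate the phase of the lateral rocking motion from the center of
    pressure position and velocity, assuming roughly sinusoidal motion."""
    return np.arctan2(-cop_vel / omega, cop_pos)

def step_oscillator(phi, cop_pos, cop_vel, omega=2.0 * np.pi,
                    k_couple=5.0, dt=0.005):
    """Advance the controller phase while pulling it toward the measured
    phase of the robot's lateral dynamics."""
    phi_robot = cop_phase(cop_pos, cop_vel, omega)
    return phi + (omega + k_couple * np.sin(phi_robot - phi)) * dt

def desired_joints(phi, amplitudes, offsets, phase_shifts):
    """Sinusoidal desired joint angles modulated by the oscillator phase."""
    return offsets + amplitudes * np.sin(phi + phase_shifts)
```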
International Journal of Humanoid Robotics | 2004
Darrin C. Bentivegna; Christopher G. Atkeson; Ales Ude; Gordon Cheng
We present a method for humanoid robots to quickly learn new dynamic tasks from observing others and from practice. Ways in which the robot can adapt to initial and changing conditions are described. Agents are given domain knowledge in the form of task primitives. A key element of our approach is to break learning problems up into as many simple learning problems as possible. We present a case study of a humanoid robot learning to play air hockey.
IEEE-RAS International Conference on Humanoid Robots | 2007
Darrin C. Bentivegna; Christopher G. Atkeson; Jung Yup Kim
This paper presents an analysis of a hydraulic joint on a humanoid robot. Various controllers have been designed that allow the limb to have a range of characteristics such as being stiff or compliant.
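The abstract does not list the controllers themselves; as one hedged illustration of how a joint can be made stiff or compliant purely by choice of gains, a simple joint-space impedance law is sketched below. The gains and feedforward term are illustrative assumptions, not the paper's hydraulic controller design.

```python
def impedance_torque(q, qd, q_des, qd_des, k_p, k_d, tau_ff=0.0):
    """Joint-space impedance law: large k_p/k_d yields a stiff joint,
    small gains yield a compliant one. Values are illustrative only."""
    return tau_ff + k_p * (q_des - q) + k_d * (qd_des - qd)

# Same 0.1 rad tracking error, two gain settings: stiff vs. compliant.
stiff_tau = impedance_torque(0.1, 0.0, 0.0, 0.0, k_p=500.0, k_d=20.0)
soft_tau = impedance_torque(0.1, 0.0, 0.0, 0.0, k_p=20.0, k_d=2.0)
```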
Intelligent Robots and Systems | 2002
Darrin C. Bentivegna; Ales Ude; Christopher G. Atkeson; Gordon Cheng
This paper describes humanoid robot learning from observation and game playing using information provided by a real-time PC-based vision system. To cope with extremely fast motions that arise in the environment, a visual system capable of perceiving the motion of several objects at 60 fields per second was developed. We have designed a suitable error recovery scheme for our vision system to ensure successful game playing over longer periods of time. To increase the learning rate of the robot it is given domain knowledge in the form of primitives. The robot learns how to perform primitives from data collected while observing a human. The robot control system and primitive use strategy are also explained.
Robot Soccer World Cup | 2002
Darrin C. Bentivegna; Christopher G. Atkeson
This paper describes a method to learn task primitives from observation. A framework has been developed that allows an agent to use observed data to initially learn a predefined set of task primitives and the conditions under which they are used. A method is also included for the agent to increase its performance while operating in the environment. Data that is collected while a human performs a task is parsed into small parts of the task called primitives. Modules are created for each primitive that encode the movements required during the performance of the primitive, and when and where the primitives are performed.
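As a hypothetical sketch of the kind of per-primitive module the abstract describes (class and field names are assumptions, not the paper's framework), each module simply accumulates observed examples of the movement and of the contexts in which the primitive was used:

```python
from dataclasses import dataclass, field

@dataclass
class PrimitiveModule:
    """Hypothetical per-primitive store: observed movements plus the
    states (when/where) in which the primitive was performed."""
    name: str
    movement_examples: list = field(default_factory=list)
    context_examples: list = field(default_factory=list)

    def add_observation(self, state, trajectory):
        self.context_examples.append(state)
        self.movement_examples.append(trajectory)

def parse_demonstration(demonstration, segmenter, modules):
    """Split a recorded human demonstration into labeled primitive segments
    (segmenter is assumed to yield (label, state, trajectory) tuples) and
    file each segment into the matching module."""
    for label, state, trajectory in segmenter(demonstration):
        modules[label].add_observation(state, trajectory)
```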
ISRR | 2005
Darrin C. Bentivegna; Gordon Cheng; Christopher G. Atkeson
We describe a memory-based approach to learning how to select and provide sub-goals for behavioral primitives, given an existing library of primitives. We demonstrate both learning from observation and learning from practice on a marble maze task, Labyrinth.
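The abstract does not spell out the memory-based learner; below is a hedged sketch of one common memory-based scheme (a Gaussian-kernel nearest-neighbor lookup with an assumed bandwidth), in which observed (state, primitive, sub-goal) triples are stored and queried at run time. Names and constants are assumptions, not the authors' method.

```python
import numpy as np

class MemoryBasedSelector:
    """Illustrative sketch: store observed (state, primitive, sub-goal)
    triples and select by kernel-weighted lookup over nearby states."""

    def __init__(self, bandwidth=0.1):
        self.states, self.primitives, self.subgoals = [], [], []
        self.bandwidth = bandwidth

    def add(self, state, primitive, subgoal):
        self.states.append(np.asarray(state, dtype=float))
        self.primitives.append(primitive)
        self.subgoals.append(np.asarray(subgoal, dtype=float))

    def select(self, state):
        state = np.asarray(state, dtype=float)
        d2 = np.array([np.sum((s - state) ** 2) for s in self.states])
        w = np.exp(-d2 / (2.0 * self.bandwidth ** 2))
        # Pick the primitive with the largest total kernel weight.
        totals = {}
        for p, wi in zip(self.primitives, w):
            totals[p] = totals.get(p, 0.0) + wi
        best = max(totals, key=totals.get)
        # Kernel-weighted average of the stored sub-goals for that primitive.
        mask = np.array([p == best for p in self.primitives])
        sg = np.array([g for g, m in zip(self.subgoals, mask) if m])
        wm = w[mask]
        return best, (wm[:, None] * sg).sum(axis=0) / wm.sum()
```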
Intelligent Robots and Systems | 2003
Darrin C. Bentivegna; Christopher G. Atkeson; Gordon Cheng
This paper focuses on learning to select behavioral primitives and generate sub-goals from practicing a task. We present a novel algorithm that combines Q-learning and a locally weighted learning method to improve primitive selection and sub-goal generation. We demonstrate this approach applied to the tilt maze task. Our robot initially learns to perform this task using learning from observation, and then learns from practice.
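The paper's algorithm is not reproduced here; the sketch below only illustrates one way to combine the two ingredients the abstract names, using a kernel-weighted (locally weighted) estimate of Q(state, primitive) over stored points and a one-step Q-learning backup to generate new stored values. The class name, kernel, and all constants are assumptions.

```python
import numpy as np

class KernelQ:
    """Hedged sketch: locally weighted Q-value estimates per primitive,
    updated with a standard one-step Q-learning target."""

    def __init__(self, primitives, bandwidth=0.2, alpha=0.5, gamma=0.95):
        self.memory = {p: ([], []) for p in primitives}  # (states, q-values)
        self.primitives = primitives
        self.h, self.alpha, self.gamma = bandwidth, alpha, gamma

    def q(self, state, primitive):
        states, values = self.memory[primitive]
        if not states:
            return 0.0
        state = np.asarray(state, dtype=float)
        d2 = np.array([np.sum((s - state) ** 2) for s in states])
        w = np.exp(-d2 / (2.0 * self.h ** 2))
        return float(np.dot(w, values) / (w.sum() + 1e-9))

    def best(self, state):
        """Greedy primitive selection from the local Q estimates."""
        return max(self.primitives, key=lambda p: self.q(state, p))

    def update(self, state, primitive, reward, next_state):
        """Store a new local data point built from a Q-learning backup."""
        target = reward + self.gamma * max(self.q(next_state, p)
                                           for p in self.primitives)
        old = self.q(state, primitive)
        states, values = self.memory[primitive]
        states.append(np.asarray(state, dtype=float))
        values.append(old + self.alpha * (target - old))
```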
Intelligent Systems & Advanced Manufacturing | 1998
Darrin C. Bentivegna; Khaled S. Ali; Ronald C. Arkin; Tucker R. Balch
Autonomous and semi-autonomous full-sized ground vehicles are becoming increasingly important, particularly in military applications. Here we describe the instrumentation of one such vehicle, a 4-wheel-drive Hummer, for autonomous robotic operation. Actuators for steering, brake, and throttle have been implemented on a commercially available Hummer. Control is provided by on-board and remote computation. On-board computation includes a PC-based control computer coupled to feedback sensors for the steering wheel, brake, and forward speed, and a Unix workstation for high-level control. A radio link connects the on-board computers to an operator's remote workstation running the Georgia Tech MissionLab system. The paper describes the design and implementation of this integrated hardware/software system that translates a remote human operator's commands into directed motion of the vehicle. Telerobotic control of the Hummer has been demonstrated in outdoor experiments.
Archive | 2000
Darrin C. Bentivegna; Christopher G. Atkeson
Archive | 2002
Darrin C. Bentivegna; Christopher G. Atkeson