Publications


Featured research published by Scott Kuindersma.


Autonomous Robots | 2016

Optimization-based locomotion planning, estimation, and control design for the Atlas humanoid robot

Scott Kuindersma; Robin Deits; Maurice Fallon; Andrés Valenzuela; Hongkai Dai; Frank Noble Permenter; Twan Koolen; Pat Marion; Russ Tedrake

This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments. To make challenging locomotion tasks tractable, we describe several novel applications of convex, mixed-integer, and sparse nonlinear optimization to problems ranging from footstep placement to whole-body planning and control. We also present a state estimator formulation that, when combined with our walking controller, permits highly precise execution of extended walking plans over non-flat terrain. We describe our complete system integration and experiments carried out on Atlas, a full-size hydraulic humanoid robot built by Boston Dynamics, Inc.
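
The footstep-placement component can be illustrated with a much smaller convex program than the mixed-integer formulation used on Atlas. The sketch below (Python with NumPy/SciPy) plans a short sequence of 2D footsteps toward a goal subject to a maximum step length; the horizon, goal, step limit, and weights are illustrative assumptions, and there is no safe-region assignment or obstacle avoidance as in the paper.

    # Minimal footstep-placement sketch: choose N 2D footsteps that reach a goal
    # while penalizing long steps. A simplified convex stand-in for the
    # mixed-integer footstep planner described in the paper (no obstacle regions).
    import numpy as np
    from scipy.optimize import minimize

    N = 8                          # number of footsteps (assumed)
    start = np.array([0.0, 0.0])   # initial foot position (assumed)
    goal = np.array([2.0, 0.5])    # desired final foot position (assumed)
    max_step = 0.4                 # maximum step length (assumed)

    def cost(x):
        steps = x.reshape(N, 2)
        prev = np.vstack([start, steps[:-1]])
        step_len2 = np.sum((steps - prev) ** 2, axis=1)
        return np.sum(step_len2) + 10.0 * np.sum((steps[-1] - goal) ** 2)

    def step_length_constraints(x):
        steps = x.reshape(N, 2)
        prev = np.vstack([start, steps[:-1]])
        # feasible when >= 0: max_step^2 - ||step||^2
        return max_step ** 2 - np.sum((steps - prev) ** 2, axis=1)

    x0 = np.linspace(start, goal, N + 1)[1:].ravel()   # straight-line initial guess
    res = minimize(cost, x0,
                   constraints=[{"type": "ineq", "fun": step_length_constraints}])
    print(res.x.reshape(N, 2))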


The International Journal of Robotics Research | 2012

Robot learning from demonstration by constructing skill trees

George Konidaris; Scott Kuindersma; Roderic A. Grupen; Andrew G. Barto

We describe CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We show that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.
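
The segmentation step can be illustrated with a deliberately simplified offline sketch: grow a segment until a linear fit to its samples stops explaining the data, then start a new one. CST itself performs online changepoint detection over a library of abstractions, so the code below is only a stand-in for the idea; the threshold and example trajectory are assumptions.

    # Simplified trajectory segmentation: start a new segment whenever a linear
    # fit to the current segment's samples stops explaining the data. CST's
    # online changepoint detection over an abstraction library is far more
    # sophisticated; this sketch only illustrates the segmentation step.
    import numpy as np

    def segment(values, threshold=0.05):
        """Return half-open index ranges [(start, end), ...] of near-linear pieces."""
        segments, start = [], 0
        for end in range(start + 2, len(values) + 1):
            t = np.arange(start, end)
            coeffs = np.polyfit(t, values[start:end], 1)          # fit a line
            residual = np.max(np.abs(np.polyval(coeffs, t) - values[start:end]))
            if residual > threshold:                              # fit broke: close segment
                segments.append((start, end - 1))
                start = end - 1
        segments.append((start, len(values)))
        return segments

    # Example: a trajectory with two distinct linear phases (assumed data).
    traj = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.2, 30)])
    print(segment(traj))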


International Conference on Robotics and Automation | 2014

An efficiently solvable quadratic program for stabilizing dynamic locomotion

Scott Kuindersma; Frank Noble Permenter; Russ Tedrake

We describe a whole-body dynamic walking controller implemented as a convex quadratic program. The controller solves an optimal control problem using an approximate value function derived from a simple walking model while respecting the dynamic, input, and contact constraints of the full robot dynamics. By exploiting sparsity and temporal structure in the optimization with a custom active-set algorithm, we surpass the performance of the best available off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We describe applications to balancing and walking tasks using the simulated Atlas robot in the DARPA Virtual Robotics Challenge.
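
The structure of such a controller can be sketched on a one-degree-of-freedom toy system: choose the input that minimizes an approximate cost-to-go of the next state plus an effort penalty, subject to input limits. The real controller optimizes whole-body accelerations and contact forces at 1 kHz with a custom active-set solver; the dynamics, weights, and limits below are illustrative assumptions.

    # Toy version of a value-function-based QP controller: pick the input that
    # minimizes an approximate cost-to-go of the next state plus input effort,
    # subject to input limits. This 1-DOF sketch only mirrors the structure of
    # the whole-body QP described in the paper.
    import numpy as np
    from scipy.optimize import minimize

    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])    # double-integrator dynamics
    B = np.array([0.5 * dt**2, dt])
    S = np.diag([100.0, 10.0])               # approximate value function x^T S x
    R = 0.01                                 # input effort weight
    u_max = 50.0                             # actuator limit (assumed)

    def qp_control(x):
        def cost(u):
            x_next = A @ x + B * u[0]
            return x_next @ S @ x_next + R * u[0] ** 2
        res = minimize(cost, x0=[0.0], bounds=[(-u_max, u_max)])
        return res.x[0]

    # One step of closed-loop simulation from a perturbed state (assumed).
    x = np.array([0.3, -0.5])
    u = qp_control(x)
    print("u =", u, "next state =", A @ x + B * u)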


Journal of Field Robotics | 2015

An Architecture for Online Affordance-based Perception and Whole-body Planning

Maurice Fallon; Scott Kuindersma; Sisir Karumanchi; Matthew E. Antone; Toby Schneider; Hongkai Dai; Claudia Pérez D'Arpino; Robin Deits; Matt DiCicco; Dehann Fourie; Twan Koolen; Pat Marion; Michael Posa; Andrés Valenzuela; Kuan-Ting Yu; Julie A. Shah; Karl Iagnemma; Russ Tedrake; Seth J. Teller

The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.


International Conference on Robotics and Automation | 2016

Optimization and stabilization of trajectories for constrained dynamical systems

Michael Posa; Scott Kuindersma; Russ Tedrake

Contact constraints, such as those between a foot and the ground or a hand and an object, are inherent in many robotic tasks. These constraints define a manifold of feasible states; while well understood mathematically, they pose numerical challenges to many algorithms for planning and controlling whole-body dynamic motions. In this paper, we present an approach to the synthesis and stabilization of complex trajectories for both fully-actuated and underactuated robots subject to contact constraints. We introduce a trajectory optimization algorithm (DIRCON) that extends the direct collocation method, naturally incorporating manifold constraints to produce a nominal trajectory with third-order integration accuracy, a critical feature for achieving reliable tracking control. We adapt the classical time-varying linear quadratic regulator to produce a local cost-to-go in the manifold tangent plane. Finally, we descend the cost-to-go using a quadratic program that incorporates unilateral friction and torque constraints. This approach is demonstrated on three complex walking and climbing locomotion examples in simulation.
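
A minimal trapezoidal direct collocation transcription, the method DIRCON extends, can be sketched for a torque-limited pendulum swing-up. The sketch below ignores the contact-manifold constraints and higher-order integration scheme that distinguish DIRCON; the horizon, limits, and cost are assumptions.

    # Minimal trapezoidal direct collocation for a torque-limited pendulum
    # swing-up. DIRCON adds contact-manifold constraints and third-order
    # accuracy; this sketch only shows the basic transcription it builds on.
    import numpy as np
    from scipy.optimize import minimize

    N, T = 25, 3.0                     # knot points and horizon (assumed)
    h = T / (N - 1)
    g, l, m, u_max = 9.81, 0.5, 1.0, 3.0

    def dynamics(x, u):
        theta, thetadot = x
        return np.array([thetadot, (u - m * g * l * np.sin(theta)) / (m * l**2)])

    def unpack(z):
        return z[: 2 * N].reshape(N, 2), z[2 * N :]

    def objective(z):
        _, u = unpack(z)
        return h * np.sum(u**2)        # minimize control effort

    def collocation_defects(z):
        x, u = unpack(z)
        d = []
        for k in range(N - 1):
            f0 = dynamics(x[k], u[k])
            f1 = dynamics(x[k + 1], u[k + 1])
            d.append(x[k + 1] - x[k] - 0.5 * h * (f0 + f1))   # trapezoidal rule
        return np.concatenate(d)

    def boundary(z):
        x, _ = unpack(z)
        return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [np.pi, 0.0]])

    z0 = np.concatenate([np.linspace([0, 0], [np.pi, 0], N).ravel(), np.zeros(N)])
    bounds = [(None, None)] * (2 * N) + [(-u_max, u_max)] * N
    res = minimize(objective, z0, bounds=bounds, method="SLSQP",
                   constraints=[{"type": "eq", "fun": collocation_defects},
                                {"type": "eq", "fun": boundary}],
                   options={"maxiter": 500})
    print("converged:", res.success)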


Journal of Neurophysiology | 2008

Recovery From Monocular Deprivation Using Binocular Deprivation

Brian S. Blais; Mikhail Y. Frenkel; Scott Kuindersma; Rahmat Muhammad; Harel Z. Shouval; Leon N. Cooper; Mark F. Bear

Ocular dominance (OD) plasticity is a robust paradigm for examining the functional consequences of synaptic plasticity. Previous experimental and theoretical results have shown that OD plasticity can be accounted for by known synaptic plasticity mechanisms, using the assumption that deprivation by lid suture eliminates spatial structure in the deprived channel. Here we show that in the mouse, recovery from monocular lid suture can be obtained by subsequent binocular lid suture but not by dark rearing. This poses a significant challenge to previous theoretical results. We therefore performed simulations with a natural input environment appropriate for mouse visual cortex. In contrast to previous work, we assume that lid suture causes degradation but not elimination of spatial structure, whereas dark rearing produces elimination of spatial structure. We present experimental evidence that supports this assumption, measuring responses through sutured lids in the mouse. The change in assumptions about the input environment is sufficient to account for new experimental observations, while still accounting for previous experimental results.
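
One of the "known synaptic plasticity mechanisms" this line of work builds on is the rate-based BCM rule with a sliding modification threshold. The sketch below shows that rule on a single linear neuron driven by structured versus weakly structured inputs; it does not reproduce the paper's simulation setup, input statistics, or parameters.

    # Sketch of a rate-based BCM synaptic plasticity rule, one of the plasticity
    # mechanisms this line of work builds on (the paper's actual simulation
    # environment and parameters are not reproduced here).
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, eta, tau_theta = 20, 1e-4, 100.0

    def run_bcm(inputs, steps=20000):
        w = rng.normal(0.1, 0.01, n_inputs)      # synaptic weights
        theta = 1.0                              # sliding modification threshold
        for _ in range(steps):
            x = inputs[rng.integers(len(inputs))]
            y = float(w @ x)                     # postsynaptic rate (linear neuron)
            w += eta * y * (y - theta) * x       # BCM weight update
            theta += (y**2 - theta) / tau_theta  # threshold tracks E[y^2]
            w = np.clip(w, 0.0, None)            # keep weights non-negative
        return w

    # Structured inputs (a few repeated patterns) vs. low-amplitude noise (assumed).
    patterns = [rng.random(n_inputs) for _ in range(4)]
    noise = [rng.random(n_inputs) * 0.1 for _ in range(50)]
    print("weight spread with structure:", np.std(run_bcm(patterns)))
    print("weight spread with noise:    ", np.std(run_bcm(noise)))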


The International Journal of Robotics Research | 2013

Variable risk control via stochastic optimization

Scott Kuindersma; Roderic A. Grupen; Andrew G. Barto

We present new global and local policy search algorithms suitable for problems with policy-dependent cost variance (or risk), a property present in many robot control tasks. These algorithms exploit new techniques in non-parametric heteroscedastic regression to directly model the policy-dependent distribution of cost. For local search, the learned cost model can be used as a critic for performing risk-sensitive gradient descent. Alternatively, decision-theoretic criteria can be applied to globally select policies to balance exploration and exploitation in a principled way, or to perform greedy minimization with respect to various risk-sensitive criteria. This separation of learning and policy selection permits variable risk control, where risk-sensitivity can be flexibly adjusted and appropriate policies can be selected at runtime without relearning. We describe experiments in dynamic stabilization and manipulation with a mobile manipulator that demonstrate learning of flexible, risk-sensitive policies in very few trials.
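
The runtime-adjustable risk idea can be illustrated directly: given learned models of the cost mean and standard deviation as functions of the policy parameter, select the policy minimizing mu + kappa * sigma, changing kappa without relearning. The quadratic mean and absolute-value noise models below are stand-ins for the paper's heteroscedastic regression models.

    # Runtime-adjustable risk sensitivity: with a learned mean cost mu(theta)
    # and cost standard deviation sigma(theta), different risk levels kappa
    # select different policies without any relearning.
    import numpy as np

    mu = lambda th: (th - 1.0) ** 2 + 0.5          # learned mean cost (assumed)
    sigma = lambda th: 0.1 + 0.4 * np.abs(th)      # learned cost std (assumed)

    def risk_sensitive_best(kappa, candidates=np.linspace(-2, 2, 401)):
        scores = mu(candidates) + kappa * sigma(candidates)
        return candidates[np.argmin(scores)]

    for kappa in (0.0, 1.0, 5.0):                  # risk-neutral -> risk-averse
        print(f"kappa={kappa}: theta* = {risk_sensitive_best(kappa):+.3f}")

As kappa grows, the selected parameter moves away from the minimum of the mean cost toward the region with lower cost variance, which is the behavior the paper exploits for variable risk control.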


Springer Handbook of Robotics, 2nd Ed. | 2016

Modeling and Control of Legged Robots

Pierre-Brice Wieber; Russ Tedrake; Scott Kuindersma

The promise of legged robots over wheeled robots is to provide improved mobility over rough terrain. Unfortunately, this promise comes at the cost of a significant increase in complexity. We now have a good understanding of how to make legged robots walk and run dynamically, but further research is still necessary to make them walk and run efficiently in terms of energy, speed, reactivity, versatility, and robustness. In this chapter, we discuss how legged robots are usually modeled, how their stability analysis is approached, how dynamic motions are generated and controlled, and we summarize current trends in trying to improve their performance. The main problem is avoiding falls, which can prove difficult since legged robots have to rely entirely on the available contact forces to do so. The temporality of leg motions appears to be a key aspect in this respect, as current control solutions either continuously anticipate future motion (using some form of model predictive control) or focus more specifically on limit cycles and orbital stability.
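
The "continuous anticipation of future motion" mentioned above is typically realized as model predictive control on a reduced model such as the linear inverted pendulum. The toy sketch below optimizes a short preview of ZMP positions to drive the center of mass toward a reference while keeping the ZMP inside assumed support bounds; the model, horizon, and bounds are illustrative and not taken from the chapter.

    # Toy linear-inverted-pendulum MPC: at each control step, optimize the next
    # H ZMP positions so the center of mass is driven toward a reference while
    # the ZMP stays inside the current support foot.
    import numpy as np
    from scipy.optimize import minimize

    dt, omega = 0.05, np.sqrt(9.81 / 0.9)     # time step, LIP natural frequency
    H = 20                                    # preview horizon (assumed)
    zmp_min, zmp_max = -0.05, 0.10            # support-polygon bounds (assumed)
    x_ref = 0.08                              # desired CoM position (assumed)

    def rollout(x, p_seq):
        """Euler-integrate the LIP model: xddot = omega^2 (x - p)."""
        traj = []
        pos, vel = x
        for p in p_seq:
            acc = omega**2 * (pos - p)
            vel += acc * dt
            pos += vel * dt
            traj.append(pos)
        return np.array(traj)

    def mpc(x):
        def cost(p_seq):
            com = rollout(x, p_seq)
            return np.sum((com - x_ref) ** 2) + 0.1 * np.sum(np.diff(p_seq) ** 2)
        res = minimize(cost, np.zeros(H), bounds=[(zmp_min, zmp_max)] * H)
        return res.x[0]                       # apply only the first ZMP command

    print("first ZMP command:", mpc(np.array([0.0, 0.0])))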


IEEE-RAS International Conference on Humanoid Robots | 2011

Learning dynamic arm motions for postural recovery

Scott Kuindersma; Roderic A. Grupen; Andrew G. Barto

The biomechanics community has recently made progress toward understanding the role of rapid arm movements in human stability recovery. However, comparatively little work has been done exploring this type of control in humanoid robots. We provide a summary of recent insights into the functional contributions of arm recovery motions in humans and experimentally demonstrate the advantages of this behavior on a dynamically stable mobile manipulator. Using Bayesian optimization, the robot efficiently discovers policies that reduce total energy expenditure and recovery footprint, and increase its ability to stabilize after large impacts.
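
Bayesian optimization of a scalar policy parameter can be sketched with a Gaussian-process surrogate and the expected-improvement acquisition, the general technique applied here. The simulated trial cost, kernel, and hyperparameters below are assumptions and do not correspond to the robot experiments.

    # Sketch of Bayesian optimization with a GP surrogate and expected
    # improvement, minimizing a noisy "trial cost" over one policy parameter.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    cost = lambda th: (th - 0.6) ** 2 + 0.05 * rng.normal()   # stand-in for a trial

    def rbf(a, b, ell=0.2):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

    def gp_posterior(X, y, Xs, noise=0.05**2):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
        mu = Ks.T @ np.linalg.solve(K, y)
        var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, std, best):
        z = (best - mu) / std
        return (best - mu) * norm.cdf(z) + std * norm.pdf(z)

    X = list(rng.uniform(0, 1, 3))            # a few initial trials
    y = [cost(th) for th in X]
    grid = np.linspace(0, 1, 200)
    for _ in range(10):                       # ten more trials
        mu, std = gp_posterior(np.array(X), np.array(y), grid)
        th_next = grid[np.argmax(expected_improvement(mu, std, min(y)))]
        X.append(th_next)
        y.append(cost(th_next))
    print("best parameter found:", X[int(np.argmin(y))])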


Robotics: Science and Systems | 2012

Variational Bayesian Optimization for Runtime Risk-Sensitive Control

Scott Kuindersma; Roderic A. Grupen; Andrew G. Barto

We present a new Bayesian policy search algorithm suitable for problems with policy-dependent cost variance, a property present in many robot control tasks. We extend recent work on variational heteroscedastic Gaussian processes to the optimization case to achieve efficient minimization of very noisy cost signals. In contrast to most policy search algorithms, our method explicitly models the cost variance in regions of low expected cost and permits runtime adjustment of risk sensitivity without relearning. Our experiments with artificial systems and a real mobile manipulator demonstrate that flexible risk-sensitive policies can be learned in very few trials.
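
The heteroscedastic modeling idea can be sketched by fitting one model to the cost mean and a second model to the input-dependent noise level (via log squared residuals), then ranking policies with mu + kappa * sigma for any risk level kappa chosen at runtime. Simple polynomial fits stand in for the variational heteroscedastic Gaussian process used in the paper, and the data are simulated.

    # Heteroscedastic sketch: model the cost mean and the input-dependent noise
    # separately, then adjust risk sensitivity at runtime without refitting.
    import numpy as np

    rng = np.random.default_rng(2)
    theta = rng.uniform(-1, 1, 200)
    noise_std = 0.05 + 0.3 * np.abs(theta)                   # true input-dependent noise
    costs = (theta - 0.5) ** 2 + noise_std * rng.normal(size=200)

    mean_fit = np.polyfit(theta, costs, 2)                   # model of E[cost | theta]
    resid = costs - np.polyval(mean_fit, theta)
    lognoise_fit = np.polyfit(theta, np.log(resid**2 + 1e-9), 2)   # model of log variance

    def predict(th):
        mu = np.polyval(mean_fit, th)
        sigma = np.sqrt(np.exp(np.polyval(lognoise_fit, th)))
        return mu, sigma

    grid = np.linspace(-1, 1, 401)
    mu, sigma = predict(grid)
    for kappa in (0.0, 2.0):                                 # adjust risk at runtime
        print(f"kappa={kappa}: theta* = {grid[np.argmin(mu + kappa * sigma)]:+.3f}")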

Collaborations


Scott Kuindersma's top co-authors and their affiliations.

Roderic A. Grupen | University of Massachusetts Amherst

Andrew G. Barto | University of Massachusetts Amherst

Russ Tedrake | Massachusetts Institute of Technology

Robin Deits | Massachusetts Institute of Technology

Andrés Valenzuela | Massachusetts Institute of Technology

Hongkai Dai | Massachusetts Institute of Technology

Maurice Fallon | Massachusetts Institute of Technology

Brian S. Blais | University of Massachusetts Amherst