Publication


Featured research published by Mark Johnson Cutler.


AIAA Guidance, Navigation, and Control Conference | 2011

Comparison of Fixed and Variable Pitch Actuators for Agile Quadrotors

Mark Johnson Cutler; Nazim Kemal Ure; Bernard J. Michini; Jonathan P. How

This paper presents the design, analysis, and experimental testing of a variable-pitch quadrotor. A custom in-lab-built quadrotor with on-board attitude stabilization is developed and tested. An analysis of the dynamic differences in thrust output between a fixed-pitch and a variable-pitch propeller is given and validated with simulation and experimental results. It is shown that variable-pitch actuation has significant advantages over the conventional fixed-pitch configuration, including increased thrust rate of change, decreased control saturation, and the ability to quickly and efficiently reverse thrust. These advantages result in improved quadrotor tracking of linear and angular acceleration command inputs in both simulation and hardware testing. The benefits should enable more aggressive and aerobatic flying with the variable-pitch quadrotor than with standard fixed-pitch actuation, while retaining much of the mechanical simplicity and robustness of the fixed-pitch quadrotor.
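
A rough way to see why variable pitch changes thrust faster: with fixed pitch, thrust scales with rotor speed squared, and rotor speed is limited by motor and rotor inertia; with variable pitch, thrust changes with blade pitch at roughly constant rotor speed, limited only by a fast servo. The sketch below is a minimal first-order-lag comparison in Python; the time constants are hypothetical placeholders, not measurements from the paper.

import numpy as np

# Hypothetical first-order lags: rotor speed responds slowly (inertia),
# blade pitch responds quickly (small servo).
TAU_MOTOR = 0.15   # s, assumed motor/rotor spin-up time constant
TAU_SERVO = 0.02   # s, assumed pitch-servo time constant
DT = 0.001

def step_response(tau, t_final=1.0):
    """Fraction of a commanded step reached over time for a first-order lag."""
    t = np.arange(0.0, t_final, DT)
    return t, 1.0 - np.exp(-t / tau)

# Fixed pitch: thrust ~ omega^2, so thrust follows the (slow) motor lag squared.
t, w = step_response(TAU_MOTOR)
thrust_fixed = w**2

# Variable pitch: thrust ~ pitch * omega^2 with omega held constant,
# so thrust follows the (fast) servo lag directly.
_, p = step_response(TAU_SERVO)
thrust_variable = p

for frac in (0.63, 0.95):
    tf = t[np.argmax(thrust_fixed >= frac)]
    tv = t[np.argmax(thrust_variable >= frac)]
    print(f"{round(frac*100)}% of commanded thrust: fixed {tf*1e3:.0f} ms, variable {tv*1e3:.0f} ms")

Under these assumed constants, the variable-pitch thrust reaches a commanded step roughly an order of magnitude faster, which is the qualitative effect the paper quantifies.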


AIAA Guidance, Navigation, and Control Conference | 2012

Actuator Constrained Trajectory Generation and Control for Variable-Pitch Quadrotors

Mark Johnson Cutler; Jonathan P. How

Control and trajectory generation algorithms for a quadrotor helicopter with variable-pitch propellers are presented. The control law is not based on near-hover assumptions, allowing for large attitude deviations from hover. The trajectory generation algorithm fits a time-parametrized polynomial through any number of waypoints in ℝ³, with a closed-form solution if the corresponding waypoint arrival times are known a priori. When time is not specified, an algorithm for finding minimum-time paths subject to hardware actuator saturation limitations is presented. Attitude-specific constraints are easily embedded in the polynomial path formulation, allowing for aerobatic maneuvers to be performed using a single controller and trajectory generation algorithm. Experimental results on a variable-pitch quadrotor demonstrate the control design and example trajectories.
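
A minimal sketch of the closed-form interpolation step, assuming known arrival times: with N waypoints and fixed times, a degree-(N-1) polynomial per axis interpolates them exactly by solving a Vandermonde system. The waypoints and times below are made up, and the paper's additional derivative-continuity and actuator constraints are omitted.

import numpy as np

# Hypothetical waypoints in R^3 with known arrival times (s).
times = np.array([0.0, 1.0, 2.5, 4.0])
waypoints = np.array([
    [0.0, 0.0, 1.0],
    [1.0, 0.5, 1.5],
    [2.0, 1.5, 1.0],
    [3.0, 0.0, 1.0],
])

# Closed form: solve V c = x per axis, where V is the Vandermonde matrix.
V = np.vander(times, N=len(times), increasing=True)   # V[i, j] = times[i]**j
coeffs = np.linalg.solve(V, waypoints)                # one coefficient column per axis

def position(t):
    """Evaluate the polynomial trajectory at time t."""
    powers = t ** np.arange(len(times))
    return powers @ coeffs

print(position(1.0))   # reproduces the second waypoint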


International Conference on Robotics and Automation | 2011

Design and flight testing of an autonomous variable-pitch quadrotor

Buddy Michini; Josh Redding; N. Kemal Ure; Mark Johnson Cutler; Jonathan P. How

This video submission presents a design concept of an autonomous variable-pitch quadrotor with constant motor speed. The main aim of this work is to increase the maneuverability of the quadrotor vehicle concept while largely maintaining its mechanical simplicity. This added maneuverability will allow autonomous agile maneuvers such as inverted hover and flips. A custom in-lab-built quadrotor with onboard attitude stabilization is developed and tested in the ACL's (Aerospace Controls Laboratory) RAVEN (Real-time indoor Autonomous Vehicle test ENvironment). Initial flight results show that the quadrotor is capable of waypoint tracking and hovering both upright and inverted.


International Conference on Robotics and Automation | 2015

Decoupled multiagent path planning via incremental sequential convex programming

Yu Fan Chen; Mark Johnson Cutler; Jonathan P. How

This paper presents a multiagent path planning algorithm based on sequential convex programming (SCP) that finds locally optimal trajectories. Previous work using SCP efficiently computes motion plans in convex spaces with no static obstacles. In many scenarios where the spaces are non-convex, previous SCP-based algorithms can fail to find feasible solutions because the convex approximation of collision constraints produces a sequence of infeasible optimization problems. This paper addresses this problem by tightening collision constraints incrementally, thus forming a sequence of more relaxed, feasible intermediate optimization problems. We show that the proposed algorithm increases the probability of finding feasible trajectories by 33% for teams of more than three vehicles in non-convex environments. Further, we show that decoupling the multiagent optimization problem into a number of single-agent optimization problems leads to significant improvement in computational tractability. We develop a decoupled implementation of the proposed algorithm, abbreviated dec-iSCP. We show that dec-iSCP runs 14% faster and finds feasible trajectories with higher probability than a decoupled implementation of previous SCP-based algorithms. The proposed algorithm is real-time implementable and is validated through hardware experiments on a team of quadrotors.
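
To make the incremental-tightening idea concrete, here is a hedged single-agent toy in Python (assuming the cvxpy library): the paper's pairwise inter-agent constraints are replaced by one static circular obstacle, the collision constraint is linearized around the previous iterate, and the required clearance d_k grows linearly across SCP iterations so that early iterations remain feasible. All geometry and iteration counts are hypothetical.

import cvxpy as cp
import numpy as np

# Hypothetical 2-D toy: one agent, T steps, one circular obstacle.
T, K = 20, 5                     # horizon, number of SCP iterations
start, goal = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
p_obs, d_min = np.array([0.0, 0.15]), 0.6

# Initial guess: straight line (too close to the obstacle).
prev = np.linspace(start, goal, T + 1)

for k in range(1, K + 1):
    d_k = d_min * k / K          # incrementally tightened clearance
    x = cp.Variable((T + 1, 2))
    cost = cp.sum_squares(x[2:] - 2 * x[1:-1] + x[:-2])   # smoothness (acceleration)
    cons = [x[0] == start, x[-1] == goal]
    for t in range(T + 1):
        diff = prev[t] - p_obs
        n = diff / max(np.linalg.norm(diff), 1e-6)        # linearization direction
        cons.append(n @ (x[t] - p_obs) >= d_k)            # linearized collision constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    prev = x.value

print("min clearance:", np.min(np.linalg.norm(prev - p_obs, axis=1)))

Because d_1 is small, the first convexified problem is feasible even for a trajectory that starts near the obstacle; by the final iteration the full clearance d_min is enforced.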


International Conference on Robotics and Automation | 2014

Reinforcement learning with multi-fidelity simulators

Mark Johnson Cutler; Thomas J. Walsh; Jonathan P. How

We present a framework for reinforcement learning (RL) in a scenario where multiple simulators are available with decreasing amounts of fidelity to the real-world learning scenario. Our framework is designed to limit the number of samples used in each successively higher-fidelity/cost simulator by allowing the agent to choose to run trajectories at the lowest level that will still provide it with information. The approach transfers state-action Q-values from lower-fidelity models as heuristics for the “Knows What It Knows” family of RL algorithms, which is applicable over a wide range of possible dynamics and reward representations. Theoretical proofs of the framework's sample complexity are given and empirical results are demonstrated on a remote-controlled car with multiple simulators. The approach allows RL algorithms to find near-optimal policies for the real world with fewer expensive real-world samples than previous transfer approaches or learning without simulators.
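
A tabular sketch of the transfer idea, under stated assumptions: the chain MDP, the fidelity gap (a "slip" probability missing from the cheap simulator), and plain Q-learning all stand in for the paper's "Knows What It Knows" learners; only the carrying of Q-values up the fidelity ladder is illustrated.

import numpy as np

N_STATES, N_ACTIONS, GAMMA = 6, 2, 0.95

def make_sim(slip):
    """Chain MDP: action 1 moves right (may slip), action 0 resets. Goal at the end."""
    def step(s, a, rng):
        s2 = min(s + 1, N_STATES - 1) if (a == 1 and rng.random() > slip) else 0
        return s2, (1.0 if s2 == N_STATES - 1 else 0.0)
    return step

# Low fidelity is cheap but slightly wrong (no slip); high fidelity is the "real" system.
sims = [make_sim(slip=0.0), make_sim(slip=0.2)]

def q_learn(step_fn, q_init, episodes, rng, alpha=0.2, eps=0.1):
    q = q_init.copy()
    for _ in range(episodes):
        s = 0
        for _ in range(30):
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r = step_fn(s, a, rng)
            q[s, a] += alpha * (r + GAMMA * np.max(q[s2]) - q[s, a])
            s = s2
    return q

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, N_ACTIONS))
for level, sim in enumerate(sims):
    # Fewer episodes at higher (more expensive) fidelity; the transferred
    # Q-values act as a heuristic so learning there mostly fine-tunes.
    q = q_learn(sim, q, episodes=500 if level == 0 else 50, rng=rng)
print("greedy policy:", np.argmax(q, axis=1))

The cheap simulator does most of the exploration; the expensive level only corrects the small discrepancies, which is the sample-saving mechanism the framework formalizes.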


International Conference on Robotics and Automation | 2013

Scalable reward learning from demonstration

Bernard J. Michini; Mark Johnson Cutler; Jonathan P. How

Reward learning from demonstration is the task of inferring the intents or goals of an agent demonstrating a task. Inverse reinforcement learning methods utilize the Markov decision process (MDP) framework to learn rewards, but typically scale poorly since they rely on the calculation of optimal value functions. Several key modifications are made to a previously developed Bayesian nonparametric inverse reinforcement learning algorithm that avoid calculation of an optimal value function and no longer require discretization of the state or action spaces. Experimental results demonstrate the ability of the resulting algorithm to scale to larger problems and learn in domains with continuous demonstrations.
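
One way to see how the value-function bottleneck can be avoided (a hedged sketch, not the authors' exact estimator): score each demonstrated action with a Boltzmann likelihood whose Q-values come from a handful of short sampled rollouts rather than from value iteration over a discretized space. The toy dynamics, rollout policy, and constants below are hypothetical.

import numpy as np

GAMMA, BETA, HORIZON, N_ROLLOUTS = 0.95, 5.0, 10, 20

def q_estimate(reward_fn, dynamics, s, a, rng):
    """Monte Carlo Q-estimate: take action a, then follow a random rollout policy."""
    total = 0.0
    for _ in range(N_ROLLOUTS):
        ret, state, act = 0.0, s, a
        for h in range(HORIZON):
            state = dynamics(state, act, rng)
            ret += GAMMA**h * reward_fn(state)
            act = rng.integers(2)
        total += ret
    return total / N_ROLLOUTS

def action_likelihood(reward_fn, dynamics, s, a_demo, rng, n_actions=2):
    """Boltzmann likelihood of a demonstrated action under a candidate reward."""
    qs = np.array([q_estimate(reward_fn, dynamics, s, a, rng) for a in range(n_actions)])
    probs = np.exp(BETA * (qs - qs.max()))
    return (probs / probs.sum())[a_demo]

# Hypothetical 1-D world: action 1 moves right, the goal is state 9.
dynamics = lambda s, a, rng: int(np.clip(s + (1 if a == 1 else -1), 0, 9))
reward = lambda s: -abs(s - 9)          # dense candidate reward
rng = np.random.default_rng(1)
print(action_likelihood(reward, dynamics, s=4, a_demo=1, rng=rng))

The cost per likelihood evaluation is a fixed number of rollouts, independent of the size of the state space, which is the property that lets such an algorithm scale.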


AIAA Guidance, Navigation, and Control Conference | 2012

Experimental Results of Concurrent Learning Adaptive Controllers

Girish Chowdhary; Tongbin Wu; Nazim Kemal Ure; Mark Johnson Cutler; Jonathan P. How

Commonly used Proportional-Integral-Derivative based UAV flight controllers are often seen to provide adequate trajectory-tracking performance only after extensive tuning. The gains of these controllers are tuned to particular platforms, which makes transferring controllers from one UAV to another time-intensive. This paper suggests the use of adaptive controllers in speeding up the process of extracting good control performance from new UAVs. In particular, it is shown that a concurrent learning adaptive controller improves the trajectory tracking performance of a quadrotor with a baseline linear controller directly imported from another quadrotor whose inertial characteristics and throttle mapping are very different. Concurrent learning adaptive control uses specifically selected and online recorded data concurrently with instantaneous data and is capable of guaranteeing tracking error and weight error convergence without requiring persistency of excitation. Flight-test results are presented on indoor quadrotor platforms operated in MIT's RAVEN environment. These results indicate the feasibility of rapidly developing high-performance UAV controllers by using adaptive control to augment a controller transferred from another UAV with similar control assignment structure.
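
A minimal scalar sketch of the concurrent-learning update (illustrative constants; the flight controllers in the paper are multi-dimensional and use recorded state-derivative estimates rather than the exact model error available in this toy): the adaptive weight is driven by the instantaneous tracking error plus a sum over stored data points, so adaptation continues even when the tracking error is near zero.

import numpy as np

# Plant: x_dot = w_true*x + u with w_true unknown; reference: xm_dot = -xm + r.
DT, K, GAMMA_CL, W_TRUE = 0.001, 2.0, 5.0, 1.5
x, xm, w_hat, r = 0.5, 0.0, 0.0, 1.0
memory = []                                   # recorded (x_j, model_error_j) pairs

for step in range(20000):
    e = x - xm
    xm_dot = -xm + r
    u = xm_dot - K * e - w_hat * x            # baseline tracking + adaptive cancellation
    x_dot = W_TRUE * x + u

    # Record informative points (simple spacing rule on x).
    if not memory or abs(x - memory[-1][0]) > 0.05:
        memory.append((x, x_dot - u))         # x_dot - u equals w_true*x here ("measured")
        memory = memory[-20:]                 # bounded history

    # Concurrent learning: instantaneous term + stored-data term. The stored
    # term keeps driving w_hat toward w_true without persistent excitation.
    stored = sum(xj * (yj - w_hat * xj) for xj, yj in memory)
    w_hat += DT * GAMMA_CL * (x * e + stored)

    x += DT * x_dot
    xm += DT * xm_dot

print(f"w_hat = {w_hat:.3f} (true {W_TRUE})")

With only the instantaneous term, w_hat stalls once e is small; the stored-data term is what yields the weight convergence guarantee the abstract refers to.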


International Conference on Robotics and Automation | 2013

Rapid transfer of controllers between UAVs using learning-based adaptive control

Girish Chowdhary; Tongbin Wu; Mark Johnson Cutler; Jonathan P. How

Commonly used Proportional-Integral-Derivative based UAV flight controllers are often seen to provide adequate trajectory-tracking performance, but only after extensive tuning. The gains of these controllers are tuned to particular platforms, which makes transferring controllers from one UAV to another time-intensive. This paper formulates the problem of control-transfer from a source system to a transfer system and proposes a solution that leverages well-studied techniques in adaptive control. It is shown that concurrent learning adaptive controllers improve the trajectory tracking performance of a quadrotor with the baseline linear controller directly imported from another quadrotor whose inertial characteristics and throttle mapping are very different. Extensive flight-testing, using indoor quadrotor platforms operated in MIT's RAVEN environment, is used to validate the method.


IEEE Transactions on Robotics | 2015

Real-World Reinforcement Learning via Multifidelity Simulators

Mark Johnson Cutler; Thomas J. Walsh; Jonathan P. How

Reinforcement learning (RL) can be a tool for designing policies and controllers for robotic systems. However, the cost of real-world samples remains prohibitive as many RL algorithms require a large number of samples before learning useful policies. Simulators are one way to decrease the number of required real-world samples, but imperfect models make deciding when and how to trust samples from a simulator difficult. We present a framework for efficient RL in a scenario where multiple simulators of a target task are available, each with varying levels of fidelity. The framework is designed to limit the number of samples used in each successively higher-fidelity/cost simulator by allowing a learning agent to choose to run trajectories at the lowest level simulator that will still provide it with useful information. Theoretical proofs of the framework's sample complexity are given and empirical results are demonstrated on a remote-controlled car with multiple simulators. The approach enables RL algorithms to find near-optimal policies in a physical robot domain with fewer expensive real-world samples than previous transfer approaches or learning without simulators.


International Conference on Robotics and Automation | 2015

Efficient reinforcement learning for robots using informative simulated priors

Mark Johnson Cutler; Jonathan P. How

Autonomous learning through interaction with the physical world is a promising approach to designing controllers and decision-making policies for robots. Unfortunately, learning on robots is often difficult due to the large number of samples needed for many learning algorithms. Simulators are one way to decrease the number of samples needed from the robot by incorporating prior knowledge of the dynamics into the learning algorithm. In this paper we present a novel method for transferring data from a simulator to a robot, using simulated data as a prior for real-world learning. A Bayesian nonparametric model is learned from a potentially black-box simulator, and its mean function is used as a prior for the Probabilistic Inference for Learning Control (PILCO) algorithm. The simulated prior improves the convergence rate and performance of PILCO by directing the policy search in areas of the state space that have not yet been observed by the robot. Simulated and hardware results show the benefits of using the prior knowledge in the learning framework.
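
A hedged sketch of the prior-construction step only, using scikit-learn Gaussian processes as stand-ins for PILCO's internal models (PILCO's policy search is omitted, and the 1-D dynamics below are hypothetical): fit a GP to plentiful simulator data, then model scarce real-world data as residuals around that prior mean.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def real_dynamics(x):       # unknown to the learner
    return x + 0.1 * np.sin(3.0 * x)

def simulator(x):           # cheap, imperfect model of the same system
    return x + 0.08 * np.sin(3.0 * x) + 0.02

# 1) Fit a GP to plentiful simulator data; its mean becomes the prior.
xs = np.linspace(-2, 2, 200).reshape(-1, 1)
prior_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(xs, simulator(xs.ravel()))

# 2) With scarce real data, model only the residual w.r.t. the prior mean.
x_real = np.array([-1.5, -0.3, 0.8]).reshape(-1, 1)
residuals = real_dynamics(x_real.ravel()) - prior_gp.predict(x_real)
resid_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(x_real, residuals)

def predict(x):
    """Posterior mean: simulator prior plus the real-data residual correction."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return prior_gp.predict(x) + resid_gp.predict(x)

print(predict([0.0, 1.0]), real_dynamics(np.array([0.0, 1.0])))

Away from the three real samples, predictions fall back on the simulator prior instead of a zero mean, which is what steers the policy search in unexplored regions of the state space.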

Collaboration


Dive into Mark Johnson Cutler's collaborations.

Top Co-Authors

Jonathan P. How (Massachusetts Institute of Technology)
Nazim Kemal Ure (Istanbul Technical University)
Bernard J. Michini (Massachusetts Institute of Technology)
Yu Fan Chen (Massachusetts Institute of Technology)
N. Kemal Ure (Massachusetts Institute of Technology)
Tongbin Wu (Massachusetts Institute of Technology)
Buddy Michini (Massachusetts Institute of Technology)
Josh Redding (Massachusetts Institute of Technology)