Thomas M. Howard
University of Rochester
Publications
Featured research published by Thomas M. Howard.
The International Journal of Robotics Research | 2007
Thomas M. Howard; Alonzo Kelly
An algorithm is presented for wheeled mobile robot trajectory generation that achieves a high degree of generality and efficiency. The generality derives from numerical linearization and inversion of forward models of propulsion, suspension, and motion for any type of vehicle. Efficiency is achieved by using fast numerical optimization techniques and effective initial guesses for the vehicle control parameters. This approach can accommodate such effects as rough terrain, vehicle dynamics, models of wheel-terrain interaction, and other effects of interest. It can accommodate boundary and internal constraints while optimizing an objective function that might, for example, involve such criteria as obstacle avoidance, cost, risk, time, or energy consumption in any combination. The algorithm is efficient enough to use in real time due to its use of nonlinear programming techniques that involve searching the space of parameterized vehicle controls. Applications of the presented methods are demonstrated for planetary rovers.
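The optimization loop at the heart of this method, forward-simulating a parameterized control, comparing the terminal state against the boundary constraints, and correcting the parameters numerically, can be sketched compactly. The following is a minimal illustration, not the paper's implementation: the unicycle model, the linear curvature parameterization, and all names and values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, n=100):
    """Forward model: integrate a unicycle under a linear curvature profile.
    params = (k0, k1, s_f): curvature coefficients and total path length."""
    k0, k1, s_f = params
    x = y = theta = 0.0
    ds = s_f / n
    for i in range(n):
        s = i * ds
        theta += (k0 + k1 * s) * ds
        x += np.cos(theta) * ds
        y += np.sin(theta) * ds
    return np.array([x, y, theta])

def generate_trajectory(goal, guess):
    """Numerically invert the forward model so the endpoint matches `goal`,
    searching the parameterized control space with nonlinear least squares."""
    return least_squares(lambda p: simulate(p) - goal, guess).x

# Reach boundary state (x, y, heading) = (4.0, 1.0, 0.0) from the origin.
params = generate_trajectory(np.array([4.0, 1.0, 0.0]), guess=[0.1, 0.0, 4.5])
```

An effective initial guess (here a gentle arc of roughly the right length) keeps the numerical inversion fast and reliable, which is what makes the approach usable in real time.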
Intelligent Robots and Systems | 2008
Dave Ferguson; Thomas M. Howard; Maxim Likhachev
We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically-feasible actions with two higher-level planners for generating long range plans in both on-road and unstructured areas of the environment. In this Part I of a two-part paper, we describe the underlying trajectory generator and the on-road planning component of this system. We provide examples and results from "Boss", an autonomous SUV that has driven itself over 3000 kilometers and competed in, and won, the Urban Challenge.
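A hedged sketch of the on-road layer's flavor: sample a family of lateral-offset maneuvers across the lane, prune those that pass too close to obstacles, and keep the cheapest. The lookahead distance, cost weights, and `plan_on_road` itself are illustrative assumptions, not the Boss code.

```python
import numpy as np

def plan_on_road(offsets, obstacles, w_offset=1.0, w_clear=5.0):
    """Score a family of lateral-offset maneuvers against lane-keeping and
    clearance terms; return the best feasible offset (None if all collide)."""
    best, best_cost = None, np.inf
    for offset in offsets:                    # candidate endpoints across the lane
        endpoint = np.array([10.0, offset])   # fixed lookahead distance (illustrative)
        clearance = min(np.linalg.norm(endpoint - ob) for ob in obstacles)
        if clearance < 0.5:                   # prune maneuvers that pass too close
            continue
        cost = w_offset * abs(offset) + w_clear / clearance
        if cost < best_cost:
            best, best_cost = offset, cost
    return best

# Swerve around an obstacle sitting slightly right of the lane center.
best = plan_on_road([-1.0, -0.5, 0.0, 0.5, 1.0], [np.array([10.0, 0.3])])
```

In the real system each candidate is a dynamically feasible trajectory from the model-predictive generator rather than a point endpoint, but the select-the-cheapest-feasible-action structure is the same.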
International Conference on Robotics and Automation | 2012
Nicolas Hudson; Thomas M. Howard; Jeremy Ma; Abhinandan Jain; Max Bajracharya; Steven Myint; Calvin Kuo; Larry H. Matthies; Paul G. Backes; Paul Hebert; Thomas J. Fuchs; Joel W. Burdick
This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The developed autonomy system uses robot, object, and environment models to identify and localize objects, as well as plan and execute required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and in independent DARPA testing achieved the most successfully completed tasks with the fastest average task execution of any evaluated team.
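One way to read the architecture is as a localize, touch-to-refine, plan, execute loop. The skeleton below is an assumed paraphrase with injected placeholder callables, not the ARM system's API.

```python
def manipulate(localize, refine_by_touch, plan_task, execute, tol=0.005):
    """Pipeline skeleton: estimate the object pose, deliberately interact
    until the estimate is tight enough for the task, then plan and execute."""
    pose, sigma = localize()
    while sigma > tol:                 # deliberate interaction sharpens the state
        pose, sigma = refine_by_touch(pose, sigma)
    return execute(plan_task(pose))

# Toy stand-ins: each touch halves the pose uncertainty.
result = manipulate(localize=lambda: ((0.0, 0.0, 0.0), 0.02),
                    refine_by_touch=lambda p, s: (p, s / 2),
                    plan_task=lambda p: ["approach", "insert_key"],
                    execute=lambda plan: plan)
```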
International Conference on Robotics and Automation | 2014
Thomas M. Howard; Stefanie Tellex; Nicholas Roy
Natural language interfaces for robot control aspire to find the best sequence of actions that reflects the behavior intended by the instruction. This is difficult because of the diversity of language, variety of environments, and heterogeneity of tasks. Previous work has demonstrated that probabilistic graphical models constructed from the parse structure of natural language can be used to identify motions that most closely resemble verb phrases. Such approaches, however, quickly succumb to computational bottlenecks imposed by construction and search of the space of possible actions. Planning constraints, which define goal regions and separate the admissible and inadmissible states in an environment model, provide an interesting alternative for representing the meaning of verb phrases. In this paper we present a new model, called the Distributed Correspondence Graph (DCG), to infer the most likely set of planning constraints from natural language instructions. A trajectory planner then uses these planning constraints to find a sequence of actions that resembles the instruction. Separating the problem of identifying the action encoded by the language into individual steps of planning constraint inference and motion planning enables us to avoid the computational costs associated with the generation and evaluation of many trajectories. We present results from comparative experiments that demonstrate improvements in efficiency in natural language understanding without loss of accuracy.
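The computational benefit comes from factoring inference over individual phrase-to-constraint correspondences rather than scoring whole trajectories. A toy sketch of that factored argmax follows; the keyword-overlap `score` is a stand-in for the trained log-linear factors, and all strings are illustrative.

```python
def infer_constraints(phrases, candidates, score):
    """Per-phrase maximum-likelihood constraint selection: the factored
    (DCG-style) model lets each phrase be resolved independently instead of
    searching the joint space of whole trajectories."""
    return {ph: max(candidates, key=lambda c: score(ph, c)) for ph in phrases}

# Toy stand-in for the learned factor: keyword overlap.
score = lambda ph, c: sum(word in c for word in ph.split())
constraints = infer_constraints(
    ["avoid the barrel", "go near the truck"],
    ["avoid region barrel", "goal near truck", "goal near barrel"],
    score)
# The inferred goal/avoid regions can then seed a trajectory planner.
```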
Field and Service Robotics | 2010
Thomas M. Howard; Colin J. Green; Alonzo Kelly
As mobile robots venture into more difficult environments, more complex state-space paths are required to move safely and efficiently. The difference between mission success and failure can be determined by a mobile robot's capacity to effectively navigate such paths in the presence of disturbances. This paper describes a technique for mobile robot model predictive control that utilizes the structure of a regional motion plan to effectively search the local continuum for an improved solution. The contribution, a receding horizon model-predictive control (RHMPC) technique, specifically addresses the problem of path following and obstacle avoidance through geometric singularities and discontinuities such as cusps, turn-in-place, and multi-point turn maneuvers in environments where terrain shape and vehicle mobility effects are non-negligible. The technique is formulated as an optimal controller that utilizes a model-predictive trajectory generator to relax parameterized control inputs initialized from a regional motion planner to navigate safely through the environment. Experimental results are presented for a six-wheeled skid-steered field robot in natural terrain.
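A schematic receding-horizon loop in this spirit: at each cycle, re-optimize a short parameterized control sequence seeded from the regional plan, then execute only the first action. The unicycle `rollout` model, the waypoint cost, and the Nelder-Mead relaxation are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def rollout(state, u, dt=0.5):
    """Predictive motion model: unicycle under piecewise-constant (v, w)."""
    x, y, th = state
    for v, w in u.reshape(-1, 2):
        th += w * dt
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
    return np.array([x, y, th])

def rhmpc_step(state, seed_controls, waypoint):
    """Relax the regional plan's parameterized controls toward the next
    waypoint, then return only the first action (receding horizon)."""
    cost = lambda u: np.linalg.norm(rollout(state, u)[:2] - waypoint)
    sol = minimize(cost, seed_controls.ravel(), method="Nelder-Mead")
    return sol.x.reshape(-1, 2)[0]

# Seed with the regional plan's first two actions; track waypoint (2, 1).
action = rhmpc_step(np.zeros(3), np.array([[1.0, 0.0], [1.0, 0.0]]),
                    np.array([2.0, 1.0]))
```

Seeding from the regional plan is what lets the local optimizer handle cusps and turn-in-place segments: it refines a path that already has the right structure instead of discovering that structure itself.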
IEEE Robotics & Automation Magazine | 2014
Thomas M. Howard; Mihail Pivtoraiko; Ross A. Knepper; Alonzo Kelly
A necessary attribute of a mobile robot planning algorithm is the ability to accurately predict the consequences of robot actions to make informed decisions about where and how to drive. It is also important that such methods are efficient, as onboard computational resources are typically limited and fast planning rates are often required. In this article, we present several practical mobile robot motion planning algorithms for local and global search, developed with a common underlying trajectory generation framework for use in model-predictive control. These techniques all center on the idea of generating informed, feasible graphs at scales and resolutions that respect computational and temporal constraints of the application. Connectivity in these graphs is provided by a trajectory generator that searches in a parameterized space of robot inputs subject to an arbitrary predictive motion model. Local search graphs connect the currently observed state to states at or near the planning or perception horizon. Global search graphs repeatedly expand a precomputed trajectory library in a uniformly distributed state lattice to form a recombinant search space that respects differential constraints. In this article, we discuss the trajectory generation algorithm, methods for online or offline calibration of predictive motion models, sampling strategies for local search graphs that exploit global guidance and environmental information for real-time obstacle avoidance and navigation, and methods for efficient design of global search graphs with attention to optimality, feasibility, and computational complexity of heuristic search. The model-invariant nature of our approach to local and global motion planning has enabled rapid and successful application of these techniques to a variety of platforms. Throughout the article, we also review experiments performed on planetary rovers, field robots, mobile manipulators, and autonomous automobiles and discuss future directions of this work.
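The recombinant global search can be pictured as Dijkstra over a lattice whose edges come from a small precomputed motion library. The 4-heading toy library below is an assumption for illustration; real libraries are generated by the trajectory generator under the vehicle's differential constraints.

```python
import heapq

MOTIONS = {  # heading -> [(dx, dy, new_heading, cost)], toy motion library
    0: [(1, 0, 0, 1.0), (1, 1, 1, 1.4), (1, -1, 3, 1.4)],
    1: [(0, 1, 1, 1.0), (-1, 1, 2, 1.4), (1, 1, 0, 1.4)],
    2: [(-1, 0, 2, 1.0), (-1, -1, 3, 1.4), (-1, 1, 1, 1.4)],
    3: [(0, -1, 3, 1.0), (1, -1, 0, 1.4), (-1, -1, 2, 1.4)],
}

def lattice_search(start, goal):
    """Dijkstra over the recombinant lattice; nodes are (x, y, heading).
    Edges from the library recombine at lattice nodes, so the graph stays
    compact while every path remains feasible by construction."""
    frontier, seen = [(0.0, start)], set()
    while frontier:
        g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if node in seen:
            continue
        seen.add(node)
        x, y, h = node
        for dx, dy, nh, c in MOTIONS[h]:
            heapq.heappush(frontier, (g + c, (x + dx, y + dy, nh)))
    return None

cost = lattice_search((0, 0, 0), (3, 2, 1))  # -> 4.4 with this toy library
```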
International Conference on Robotics and Automation | 2015
Sachithra Hemachandra; Felix Duvallet; Thomas M. Howard; Nicholas Roy; Anthony Stentz; Matthew R. Walter
Natural language offers an intuitive and flexible means for humans to communicate with the robots that we will increasingly work alongside in our homes and workplaces. Recent advancements have given rise to robots that are able to interpret natural language manipulation and navigation commands, but these methods require a prior map of the robot's environment. In this paper, we propose a novel learning framework that enables robots to successfully follow natural language route directions without any previous knowledge of the environment. The algorithm utilizes spatial and semantic information that the human conveys through the command to learn a distribution over the metric and semantic properties of spatially extended environments. Our method uses this distribution in place of the latent world model and interprets the natural language instruction as a distribution over the intended behavior. A novel belief space planner reasons directly over the map and behavior distributions to solve for a policy using imitation learning. We evaluate our framework on a voice-commandable wheelchair. The results demonstrate that by learning and performing inference over a latent environment model, the algorithm is able to successfully follow natural language route directions within novel, extended environments.
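In sketch form, the latent world model can be maintained as a weighted set of map hypotheses that the command itself reweights, treating language as an observation. The `likelihood` stand-in below is an assumption in place of the learned model.

```python
import numpy as np

def reweight_maps(maps, weights, likelihood, command):
    """Bayesian update of a particle distribution over latent environment
    models, using the command itself as an observation of the world."""
    w = np.array([wi * likelihood(command, m) for m, wi in zip(maps, weights)])
    return w / w.sum()

# Toy stand-in: a command mentioning "kitchen" favors maps containing one.
likelihood = lambda cmd, m: 0.9 if ("kitchen" in cmd) == ("kitchen" in m) else 0.1
posterior = reweight_maps(["hall kitchen office", "hall lab office"],
                          [0.5, 0.5], likelihood,
                          "take me past the kitchen to the office")
```

The belief space planner then reasons over this weighted set rather than over any single committed map.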
International Conference on Robotics and Automation | 2013
Paul Hebert; Thomas M. Howard; Nicolas Hudson; Jeremy Ma; Joel W. Burdick
This paper introduces a tactile or contact method whereby an autonomous robot equipped with suitable sensors can choose the next sensing action involving touch in order to accurately localize an object in its environment. The method uses an information gain metric based on the uncertainty of the object's pose to determine the next best touching action. Intuitively, the optimal action is the one that is the most informative. The action is then carried out and the object's pose estimate is updated using an estimator. The method is further extended to choose the most informative action to simultaneously localize the object and estimate its model parameters or model class. Results are presented both in simulation and in experiment on the DARPA Autonomous Robotic Manipulation Software (ARM-S) robot.
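The action-selection rule reduces to maximizing the expected drop in pose-belief entropy. A minimal Gaussian-belief sketch follows; the `predicted_cov` posterior model is an illustrative assumption, not the paper's measurement model.

```python
import numpy as np

def entropy(cov):
    """Differential entropy of a Gaussian pose belief."""
    return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))

def best_touch(cov, actions, predicted_cov):
    """Pick the touch with the largest expected information gain, i.e. the
    largest expected reduction in pose-belief entropy after the contact."""
    return max(actions, key=lambda a: entropy(cov) - entropy(predicted_cov(a, cov)))

# Toy posterior model: touching face i pins its variance to the sensor floor.
def predicted_cov(action, cov):
    post = cov.copy()
    post[action, action] = min(post[action, action], 0.004)
    return post

# The most uncertain axis (variance 0.04) is the most informative to touch.
choice = best_touch(np.diag([0.04, 0.01, 0.01]), [0, 1, 2], predicted_cov)  # -> 0
```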
International Symposium on Experimental Robotics | 2016
Felix Duvallet; Matthew R. Walter; Thomas M. Howard; Sachithra Hemachandra; Jean Oh; Seth J. Teller; Nicholas Roy; Anthony Stentz
Natural language provides a flexible, intuitive way for people to command robots, which is becoming increasingly important as robots transition to working alongside people in our homes and workplaces. To follow instructions in unknown environments, robots will be expected to reason about parts of the environment that were described in the instruction but that the robot has no direct knowledge of. However, most existing approaches to natural language understanding require that the robot's environment be known a priori. This paper proposes a probabilistic framework that enables robots to follow commands given in natural language, without any prior knowledge of the environment. The novelty lies in exploiting environment information implicit in the instruction, thereby treating language as a type of sensor that is used to formulate a prior distribution over the unknown parts of the environment. The algorithm then uses this learned distribution to infer a sequence of actions that are most consistent with the command, updating its belief as it gathers more metric information. We evaluate our approach through simulation as well as experiments on two mobile robots; our results demonstrate the algorithm's ability to follow navigation commands with performance comparable to that of a fully-known environment.
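One way to operationalize "language as a sensor" is to sample environment completions from the command-conditioned prior and choose the action that is consistent with the command in expectation. Everything named below is a hypothetical stand-in for the paper's learned components.

```python
import random

def follow_directions(command, sample_env, actions, consistency, n=100):
    """Marginalize over environment hypotheses drawn from the language-
    conditioned prior; return the action most consistent with the command
    in expectation."""
    envs = [sample_env(command) for _ in range(n)]
    return max(actions,
               key=lambda a: sum(consistency(a, command, e) for e in envs) / n)

# Toy prior: "the door on the right" makes right-door worlds more likely.
sample_env = lambda cmd: "door_right" if random.random() < 0.8 else "door_left"
consistency = lambda a, cmd, env: 1.0 if a in env else 0.0
act = follow_directions("go through the door on the right",
                        sample_env, ["right", "left"], consistency)
```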
International Conference on Robotics and Automation | 2012
Paul Hebert; Nicolas Hudson; Jeremy Ma; Thomas M. Howard; Thomas J. Fuchs; Max Bajracharya; Joel W. Burdick
This paper develops an estimation framework for sensor-guided manipulation of a rigid object via a robot arm. Using an unscented Kalman filter (UKF), the method combines dense range information (from stereo cameras and 3D ranging sensors) as well as visual appearance features and silhouettes of the object and manipulator to track both an object-fixed frame location and a manipulator tool or palm frame location. If available, tactile data is also incorporated. By using these different imaging sensors and different imaging properties, we can leverage the advantages of each sensor and each feature type to realize more accurate and robust object and reference frame tracking. The method is demonstrated using the DARPA ARM-S system, consisting of a Barrett WAM manipulator.
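For flavor, the fusion step can be prototyped with an off-the-shelf UKF such as filterpy's (an assumption; the paper's estimator is its own implementation), here with a static-object process model and a toy direct pose observation standing in for the stacked range, appearance, and silhouette features.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

fx = lambda x, dt: x            # static object: identity process model
hx = lambda x: x                # toy sensor that observes the pose directly

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=3, dt=0.1, hx=hx, fx=fx, points=points)
ukf.x = np.zeros(3)             # initial (x, y, theta) object pose estimate
ukf.P *= 0.5                    # initial pose uncertainty
ukf.R *= 0.01                   # measurement noise for the fused observation

for z in [np.array([0.10, -0.05, 0.02]), np.array([0.12, -0.04, 0.01])]:
    ukf.predict()
    ukf.update(z)               # fuse each measurement into the pose belief
```

In the paper's setting each sensor modality contributes its own measurement model, so the filter can weight range, appearance, silhouette, and tactile evidence according to their respective noise characteristics.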