Publication


Featured research published by James Andrew Bagnell.


European Conference on Computer Vision | 2012

Activity forecasting

Kris M. Kitani; Brian D. Ziebart; James Andrew Bagnell; Martial Hebert

We address the task of inferring the future actions of people from noisy visual input. We denote this task activity forecasting. To achieve accurate activity forecasting, our approach models the effect of the physical environment on the choice of human actions. This is accomplished by the use of state-of-the-art semantic scene understanding combined with ideas from optimal control theory. Our unified model also integrates several other key elements of activity analysis, namely, destination forecasting, sequence smoothing and transfer learning. As proof-of-concept, we focus on the domain of trajectory-based activity analysis from visual input. Experimental results demonstrate that our model accurately predicts distributions over future actions of individuals. We show how the same techniques can improve the results of tracking algorithms by leveraging information about likely goals and trajectories.
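
To make the optimal-control connection concrete, the following is a minimal sketch of maximum-entropy-style soft value iteration on a toy grid: per-cell rewards stand in for the semantic scene features, and the resulting stochastic policy gives a distribution over future motions. The grid size, reward values, goal location, and wrap-around boundaries are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a toy toroidal grid and a hand-set reward map;
# not the paper's code.
import numpy as np

def soft_value_iteration(reward, goal, n_iters=200):
    """reward: (H, W) per-cell reward (a stand-in for semantic scene features).
    Returns a soft value function V and a stochastic policy over 4 moves."""
    H, W = reward.shape
    V = np.full((H, W), -1e6)                   # soft values in log space
    V[goal] = 0.0                               # absorbing goal state
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_iters):
        Q = np.empty((4, H, W))
        for a, (dy, dx) in enumerate(moves):
            succ_V = np.roll(V, shift=(-dy, -dx), axis=(0, 1))  # wraps at edges (toy)
            Q[a] = reward + succ_V
        V = np.logaddexp.reduce(Q, axis=0)      # "soft" max over actions
        V[goal] = 0.0
    policy = np.exp(Q - V)                      # P(action | cell) proportional to exp(Q - V)
    return V, policy

# toy usage: small step penalty everywhere, goal in the far corner
V, pi = soft_value_iteration(np.full((20, 20), -0.1), goal=(19, 19))
```

Rolling trajectories forward under the resulting stochastic policy then yields the kind of distribution over future paths that the forecasting task calls for.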


International Conference on Robotics and Automation | 2001

Autonomous helicopter control using reinforcement learning policy search methods

James Andrew Bagnell; Jeff G. Schneider

Many control problems in the robotics field can be cast as partially observed Markovian decision problems (POMDPs), an optimal control formalism. Finding optimal solutions to such problems in general, however, is known to be intractable. It has often been observed that in practice, simple structured controllers suffice for good sub-optimal control, and recent research in the artificial intelligence community has focused on policy search methods as techniques for finding sub-optimal controllers when such structured controllers do exist. Traditional model-based reinforcement learning algorithms make a certainty-equivalence assumption on their learned models and calculate optimal policies for a maximum-likelihood Markovian model. We consider algorithms that evaluate and synthesize controllers under distributions of Markovian models. Previous work has demonstrated that algorithms that maximize mean reward with respect to model uncertainty lead to safer and more robust controllers. We briefly consider other performance criteria that emphasize robustness and exploration in the search for controllers, and note the relation to experiment design and active learning. To validate the power of the approach on a robotic application, we demonstrate the presented learning control algorithm by flying an autonomous helicopter. We show that the learned controller is robust and delivers good performance in this real-world domain.
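
As an illustration of evaluating controllers under a distribution of models rather than a single maximum-likelihood model, here is a hedged sketch on a toy one-dimensional linear system: returns are averaged over sampled dynamics, and a simple random search over a scalar gain stands in for policy search. The system, cost, and policy class are assumptions for illustration, not the helicopter controller from the paper.

```python
# Minimal sketch, assuming a toy 1-D linear system with an uncertain dynamics
# coefficient; not the paper's helicopter model, policy class, or search method.
import numpy as np

rng = np.random.default_rng(0)

def rollout(gain, a, horizon=50):
    """Return of the linear policy u = -gain * x on x' = a*x + u + noise."""
    x, ret = 1.0, 0.0
    for _ in range(horizon):
        u = -gain * x
        x = a * x + u + 0.01 * rng.normal()
        ret -= x * x + 0.1 * u * u              # quadratic cost as negative reward
    return ret

def mean_return(gain, model_samples, rollouts_per_model=5):
    """Score a policy by its average return over *sampled* models."""
    return np.mean([rollout(gain, a)
                    for a in model_samples
                    for _ in range(rollouts_per_model)])

# distribution over the unknown dynamics coefficient (an assumed "posterior")
models = rng.normal(loc=1.1, scale=0.1, size=20)

# crude policy search: pick the gain with the best mean return under model uncertainty
candidates = np.linspace(0.0, 2.0, 41)
best_gain = max(candidates, key=lambda g: mean_return(g, models))
print("selected gain:", best_gain)
```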


European Conference on Computer Vision | 2014

Pose Machines: Articulated Pose Estimation via Inference Machines

Varun Ramakrishna; Daniel Munoz; Martial Hebert; James Andrew Bagnell; Yaser Sheikh

State-of-the-art approaches for articulated human pose estimation are rooted in parts-based graphical models. These models are often restricted to tree-structured representations and simple parametric potentials in order to enable tractable inference. However, these simple dependencies fail to capture all the interactions between body parts. While models with more complex interactions can be defined, learning the parameters of these models remains challenging with intractable or approximate inference. In this paper, instead of performing inference on a learned graphical model, we build upon the inference machine framework and present a method for articulated human pose estimation. Our approach incorporates rich spatial interactions among multiple parts and information across parts of different scales. Additionally, the modular framework of our approach enables both ease of implementation without specialized optimization solvers, and efficient inference. We analyze our approach on two challenging datasets with large pose variation and outperform the state-of-the-art on these benchmarks.
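
The inference-machine construction can be sketched as a stack of ordinary classifiers in which each later stage re-predicts labels from the appearance features plus the previous stage's beliefs, used as contextual features. The toy data, logistic-regression stages, and three-stage depth below are assumptions for illustration, not the paper's part detectors or multi-scale features.

```python
# Illustrative sketch of stacked, context-fed predictors (assumed toy data and
# logistic-regression stages; not the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                    # per-location appearance features (toy)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy "part present here" labels

stages = []
beliefs = np.zeros((len(X), 2))                   # stage 0 starts with empty context
for t in range(3):
    Xt = np.hstack([X, beliefs])                  # appearance + previous-stage beliefs
    clf = LogisticRegression(max_iter=1000).fit(Xt, y)
    beliefs = clf.predict_proba(Xt)               # refined confidences fed to the next stage
    stages.append(clf)

print("final-stage training accuracy:", (beliefs.argmax(axis=1) == y).mean())
```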


International Conference on Robotics and Automation | 2007

Vegetation Detection for Driving in Complex Environments

David M. Bradley; Ranjith Unnikrishnan; James Andrew Bagnell

A key challenge for autonomous navigation in cluttered outdoor environments is the reliable discrimination between obstacles that must be avoided at all costs, and lesser obstacles which the robot can drive over if necessary. Chlorophyll-rich vegetation in particular is often not an obstacle to a capable off-road vehicle, and it has long been recognized in the satellite imaging community that a simple comparison of the red and near-infrared (NIR) reflectance of a material provides a reliable technique for measuring chlorophyll content in natural scenes. This paper evaluates the effectiveness of using this chlorophyll-detection technique to improve autonomous navigation in natural, off-road environments. We demonstrate through extensive experiments that this feature has properties complementary to the color and shape descriptors traditionally used for point cloud analysis, and show significant improvement in classification performance for tasks relevant to outdoor navigation. Results are shown from field testing onboard a robot operating in off-road terrain.
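
The red/near-infrared comparison mentioned above is commonly expressed as the Normalized Difference Vegetation Index (NDVI); a minimal sketch with made-up reflectance values:

```python
# Minimal sketch of the NDVI cue, assuming per-pixel (or per-point) red and
# near-infrared reflectance values are available from the sensor.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest chlorophyll-rich vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# made-up reflectances: healthy vegetation vs. bare soil
print(ndvi([0.50, 0.30], [0.08, 0.28]))   # ~0.72 (vegetation) vs. ~0.03 (non-vegetation)
```

High values indicate chlorophyll-rich material, which is the cue the paper reports to be complementary to the color and shape descriptors traditionally used for point cloud analysis.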


IEEE-RAS International Conference on Humanoid Robots | 2007

Imitation learning for locomotion and manipulation

Nathan D. Ratliff; James Andrew Bagnell; Siddhartha S. Srinivasa

Decision making in robotics often involves computing an optimal action for a given state, where the space of actions under consideration can potentially be large and state dependent. Many of these decision making problems can be naturally formalized in the multiclass classification framework, where actions are regarded as labels for states. One powerful approach to multiclass classification relies on learning a function that scores each action; action selection is done by returning the action with maximum score. In this work, we focus on two imitation learning problems in particular that arise in robotics. The first problem is footstep prediction for quadruped locomotion, in which the system predicts next footstep locations greedily given the current four-foot configuration of the robot over a terrain height map. The second problem is grasp prediction, in which the system must predict good grasps of complex free-form objects given an approach direction for a robotic hand. We present experimental results of applying a recently developed functional gradient technique for optimizing a structured margin formulation of the corresponding large non-linear multiclass classification problems.
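
The "score each action, return the argmax" formulation can be sketched with a linear multiclass scorer and a perceptron-style margin update. The paper instead optimizes a structured margin with a functional-gradient (boosting-style) technique, so the toy demonstrations and linear learner below are illustrative assumptions only.

```python
# Hedged sketch: linear action scorer trained from demonstrations with a
# perceptron-style update; not the paper's functional-gradient learner.
import numpy as np

rng = np.random.default_rng(0)
n_actions, d = 5, 8
true_w = rng.normal(size=(n_actions, d))        # used only to make toy demos consistent
demos = []
for _ in range(200):
    x = rng.normal(size=d)
    demos.append((x, int(np.argmax(true_w @ x))))   # "expert" picks the truly best action

w = np.zeros((n_actions, d))                    # learned scorer: one weight vector per action

def select(features):
    return int(np.argmax(w @ features))         # pick the highest-scoring action

for _ in range(10):                             # a few passes over the demonstrations
    for x, a_star in demos:
        a_hat = select(x)
        if a_hat != a_star:                     # expert's action is not ranked on top
            w[a_star] += 0.1 * x                # raise the expert action's score
            w[a_hat] -= 0.1 * x                 # lower the current winner's score

print("agreement with expert:", np.mean([select(x) == a for x, a in demos]))
```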


IEEE Robotics & Automation Magazine | 2010

Learning for Autonomous Navigation

James Andrew Bagnell; David M. Bradley; David Silver; Boris Sofman; Anthony Stentz

Autonomous navigation by a mobile robot through natural, unstructured terrain is one of the premier challenges in field robotics. Tremendous advances in autonomous navigation have been made recently in field robotics, and machine learning has played an increasingly important role in these advances. The Defense Advanced Research Projects Agency (DARPA) UGCV-Perceptor Integration (UPI) program was conceived to take a fresh approach to all aspects of autonomous outdoor mobile robot design, from vehicle design to the design of perception and control systems, with the goal of achieving a leap in performance that enables the next generation of robotic applications in commercial, industrial, and military domains. The essential problem addressed by the UPI program is to enable safe autonomous traverse of a robot from Point A to Point B in the least time possible, given a series of waypoints in complex, unstructured terrain separated by 0.2-2 km. To accomplish this goal, machine learning techniques were used heavily to provide robust and adaptive performance while simultaneously reducing the required development and deployment time. This article describes Crusher, the autonomous system developed for the UPI program, and the learning approaches that aided in its successful performance.


Robotics: Science and Systems | 2008

High Performance Outdoor Navigation from Overhead Data using Imitation Learning

David Silver; James Andrew Bagnell; Anthony Stentz

High performance, long-distance autonomous navigation is a central problem for field robotics. Efficient navigation relies not only upon intelligent onboard systems for perception and planning, but also on the effective use of prior maps and knowledge. While the availability and quality of low cost, high resolution satellite and aerial terrain data continue to rapidly improve, automated interpretation appropriate for robot planning and navigation remains difficult. Recently, a class of machine learning techniques has been developed that relies upon expert human demonstration to develop a function mapping overhead data to traversal cost. These algorithms choose the cost function so that planner behavior mimics an expert's demonstration as closely as possible. In this work, we extend these methods to automate interpretation of overhead data. We address key challenges, including interpolation-based planners, non-linear approximation techniques, and imperfect expert demonstration, that are necessary to apply these methods for learning to search for effective terrain interpretations. We validate our approach on a large scale outdoor robot during over 300 kilometers of autonomous traversal through complex natural environments.
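
The core "choose the cost function so the planner mimics the expert" step can be sketched as a maximum-margin-planning-style subgradient update over overhead features. The cells, features, two-route planner, and learning rate below are illustrative assumptions, not the fielded system.

```python
# Hedged sketch of cost-function learning from expert routes; the toy map,
# the two-route planner, and all names here are assumptions for illustration.
import numpy as np

def feature_counts(path, features):
    """Sum of overhead feature vectors over the cells on a path."""
    return sum(features[c] for c in path)

def update(w, features, expert_path, plan, lr=0.1):
    """One subgradient step: make the expert's route the planner's cheapest route."""
    costmap = {c: max(float(w @ f), 1e-3) for c, f in features.items()}  # keep costs positive
    planned = plan(costmap)
    # raise the cost of features the planner over-uses, lower those the expert uses
    return w + lr * (feature_counts(planned, features) - feature_counts(expert_path, features))

# toy overhead map: three cells with 2-D features, two candidate routes
features = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
expert = [1, 2]                                                  # the route the expert drove
plan = lambda cost: min(([0, 2], [1, 2]), key=lambda p: sum(cost[c] for c in p))

w = np.array([0.5, 0.5])
for _ in range(20):
    w = update(w, features, expert, plan)
print("learned weights:", w)
```

After a few such updates, routes whose features the planner over-uses relative to the expert become more expensive, pushing the planner toward the demonstrated behavior.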


International Conference on Robotics and Automation | 2008

Adaptive workspace biasing for sampling-based planners

Matthew Zucker; James J. Kuffner; James Andrew Bagnell

The widespread success of sampling-based planning algorithms stems from their ability to rapidly discover the connectivity of a configuration space. Past research has found that non-uniform sampling in the configuration space can significantly outperform uniform sampling; one important strategy is to bias the sampling distribution based on features present in the underlying workspace. In this paper, we unite several previous approaches to workspace biasing into a general framework for automatically discovering useful sampling distributions. We present a novel algorithm, based on the REINFORCE family of stochastic policy gradient algorithms, which automatically discovers a locally-optimal weighting of workspace features to produce a distribution which performs well for a given class of sampling-based motion planning queries. We present as well a novel set of workspace features that our adaptive algorithm can leverage for improved configuration space sampling. Experimental results show our algorithm to be effective across a variety of robotic platforms and high-dimensional configuration spaces.
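
A hedged sketch of the adaptation mechanism: workspace cells receive sampling probabilities from a softmax over weighted features, and a REINFORCE-style policy-gradient update shifts the feature weights toward samples that earned reward. The toy features and reward signal stand in for the planner-in-the-loop feedback used in practice.

```python
# Hedged sketch: softmax sampling over workspace cells, adapted by REINFORCE.
# The features and the reward signal are toys standing in for planner feedback.
import numpy as np

rng = np.random.default_rng(0)
n_cells, d = 50, 4
phi = rng.normal(size=(n_cells, d))              # workspace features per cell (assumed)
theta = np.zeros(d)                              # feature weights to be learned

def sampling_dist(theta):
    logits = phi @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

def toy_reward(cell):
    # stand-in for "this sample helped the planner" (e.g., it lies in a narrow passage)
    return 1.0 if phi[cell, 0] > 1.0 else 0.0

for _ in range(500):                             # REINFORCE / stochastic policy gradient
    p = sampling_dist(theta)
    cell = rng.choice(n_cells, p=p)
    grad_log_p = phi[cell] - phi.T @ p           # gradient of log-probability of the sample
    theta += 0.05 * toy_reward(cell) * grad_log_p

print("probability mass on 'useful' cells:", sampling_dist(theta)[phi[:, 0] > 1.0].sum())
```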


Intelligent Robots and Systems | 2006

Experimental Analysis of Overhead Data Processing To Support Long Range Navigation

David Silver; Boris Sofman; Nicolas Vandapel; James Andrew Bagnell; Anthony Stentz

Long range navigation by unmanned ground vehicles continues to challenge the robotics community. Efficient navigation requires not only intelligent on-board perception and planning systems, but also the effective use of prior knowledge of the vehicle's environment. This paper describes a system for supporting unmanned ground vehicle navigation through the use of heterogeneous overhead data. Semantic information is obtained through supervised classification, and vehicle mobility is predicted from available geometric data. This approach is demonstrated and validated through over 50 kilometers of autonomous traversal through complex natural environments.


European Conference on Computer Vision | 2012

Co-inference for multi-modal scene analysis

Daniel Munoz; James Andrew Bagnell; Martial Hebert

We address the problem of understanding scenes from multiple sources of sensor data (e.g., a camera and a laser scanner) in the case where there is no one-to-one correspondence across modalities (e.g., pixels and 3-D points). This is an important scenario that frequently arises in practice not only when two different types of sensors are used, but also when the sensors are not co-located and have different sampling rates. Previous work has addressed this problem by restricting interpretation to a single representation in one of the domains, with augmented features that attempt to encode the information from the other modalities. Instead, we propose to analyze all modalities simultaneously while propagating information across domains during the inference procedure. In addition to the immediate benefit of generating a complete interpretation in all of the modalities, we demonstrate that this co-inference approach also improves performance over the canonical approach.
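
One way to picture the co-inference idea is a pair of belief tables, one per modality, that repeatedly mix their own evidence with correspondence-weighted beliefs from the other modality. The random evidence, random soft pixel/point links, and simple averaging update below are loose illustrative assumptions, not the paper's model or inference procedure.

```python
# Hedged sketch of cross-modal belief sharing; toy evidence and links only.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_points, n_classes = 40, 25, 3

# unary evidence per element in each modality (e.g., classifier scores), as probabilities
pix_evidence = rng.dirichlet(np.ones(n_classes), size=n_pixels)
pts_evidence = rng.dirichlet(np.ones(n_classes), size=n_points)

# soft correspondence weights between pixels and 3-D points (rows normalized);
# in practice these would come from approximate geometry, not random numbers
link = rng.random((n_pixels, n_points))
pix_to_pts = link / link.sum(axis=1, keepdims=True)       # for each pixel: weights over points
pts_to_pix = (link / link.sum(axis=0, keepdims=True)).T   # for each point: weights over pixels

pix_belief, pts_belief = pix_evidence.copy(), pts_evidence.copy()
for _ in range(10):   # alternate: each modality mixes its evidence with the other's beliefs
    pix_belief = 0.5 * pix_evidence + 0.5 * (pix_to_pts @ pts_belief)
    pts_belief = 0.5 * pts_evidence + 0.5 * (pts_to_pix @ pix_belief)

print("pixel labels:", pix_belief.argmax(1)[:10], "point labels:", pts_belief.argmax(1)[:10])
```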

Collaboration


Dive into James Andrew Bagnell's collaborations.

Top Co-Authors

David M. Bradley (Carnegie Mellon University)
Martial Hebert (Carnegie Mellon University)
Anthony Stentz (Carnegie Mellon University)
Brian D. Ziebart (University of Illinois at Chicago)
Anind K. Dey (Carnegie Mellon University)
David LaRose (Carnegie Mellon University)
David Silver (Carnegie Mellon University)
Kris M. Kitani (Carnegie Mellon University)