Drew Bagnell
Carnegie Mellon University
Publications
Featured research published by Drew Bagnell.
international conference on robotics and automation | 2005
Jeff G. Schneider; David Apfelbaum; Drew Bagnell; Reid G. Simmons
Direct human control of multi-robot systems is limited by the cognitive ability of humans to coordinate numerous interacting components. In remote environments, such as those encountered during planetary or ocean exploration, a further limit is imposed by communication bandwidth and delay. Market-based planning can give humans a higher-level interface to multi-robot systems in these scenarios. Operators provide high-level tasks and attach a reward to the achievement of each task. The robots then trade these tasks through a market-based mechanism. The challenge for the system designer is to create bidding algorithms for the robots that yield high overall system performance. Opportunity cost provides a natural basis for such bidding algorithms, since it encapsulates all the costs and benefits we are interested in. Unfortunately, computing it can be difficult. We propose a method of learning opportunity costs in market-based planners. We provide analytic results in simplified scenarios and empirical results on our FIRE simulator, which focuses on exploration of Mars by multiple heterogeneous rovers.
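A minimal sketch of the market mechanism this abstract outlines, assuming a single-round sealed-bid auction and a hand-coded, travel-distance-based stand-in for opportunity cost; the class and function names are illustrative assumptions, and the paper's contribution is to learn this cost rather than hand-code it.

```python
import math

# Illustrative single-round task auction: each robot bids an estimated
# opportunity cost, and the operator-assigned reward decides whether the
# lowest bid is worth executing at all.

class Robot:
    def __init__(self, name, position):
        self.name = name
        self.position = position      # (x, y) on a 2-D map
        self.committed_cost = 0.0     # cost of tasks already won

    def opportunity_cost(self, task):
        """Hand-coded stand-in: travel cost plus a penalty for prior commitments.
        In the paper this quantity is learned, not hand-coded."""
        travel = math.dist(self.position, task["location"])
        return travel + 0.5 * self.committed_cost

def auction(task, robots):
    """Award the task to the lowest bidder if the bid is below the task reward."""
    bids = {r: r.opportunity_cost(task) for r in robots}
    winner = min(bids, key=bids.get)
    if bids[winner] <= task["reward"]:
        winner.committed_cost += bids[winner]
        return winner, bids[winner]
    return None, None  # no profitable bid; the task stays on the market

if __name__ == "__main__":
    rovers = [Robot("rover_a", (0.0, 0.0)), Robot("rover_b", (10.0, 2.0))]
    task = {"location": (3.0, 4.0), "reward": 8.0}
    winner, price = auction(task, rovers)
    print(winner.name if winner else "unassigned", price)
```

In this toy version a task is assigned only when the lowest opportunity-cost bid falls below the operator-assigned reward, mirroring the abstract's framing of rewards as the operator's high-level interface to the robot team.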
Proceedings of SPIE | 2013
Arne Suppé; Luis E. Navarro-Serment; Daniel Munoz; Drew Bagnell; Martial Hebert
We describe an architecture to provide online semantic labeling capabilities to field robots operating in urban environments. At the core of our system is the stacked hierarchical classifier developed by Munoz et al., which classifies regions in monocular color images using models derived from hand-labeled training data. The classifier is trained to identify buildings, several kinds of hard surfaces, grass, trees, and sky. When taking this algorithm into the real world, practical concerns with difficult and varying lighting conditions require careful control of the imaging process. First, camera exposure is controlled in software, which examines all of the image's pixels to compensate for the camera's simplistic, poorly performing built-in algorithm. Second, by merging multiple images taken with different exposure times, we are able to synthesize images with higher dynamic range than those produced by the sensor itself. The sensor's limited dynamic range makes it difficult to properly expose areas in shadow at the same time as high-albedo surfaces directly illuminated by the sun. Texture is a key feature used by the classifier, and under- or over-exposed regions lacking texture are a leading cause of misclassifications. The results of the classifier are shared with higher-level elements operating on the UGV in order to perform tasks such as building identification from a distance and finding traversable surfaces.
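A minimal sketch of the multi-exposure merge the abstract mentions, assuming 8-bit frames captured at known exposure times; the weighting scheme and function name are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Fuse differently exposed 8-bit frames into a single radiance estimate.

    frames: list of HxW (or HxWx3) uint8 arrays of the same static scene.
    exposure_times: matching list of exposure times in seconds.
    Returns a float radiance map with higher dynamic range than any one frame.
    """
    acc = None
    weight_sum = None
    for img, t in zip(frames, exposure_times):
        pixels = img.astype(np.float64) / 255.0
        # Trust mid-range pixels; discount values near saturation or the noise floor.
        weights = 1.0 - 2.0 * np.abs(pixels - 0.5)
        radiance = pixels / t  # divide out exposure time to estimate scene radiance
        if acc is None:
            acc = np.zeros_like(radiance)
            weight_sum = np.zeros_like(radiance)
        acc += weights * radiance
        weight_sum += weights
    return acc / np.maximum(weight_sum, 1e-6)

# Example: three frames bracketed around the metered exposure.
# hdr = merge_exposures([dark, mid, bright], [0.001, 0.004, 0.016])
```

The idea is that shadowed regions are recovered from the long exposure and sunlit, high-albedo surfaces from the short one, so the fused image retains texture in both, which is what the classifier needs.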
The International Journal of Robotics Research | 2018
Jiaji Zhou; Matthew T. Mason; Robert Paolini; Drew Bagnell
We propose a polynomial model for planar sliding mechanics. For the force–motion mapping, we treat the set of generalized friction loads as the 1-sublevel set of a polynomial whose gradient directions correspond to generalized velocities. The polynomial is constrained to be convex, even-degree, and homogeneous, so that it obeys the maximum work inequality and symmetry, retains its shape under scaling, and is fast to invert. We present a simple and statistically efficient model identification procedure using a sum-of-squares convex relaxation. We then derive the kinematic contact model that resolves the contact modes and instantaneous object motion given a position-controlled manipulator action. The inherently stochastic object-to-surface friction distributions are modeled by sampling polynomial parameters from distributions that preserve sum-of-squares convexity. Because the model is smooth, it captures the mechanics of patch contact while remaining computationally efficient, with no mode selection needed at support points. Simulation and robotic experiments on pushing and grasping validate the accuracy and efficiency of our approach.
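A minimal sketch of the force–motion mapping the abstract describes, assuming a simple degree-4 convex homogeneous polynomial in the generalized friction load (f_x, f_y, tau); the directions and coefficients below are made up for illustration, whereas the paper fits them with a sum-of-squares program.

```python
import numpy as np

# Illustrative limit polynomial H(F) = sum_i c_i * (a_i . F)^4, which is convex
# and even-degree homogeneous by construction. The 1-sublevel set {F : H(F) <= 1}
# plays the role of the limit surface; the gradient at a boundary load gives the
# direction of the generalized velocity (maximum-work / normality rule).

A = np.array([[1.0, 0.0, 0.0],   # made-up directions a_i for the monomial terms
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.6, 0.6, 0.5]])
C = np.array([1.0, 1.0, 2.0, 0.8])  # made-up positive coefficients c_i

def limit_poly(F):
    """Evaluate H(F) for a generalized load F = (f_x, f_y, tau)."""
    return float(np.sum(C * (A @ F) ** 4))

def velocity_direction(F):
    """Normalized gradient of H at F; proportional to the generalized velocity."""
    grad = 4.0 * A.T @ (C * (A @ F) ** 3)
    return grad / np.linalg.norm(grad)

if __name__ == "__main__":
    F = np.array([0.7, 0.2, 0.1])
    # Rescale F onto the boundary H(F) = 1; homogeneity makes this a simple scaling.
    F_boundary = F / limit_poly(F) ** 0.25
    print(limit_poly(F_boundary))           # ~1.0
    print(velocity_direction(F_boundary))   # unit generalized-velocity direction
```

Because each term is a positive multiple of a fourth power of a linear form, H is convex and symmetric under F -> -F, and the degree-4 homogeneity is what makes the rescaling onto the boundary a closed-form step.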
Journal of Field Robotics | 2008
Chris Urmson; Joshua Anhalt; Drew Bagnell; Christopher R. Baker; Robert Bittner; M. N. Clark; John M. Dolan; Dave Duggins; Tugrul Galatali; Christopher Geyer; Michele Gittleman; Sam Harbaugh; Martial Hebert; Thomas M. Howard; Sascha Kolski; Alonzo Kelly; Maxim Likhachev; Matthew McNaughton; Nicholas Miller; Kevin M. Peterson; Brian Pilnick; Raj Rajkumar; Paul E. Rybski; Bryan Salesky; Young-Woo Seo; Sanjiv Singh; Jarrod M. Snider; Anthony Stentz; Ziv Wolkowicki; Jason Ziglar
international conference on artificial intelligence and statistics | 2010
Stéphane Ross; Drew Bagnell
international conference on artificial intelligence and statistics | 2012
Alexander Grubb; Drew Bagnell
international conference on machine learning | 2012
Stéphane Ross; Drew Bagnell
international conference on machine learning | 2011
Kevin Waugh; Drew Bagnell; Brian D. Ziebart
international conference on artificial intelligence and statistics | 2014
Shervin Javdani; Yuxin Chen; Amin Karbasi; Andreas Krause; Drew Bagnell; Siddhartha S. Srinivasa
robotics: science and systems | 2013
Dov Katz; Arun Venkatraman; Moslem Kazemi; Drew Bagnell; Anthony Stentz