Chris J. Ostafew
University of Toronto
Publications
Featured research published by Chris J. Ostafew.
International Conference on Robotics and Automation | 2014
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot
This paper presents a Learning-based Nonlinear Model Predictive Control (LB-NMPC) algorithm for an autonomous mobile robot to reduce path-tracking errors over repeated traverses along a reference path. The LB-NMPC algorithm uses a simple a priori vehicle model and a learned disturbance model. Disturbances are modelled as a Gaussian Process (GP) based on experience collected during previous traversals as a function of system state, input and other relevant variables. Modelling the disturbance as a GP enables interpolation and extrapolation of learned disturbances, a key feature of this algorithm. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied environments. The paper presents experimental results including over 1.8 km of travel by a four-wheeled, 50 kg robot travelling through challenging terrain (including steep, uneven hills) and by a six-wheeled, 160 kg robot learning disturbances caused by unmodelled dynamics at speeds ranging from 0.35 m/s to 1.0 m/s. The speed is scheduled to balance trial time, path-tracking errors, and localization reliability based on previous experience. The results show that the system can start from a generic a priori vehicle model and subsequently learn to reduce vehicle- and trajectory-specific path-tracking errors based on experience.
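The core of the disturbance model described above is standard GP regression: learned disturbances are predicted, with uncertainty, from features such as state and input. A minimal sketch of that idea (the kernel hyperparameters and scalar-disturbance simplification are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between row-vector inputs A (n x d) and B (m x d)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * sq / length_scale ** 2)

class GPDisturbance:
    """Predicts a scalar disturbance from (state, input) features via GP regression."""

    def __init__(self, noise_var=0.01):
        self.noise_var = noise_var

    def fit(self, X, y):
        # Store training experience and precompute the kernel-matrix solves.
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X) + self.noise_var * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, np.asarray(y, float))
        self.K_inv = np.linalg.inv(K)

    def predict(self, Xs):
        # Posterior mean and variance: far from experience, variance grows,
        # which is what lets the controller reason about model confidence.
        Xs = np.asarray(Xs, float)
        Ks = rbf_kernel(Xs, self.X)
        mean = Ks @ self.alpha
        var = rbf_kernel(Xs, Xs).diagonal() - np.einsum('ij,jk,ik->i', Ks, self.K_inv, Ks)
        return mean, var
```

Inside an NMPC loop, the predicted mean would be added to the a priori vehicle model at each step of the prediction horizon.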
Intelligent Robots and Systems | 2013
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot
This paper presents a path-repeating, mobile robot controller that combines a feedforward, proportional Iterative Learning Control (ILC) algorithm with a feedback-linearized path-tracking controller to reduce path-tracking errors over repeated traverses along a reference path. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied, extreme environments. The paper presents experimental results including over 600 m of travel by a four-wheeled, 50 kg robot travelling through challenging terrain including steep hills and sandy turns and by a six-wheeled, 160 kg robot at gradually-increased speeds up to three times faster than the nominal, safe speed. In the absence of a global localization system, ILC is demonstrated to reduce path-tracking errors caused by unmodelled robot dynamics and terrain challenges.
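The feedforward ILC idea above can be sketched on a toy one-dimensional tracking problem: each traversal, a fraction of the previous trial's error is folded into the feedforward input, so repeatable disturbances are learned away. The plant, gains, and one-step error shift here are illustrative assumptions, not the paper's vehicle model:

```python
import numpy as np

def run_trial(u_ff, disturbance, kp=0.8):
    """One traverse of a toy 1-D path: feedback plus ILC feedforward, reference = 0."""
    e = np.zeros_like(u_ff)
    x = 0.0
    for t in range(len(u_ff)):
        e[t] = -x                        # tracking error vs. the reference
        u = kp * e[t] + u_ff[t]          # feedback term + learned feedforward
        x = x + u + disturbance[t]       # unmodelled but repeatable disturbance
    return e

def ilc_update(u_ff, e, gamma=0.5):
    """Proportional ILC with a one-step shift: error at t+1 reflects the input at t."""
    u_new = u_ff.copy()
    u_new[:-1] += gamma * e[1:]
    return u_new

T = 50
dist = 0.2 * np.sin(np.linspace(0, 2 * np.pi, T))   # repeatable terrain effect
u_ff = np.zeros(T)
errors = []
for trial in range(10):
    e = run_trial(u_ff, dist)
    errors.append(np.abs(e).max())
    u_ff = ilc_update(u_ff, e)
```

Over repeated trials the peak tracking error shrinks, mirroring the paper's trial-to-trial error reduction.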
Journal of Field Robotics | 2016
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot; Jack Collier
This paper presents a Learning-based Nonlinear Model Predictive Control (LB-NMPC) algorithm for an autonomous mobile robot to reduce path-tracking errors over repeated traverses along a reference path. The LB-NMPC algorithm uses a simple a priori vehicle model and a learned disturbance model. Disturbances are modelled as a Gaussian Process (GP) based on experience collected during previous traversals as a function of system state, input and other relevant variables. Modelling the disturbance as a GP enables interpolation and extrapolation of learned disturbances, a key feature of this algorithm. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied environments. The paper presents experimental results including over 1.8 km of travel by a four-wheeled, 50 kg robot travelling through challenging terrain (including steep, uneven hills) and by a six-wheeled, 160 kg robot learning disturbances caused by unmodelled dynamics at speeds ranging from 0.35 m/s to 1.0 m/s. The speed is scheduled to balance trial time, path-tracking errors, and localization reliability based on previous experience. The results show that the system can start from a generic a priori vehicle model and subsequently learn to reduce vehicle- and trajectory-specific path-tracking errors based on experience.
The International Journal of Robotics Research | 2016
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot
This paper presents a Robust Constrained Learning-based Nonlinear Model Predictive Control (RC-LB-NMPC) algorithm for path-tracking in off-road terrain. For mobile robots, constraints may represent solid obstacles or localization limits. As a result, constraint satisfaction is required for safety. Constraint satisfaction is typically guaranteed through the use of accurate, a priori models or robust control. However, accurate models are generally not available for off-road operation. Furthermore, robust controllers are often conservative, since model uncertainty is not updated online. In this work our goal is to use learning to generate low-uncertainty, non-parametric models in situ. Based on these models, the predictive controller computes both linear and angular velocities in real-time, such that the robot drives at or near its capabilities while respecting path and localization constraints. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, off-road environments. The paper presents experimental results, including over 5 km of travel by a 900 kg skid-steered robot at speeds of up to 2.0 m/s. The result is a robust, learning controller that provides safe, conservative control during initial trials when model uncertainty is high and converges to high-performance, optimal control during later trials when model uncertainty is reduced with experience.
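One common way to make learned-model uncertainty yield constraint satisfaction, in the spirit described above, is to tighten the path constraints by the model's confidence bound: early on, high uncertainty shrinks the allowed corridor and the controller is conservative; with experience, the corridor relaxes. A minimal sketch of that mechanism (the specific tightening rule is an illustrative assumption, not the paper's formulation):

```python
import numpy as np

def tightened_bound(lateral_bound, pred_std, n_sigma=3.0):
    """Shrink a lateral path constraint by the learned model's n-sigma uncertainty,
    so a plan satisfying the tightened bound keeps the true robot in the corridor."""
    return np.maximum(lateral_bound - n_sigma * np.asarray(pred_std, float), 0.0)
```

As GP predictive standard deviations fall with experience, the tightened bound converges back toward the nominal one, recovering high-performance control.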
Journal of Field Robotics | 2017
Michael Paton; François Pomerleau; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Vision-based, autonomous, route-following algorithms enable robots to autonomously repeat manually driven routes over long distances. Through the use of inexpensive, commercial vision sensors, these algorithms have the potential to enable robotic applications across multiple industries. However, in order to extend these algorithms to long-term autonomy, they must be able to operate over long periods of time. This poses a difficult challenge for vision-based systems in unstructured and outdoor environments, where appearance is highly variable. While many techniques have been developed to perform localization across extreme appearance change, most are not suitable or untested for vision-in-the-loop systems such as autonomous route following, which requires continuous metric localization to keep the robot driving. In this paper, we present a vision-based, autonomous, route-following algorithm that combines multiple channels of information during localization to increase robustness against daily appearance change such as lighting. We explore this multichannel visual teach and repeat framework by adding the following channels of information to the basic single-camera, gray-scale, localization pipeline: images that are resistant to lighting change and images from additional stereo cameras to increase the algorithm's field of view. Using these methods, we demonstrate robustness against appearance change through extensive field deployments spanning over 26 km with an autonomy rate greater than 99.9%. We furthermore discuss the limits of this system when subjected to harsh environmental conditions by investigating keypoint match degradation through time.
International Conference on Robotics and Automation | 2015
Michael Paton; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Stereo Visual Teach & Repeat (VT&R) is a system for long-range, autonomous route following in unstructured 3D environments. As this system relies on a passive sensor to localize, it is highly susceptible to changes in lighting conditions. Recent work in the optics community has provided a method to transform images collected from a three-channel passive sensor into color-constant images that are resistant to changes in outdoor lighting conditions. This paper presents a lighting-resistant VT&R system that uses experimentally trained color-constant images to autonomously navigate difficult outdoor terrain despite changes in lighting. We show through an extensive field trial that our algorithm is capable of autonomously following a 1 km outdoor route spanning sandy/rocky terrain, grassland, and wooded areas. Using a single visual map created at midday, the route was autonomously repeated 26 times over a period of four days, from sunrise to sunset with an autonomy rate (by distance) of over 99.9%. These experiments show that a simple image transformation can extend the operation of VT&R from a few hours to multiple days.
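The "simple image transformation" referenced above is a log-chromaticity combination of the three color channels whose mixing weight is trained for the camera. A minimal sketch, assuming a single camera-dependent weight (the value 0.48 and the channel ordering here are illustrative assumptions, not the trained values from the paper):

```python
import numpy as np

ALPHA = 0.48  # camera-dependent weight, found experimentally; not universal

def color_constant_image(rgb):
    """Collapse an RGB image to one channel that is approximately invariant to
    illumination change: I = log G - alpha * log B - (1 - alpha) * log R."""
    eps = 1e-6  # avoid log(0) on dark pixels
    r = rgb[..., 0] + eps
    g = rgb[..., 1] + eps
    b = rgb[..., 2] + eps
    return np.log(g) - ALPHA * np.log(b) - (1.0 - ALPHA) * np.log(r)
```

Because the channel weights sum to zero in log space, a uniform brightness change cancels out, which is why the resulting single-channel images are usable for keypoint matching from sunrise to sunset.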
Canadian Conference on Computer and Robot Vision | 2014
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot; Jack Collier
A time-optimal speed schedule results in a mobile robot driving along a planned path at or near the limits of the robot's capability. However, deriving models to predict the effect of increased speed can be very difficult. In this paper, we present a speed scheduler that uses previous experience, instead of complex models, to generate time-optimal speed schedules. The algorithm is designed for a vision-based, path-repeating mobile robot and uses experience to ensure reliable localization, low path-tracking errors, and realizable control inputs while maximizing the speed along the path. To our knowledge, this is the first speed scheduler to incorporate experience from previous path traversals in order to address system constraints. The proposed speed scheduler was tested in over 4 km of path traversals in outdoor terrain using a large Ackermann-steered robot travelling between 0.5 m/s and 2.0 m/s. The approach to speed scheduling is shown to generate fast speed schedules while remaining within the limits of the robot's capability.
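The experience-driven scheduling idea above can be sketched as a simple per-segment update rule: slow down wherever the last traversal violated a tracking-error or localization limit, and otherwise push the segment faster toward time-optimality. The thresholds, step size, and feature-match proxy for localization reliability are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def update_speed_schedule(speeds, errors, matches,
                          e_max=0.3, m_min=20,
                          v_min=0.5, v_max=2.0, step=0.25):
    """Adjust per-segment speeds from the last traversal's experience:
    slow down where path-tracking error exceeded e_max or feature-match
    counts fell below m_min; otherwise nudge the segment faster."""
    speeds = np.asarray(speeds, float).copy()
    bad = (np.asarray(errors) > e_max) | (np.asarray(matches) < m_min)
    speeds[bad] -= step
    speeds[~bad] += step
    return np.clip(speeds, v_min, v_max)
```

Iterated over traversals, the schedule rises everywhere the constraints permit and settles just below the speeds at which tracking or localization degrades.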
International Conference on Robotics and Automation | 2015
Chris J. Ostafew; Angela P. Schoellig; Timothy D. Barfoot
Robust control maintains stability and performance for a fixed amount of model uncertainty but can be conservative since the model is not updated online. Learning-based control, on the other hand, uses data to improve the model over time but is not typically guaranteed to be robust throughout the process. This paper proposes a novel combination of both ideas: a robust Min-Max Learning-Based Nonlinear Model Predictive Control (MM-LB-NMPC) algorithm. Based on an existing LB-NMPC algorithm, we present an efficient and robust extension, altering the NMPC performance objective to optimize for the worst-case scenario. The algorithm uses a simple a priori vehicle model and a learned disturbance model. Disturbances are modelled as a Gaussian Process (GP) based on experience collected during previous trials as a function of system state, input, and other relevant variables. Nominal state sequences are predicted using an Unscented Transform and worst-case scenarios are defined as sequences bounding the 3σ confidence region. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied environments. The paper presents experimental results from testing on a 50 kg skid-steered robot executing a path-tracking task. The results show reductions in maximum lateral and heading path-tracking errors by up to 30% and a clear transition from robust control when the model uncertainty is high to optimal control when model uncertainty is reduced.
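The inner "max" of the min-max objective described above can be sketched by rolling a simple model forward under a few bounding disturbance scenarios and keeping the most expensive one; an outer optimizer then minimizes that worst-case cost over the input sequence. The scalar model, quadratic cost, and three-scenario approximation here are illustrative assumptions, not the paper's Unscented-Transform formulation:

```python
import numpy as np

def min_max_cost(u_seq, x0, gp_mean, gp_std, ref, q=1.0, r=0.1, n_sigma=3.0):
    """Worst-case tracking cost of an input sequence: roll a nominal model
    forward with the learned disturbance shifted by +/- n_sigma standard
    deviations and keep the most expensive scenario (the inner max)."""
    worst = -np.inf
    for sign in (-1.0, 0.0, 1.0):
        x, cost = x0, 0.0
        for t, u in enumerate(u_seq):
            d = gp_mean[t] + sign * n_sigma * gp_std[t]
            x = x + u + d                              # a priori model + learned disturbance
            cost += q * (x - ref[t]) ** 2 + r * u ** 2
        worst = max(worst, cost)
    return worst
```

When the GP's predictive standard deviation is large, the bounding scenarios dominate and the minimizing inputs are conservative; as experience shrinks the uncertainty, all scenarios collapse onto the nominal one and the controller recovers the optimal, non-robust behaviour, matching the transition the abstract reports.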
Applied Optics | 2007
M. Mony; Eric Bisaillon; Ehab Shoukry; Chris J. Ostafew; Etienne Grondin; Vincent Aimez; David V. Plant
A novel reprogrammable optical phase array (ROPA) device is presented as a reconfigurable electro-optic element. One specific application of the ROPA, a 1 × 6 electro-optic space switch, is fully described. Switching angles are within 2 degrees, and switching is achieved through a complementary metal-oxide semiconductor (CMOS) controlled, diffraction-based, optical phase array in a bulk BaTiO3 crystal. The crystal is flip-chipped to the CMOS chip, creating a compact fully integrated device. The design, optical simulation, and fabrication of the device are described, and preliminary experimental results are presented.
Lasers and Electro-Optics Society Meeting | 2006
Eric Bisaillon; D. T. H. Tan; Behnam Faraji; Y. Zeng; Chris J. Ostafew; R. Krishna-Prasad; Lukas Chrostowski; David V. Plant
We present a lithographically tunable, resonant subwavelength grating based Fabry-Perot cavity structure for multiwavelength array devices. Theoretical and experimental performance of the cavity is discussed.