
Publications


Featured research published by Carl Wellington.


Autonomous Robots | 2002

A System for Semi-Autonomous Tractor Operations

Anthony Stentz; Cristian Dima; Carl Wellington; Herman Herman; David Stager

Tractors are the workhorses of the modern farm. By automating these machines, we can increase the productivity, improve safety, and reduce costs for many agricultural operations. Many researchers have tested computer-controlled machines for farming, but few have investigated the larger issues such as how humans can supervise machines and work amongst them. In this paper, we present a system for tractor automation. A human programs a task by driving the relevant routes. The task is divided into subtasks and assigned to a fleet of tractors that drive portions of the routes. Each tractor uses on-board sensors to detect people, animals, and other vehicles in the path of the machine, stopping for such obstacles until it receives advice from a supervisor over a wireless link. A first version of the system was implemented on a single tractor. Several features of the system were validated, including accurate path tracking, the detection of obstacles based on both geometric and non-geometric properties, and self-monitoring to determine when human intervention is required. Additionally, the complete system was tested in a Florida orange grove, where it autonomously drove seven kilometers.
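As a rough illustration of the supervision model described above, the sketch below implements a stop-and-wait loop in which a tractor follows its taught route, halts when an obstacle is detected, and resumes only after advice arrives from a remote supervisor. The function and state names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed names, not the paper's code) of the stop-and-wait
# supervision loop: a tractor follows its assigned route, halts when an
# obstacle is detected, and resumes only after a remote supervisor replies.
from enum import Enum, auto


class State(Enum):
    DRIVING = auto()
    WAITING_FOR_SUPERVISOR = auto()
    DONE = auto()


def run_route(route, detect_obstacle, ask_supervisor, drive_to):
    """route: list of waypoints; the three callables stand in for the
    tractor's perception, wireless link, and path tracker."""
    state = State.DRIVING
    i = 0
    while state is not State.DONE:
        if state is State.DRIVING:
            if detect_obstacle():
                state = State.WAITING_FOR_SUPERVISOR  # stop for people, animals, vehicles
            elif i >= len(route):
                state = State.DONE
            else:
                drive_to(route[i])                    # follow the taught path
                i += 1
        else:  # WAITING_FOR_SUPERVISOR
            advice = ask_supervisor()                 # blocking request over the wireless link
            if advice == "proceed":
                state = State.DRIVING
            elif advice == "abort":
                state = State.DONE
```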


The International Journal of Robotics Research | 2006

A Generative Model of Terrain for Autonomous Navigation in Vegetation

Carl Wellington; Aaron C. Courville; Anthony Stentz

Current approaches to off-road autonomous navigation are often limited by their ability to build a terrain model from sensor data. Available sensors make very indirect measurements of quantities of interest such as the supporting ground height and the location of obstacles, especially in domains where vegetation may hide the ground surface or partially obscure obstacles. A generative, probabilistic terrain model is introduced that exploits natural structure found in off-road environments to constrain the problem and use ambiguous sensor data more effectively. The model includes two Markov random fields that encode the assumptions that ground heights smoothly vary and terrain classes tend to cluster. The model also includes a latent variable that encodes the assumption that vegetation of a single type has a similar height. The model parameters can be trained by simply driving through representative terrain. Results from a number of challenging test scenarios in an agricultural domain reveal that exploiting the 3D structure inherent in outdoor domains significantly improves ground estimates and obstacle detection accuracy, and allows the system to infer the supporting ground surface even when it is hidden under dense vegetation.
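To make the smooth-ground assumption concrete, the sketch below scores a grid of ground heights by penalizing height differences between 4-connected neighbors, which is the flavor of pairwise potential a Markov random field of this kind encodes. The quadratic potential and weight are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch (not the paper's model): a pairwise MRF smoothness
# energy over a grid of ground heights. Lower energy = smoother ground.
import numpy as np


def ground_smoothness_energy(heights: np.ndarray, w: float = 1.0) -> float:
    """Sum of squared height differences between 4-connected neighbors."""
    dz_rows = np.diff(heights, axis=0)   # vertical neighbor differences
    dz_cols = np.diff(heights, axis=1)   # horizontal neighbor differences
    return w * (np.sum(dz_rows ** 2) + np.sum(dz_cols ** 2))


# Example: a gently sloping field scores lower than one with a sudden ledge in it.
smooth = np.add.outer(np.linspace(0, 0.5, 10), np.linspace(0, 0.5, 10))
step = smooth.copy()
step[5:, :] += 1.0                       # a sudden 1 m step
print(ground_smoothness_energy(smooth) < ground_smoothness_energy(step))  # True
```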


Robotics: Science and Systems | 2005

Interacting Markov Random Fields for Simultaneous Terrain Modeling and Obstacle Detection.

Carl Wellington; Aaron C. Courville; Anthony Stentz

Autonomous navigation in outdoor environments with vegetation is difficult because available sensors make very indirect measurements on quantities of interest such as the supporting ground height and the location of obstacles. We introduce a terrain model that includes spatial constraints on these quantities to exploit structure found in outdoor domains and use available sensor data more effectively. The model consists of a latent variable that establishes a prior that favors vegetation of a similar height, plus multiple Markov random fields that incorporate neighborhood interactions and impose a prior on smooth ground and class continuity. These Markov random fields interact through a hidden semi-Markov model that enforces a prior on the vertical structure of elements in the environment. The system runs in real-time and has been trained and tested using real data from an agricultural setting. Results show that exploiting the 3D structure inherent in outdoor domains significantly improves ground height estimates and obstacle detection accuracy.
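One way to see how such neighborhood interactions pull noisy measurements toward a coherent ground surface is an iterated local update in which each cell blends its own measurement with its neighbors' current estimates. This is a generic illustration of MRF-style smoothing under assumed parameters, not the inference procedure used in the paper.

```python
# Generic illustration of MRF-style neighborhood smoothing (not the paper's
# inference): each grid cell's ground-height estimate is repeatedly updated
# toward a blend of its noisy measurement and its neighbors' estimates.
import numpy as np


def smooth_ground(measurements: np.ndarray, lam: float = 4.0, iters: int = 50) -> np.ndarray:
    """lam weighs neighbor agreement against fidelity to the raw measurement."""
    z = measurements.copy()
    for _ in range(iters):
        # Mean of 4-connected neighbors, with edge padding at the grid border.
        padded = np.pad(z, 1, mode="edge")
        nbr_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        z = (measurements + lam * nbr_mean) / (1.0 + lam)
    return z
```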


Journal of Field Robotics | 2015

CHIMP, the CMU Highly Intelligent Mobile Platform

Anthony Stentz; Herman Herman; Alonzo Kelly; Eric Meyhofer; G. Clark Haynes; David Stager; Brian Zajac; J. Andrew Bagnell; Jordan Brindza; Christopher M. Dellin; Michael David George; Jose Gonzalez-Mora; Sean Hyde; Morgan Jones; Michel Laverne; Maxim Likhachev; Levi Lister; Matthew Powers; Oscar Ramos; Justin Ray; David Rice; Justin Scheifflee; Raumi Sidki; Siddhartha S. Srinivasa; Kyle Strabala; Jean-Philippe Tardif; Jean-Sebastien Valois; Michael Vande Weghe; Michael D. Wagner; Carl Wellington

We have developed the CHIMP (CMU Highly Intelligent Mobile Platform) robot as a platform for executing complex tasks in dangerous, degraded, human-engineered environments. CHIMP has a near-human form factor, work-envelope, strength, and dexterity to work effectively in these environments. It avoids the need for complex control by maintaining static rather than dynamic stability. Utilizing various sensors embedded in the robot's head, CHIMP generates full three-dimensional representations of its environment and transmits these models to a human operator to achieve latency-free situational awareness. This awareness is used to visualize the robot within its environment and preview candidate free-space motions. Operators using CHIMP are able to select between task, workspace, and joint space control modes to trade between speed and generality. Thus, they are able to perform remote tasks quickly, confidently, and reliably, due to the overall design of the robot and software. CHIMP's hardware was designed, built, and tested over 15 months leading up to the DARPA Robotics Challenge. The software was developed in parallel using surrogate hardware and simulation tools. Over a six-week span prior to the DRC Trials, the software was ported to the robot, the system was debugged, and the tasks were practiced continuously. Given the aggressive schedule leading to the DRC Trials, development of CHIMP focused primarily on manipulation tasks. Nonetheless, our team finished 3rd out of 16. With an upcoming year to develop new software for CHIMP, we look forward to improving the robot's capability and increasing its speed to compete in the DRC Finals.


International Conference on Robotics and Automation | 2004

Online adaptive rough-terrain navigation in vegetation

Carl Wellington; Anthony Stentz

Autonomous navigation in vegetation is challenging because the vegetation often hides the load-bearing surface, which is used for evaluating the safety of potential actions. It is difficult to design rules for finding the true ground height in vegetation from forward looking sensor data, so we use an online adaptive method to automatically learn this mapping through experience with the world. This approach has been implemented on an autonomous tractor and has been tested in a farm setting. We describe the system and provide examples of finding obstacles and improving roll predictions in the presence of vegetation. We also show that the system can adapt to new vegetation conditions.
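The online adaptation idea, learning a mapping from forward-looking sensor features to the true load-bearing height using the height actually measured once the vehicle drives over that terrain, can be sketched as an incremental regression. The feature vector and learning rule below are assumptions chosen for illustration, not the system's actual learner.

```python
# Sketch of online adaptation (illustrative, assumed features): incrementally
# fit a linear map from forward-looking terrain features to the load-bearing
# ground height, using the height measured once the vehicle drives over that
# cell as the training label.
import numpy as np


class OnlineGroundEstimator:
    def __init__(self, n_features: int, lr: float = 0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, features: np.ndarray) -> float:
        """Predicted supporting ground height for one terrain cell."""
        return float(self.w @ features)

    def update(self, features: np.ndarray, measured_ground_height: float) -> None:
        """Stochastic gradient step on squared error once ground truth arrives."""
        error = self.predict(features) - measured_ground_height
        self.w -= self.lr * error * features
```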


International Conference on Robotics and Automation | 2011

PVS: A system for large scale outdoor perception performance evaluation

Cristian Dima; Carl Wellington; Stewart J. Moorehead; Levi Lister; Joan Campoy; Carlos Vallespi; Boyoon Jung; Michio Kise; Zachary T. Bonefas

This paper describes the motivation, design and implementation of a Perception Validation System (PVS), a system for measuring the outdoor perception performance of an autonomous vehicle. The PVS relies on using large amounts of real world data and ground truth information to quantify performance aspects such as the rate of false positive or false negative detections of an obstacle detection system. Our system relies on a relational database infrastructure to achieve a high degree of flexibility in the type of analyses it can support.
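A minimal sketch of the kind of query such a database supports is shown below, under the assumption of two tables, detections and ground_truth, keyed by frame; the schema and column names are illustrative, not the PVS schema.

```python
# Illustrative sketch (assumed schema, not the PVS one): compute false-positive
# and false-negative counts from a relational store of detections and labels.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ground_truth (frame_id INTEGER, object_id INTEGER);
    CREATE TABLE detections  (frame_id INTEGER, object_id INTEGER);  -- NULL object_id = unmatched
""")
con.executemany("INSERT INTO ground_truth VALUES (?, ?)", [(1, 10), (1, 11), (2, 20)])
con.executemany("INSERT INTO detections VALUES (?, ?)", [(1, 10), (1, None), (2, 20)])

# False positives: detections that were not matched to any labeled object.
false_positives = con.execute(
    "SELECT COUNT(*) FROM detections WHERE object_id IS NULL"
).fetchone()[0]

# False negatives: labeled objects with no detection in the same frame.
false_negatives = con.execute("""
    SELECT COUNT(*) FROM ground_truth g
    WHERE NOT EXISTS (
        SELECT 1 FROM detections d
        WHERE d.frame_id = g.frame_id AND d.object_id = g.object_id
    )
""").fetchone()[0]

print(false_positives, false_negatives)  # 1 1
```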


Proceedings of SPIE | 2010

R-Gator: an unmanned utility vehicle

Stewart J. Moorehead; Carl Wellington; Heidi Paulino; John F. Reid

The R-Gator is an unmanned ground vehicle built on the John Deere 6x4 M-Gator utility vehicle chassis. The vehicle is capable of operating in urban and off-road terrain and has a large payload to carry supplies, wounded, or a marsupial robot. The R-Gator has 6 modes of operation: manual driving, teleoperation, waypoint, direction drive, playback and silent sentry. In direction drive the user specifies a direction for the robot. It will continue in that direction, avoiding obstacles, until given a new direction. Playback allows previously recorded paths, from any other mode including manual, to be played back and repeated. Silent sentry allows the engine to be turned off remotely while cameras, computers and comms remain powered by batteries. In this mode the vehicle stays quiet and stationary, collecting valuable surveillance information. The user interface consists of a wearable computer, monocle and standard video game controller. All functions of the R-Gator can be controlled by the handheld game controller, using at most 2 button presses. This easy to use user interface allows even untrained users to control the vehicle. This paper details the systems developed for the R-Gator, focusing on the novel user interface and the obstacle detection system, which supports safeguarded teleoperation as well as full autonomous operation in off-road terrain. The design for a new 4-wheel, independent suspension chassis version of the R-Gator is also presented.


International Symposium on Safety, Security, and Rescue Robotics | 2015

People in the weeds: Pedestrian detection goes off-road

Trenton Tabor; Zachary A. Pezzementi; Carlos Vallespi; Carl Wellington

Robotics offers a great opportunity to improve efficiency while also improving safety, but reliable detection of humans in off-road environments remains a key challenge. We present a person detector evaluation on a dataset collected from an autonomous tractor in an off-road environment representing challenging conditions with significant occlusion from weeds and branches as well as non-standing poses. We apply three image-only algorithms from urban pedestrian detection to better understand how well these approaches work in this domain. We evaluate the Aggregate Channel Features (ACF) and Deformable Parts Model (DPM) algorithms from the literature, as well as our own implementation of a Convolutional Neural Network (CNN). We show that the traditional performance metric used in the pedestrian detection literature is extremely sensitive to parameterization. When applied in domains like this one, where localization is challenging due to high background texture and occlusion, the choice of overlap threshold strongly affects measured performance. Using a permissive overlap threshold, we found that ACF, DPM, and CNN perform similarly overall in this domain, although they each have different failure modes.
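The sensitivity mentioned above comes from how a detection is declared a true positive: its bounding box must overlap a ground-truth box by at least a chosen intersection-over-union (IoU) threshold. The sketch below shows that criterion with example thresholds; it is a generic restatement of the standard metric, not the paper's evaluation code.

```python
# Sketch of the overlap criterion whose threshold drives the reported
# sensitivity: a detection counts as a true positive only if its IoU with a
# ground-truth box meets the chosen threshold.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2). Returns intersection-over-union in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def is_true_positive(detection, ground_truth_boxes, threshold=0.5):
    return any(iou(detection, gt) >= threshold for gt in ground_truth_boxes)


# A loosely localized detection passes a permissive threshold but fails a strict one.
det, gt = (10, 10, 60, 110), (20, 15, 70, 115)
print(is_true_positive(det, [gt], threshold=0.3), is_true_positive(det, [gt], threshold=0.7))
```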


Journal of Field Robotics | 2018

Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset

Zachary A. Pezzementi; Trenton Tabor; Peiyun Hu; Jonathan K. Chang; Deva Ramanan; Carl Wellington; Benzun P. Wisely Babu; Herman Herman

Person detection from vehicles has made rapid progress recently with the advent of multiple high-quality datasets of urban and highway driving, yet no large-scale benchmark is available for the same problem in off-road or agricultural environments. Here we present the National Robotics Engineering Center (NREC) Agricultural Person-Detection Dataset to spur research in these environments. It consists of labeled stereo video of people in orange and apple orchards taken from two perception platforms (a tractor and a pickup truck), along with vehicle position data from Real Time Kinematic (RTK) GPS. We define a benchmark on part of the dataset that combines a total of 76k labeled person images and 19k sampled person-free images. The dataset highlights several key challenges of the domain, including varying environment, substantial occlusion by vegetation, people in motion and in nonstandard poses, and people seen from a variety of distances; metadata are included to allow targeted evaluation of each of these effects. Finally, we present baseline detection performance results for three leading approaches from urban pedestrian detection and our own convolutional neural network approach that benefits from the incorporation of additional image context. We show that the success of existing approaches on urban data does not transfer directly to this domain.


Archive | 2005

Method and system for estimating navigability of terrain

Anthony Stentz; Carl Wellington

Collaboration


Dive into Carl Wellington's collaborations.

Top Co-Authors

Anthony Stentz
Carnegie Mellon University

Herman Herman
Carnegie Mellon University

Joan Campoy
Carnegie Mellon University

Carlos Vallespi
Carnegie Mellon University

Cristian Dima
Carnegie Mellon University

Lav R. Khot
Washington State University

Trenton Tabor
Carnegie Mellon University