Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mark Ollis is active.

Publication


Featured research published by Mark Ollis.


Autonomous Robots | 2002

The Demeter System for Automated Harvesting

Thomas Pilarski; Michael Happold; Henning Pangels; Mark Ollis; Kerien Fitzpatrick; Anthony Stentz

Automation of agricultural harvesting equipment in the near term appears both economically viable and technically feasible. This paper describes the Demeter system for automated harvesting. Demeter is a computer-controlled speedrowing machine, equipped with a pair of video cameras and a global positioning sensor for navigation. Demeter is capable of planning harvesting operations for an entire field, and then executing its plan by cutting crop rows, turning to cut successive rows, repositioning itself in the field, and detecting unexpected obstacles. In August of 1997, the Demeter system autonomously harvested 40 hectares (100 acres) of crop in a continuous run (excluding stops for refueling). During 1998, the Demeter system harvested in excess of 48.5 hectares (120 acres) of crop, cutting in a variety of fields.
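
To make the planning step concrete, here is a minimal sketch of the kind of full-field row plan the abstract describes: straight cutting passes that alternate direction so the machine turns into each successive row. The Pass structure, the field dimensions, and the plan_field function are illustrative assumptions, not the actual Demeter planner.

```python
# Minimal sketch of a boustrophedon (back-and-forth) row plan of the kind
# the Demeter abstract describes: cut a row, turn, cut the next row.
# The Pass representation and field geometry are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Pass:
    row: int            # index of the crop row being cut
    start_x: float      # metres along the row where cutting starts
    end_x: float        # metres along the row where cutting ends

def plan_field(num_rows: int, row_length_m: float) -> list[Pass]:
    """Plan straight cutting passes over every row, alternating direction
    so the machine can turn directly into the next row."""
    passes = []
    for row in range(num_rows):
        if row % 2 == 0:                      # even rows: cut "up" the field
            passes.append(Pass(row, 0.0, row_length_m))
        else:                                 # odd rows: cut back "down"
            passes.append(Pass(row, row_length_m, 0.0))
    return passes

if __name__ == "__main__":
    for p in plan_field(num_rows=4, row_length_m=400.0):
        print(f"row {p.row}: cut from {p.start_x:.0f} m to {p.end_x:.0f} m")
```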


Intelligent Robots and Systems | 1997

Vision-based perception for an automated harvester

Mark Ollis; Anthony Stentz

This paper describes a vision-based perception system which has been used to guide an automated harvester cutting fields of alfalfa hay. The system tracks the boundary between cut and uncut crop; indicates when the end of a crop row has been reached; and identifies obstacles in the harvester's path. The system adapts to local variations in lighting and crop conditions, and explicitly models and removes noise due to shadow. In field tests, the machine has successfully operated in four different locations, at sites in Pennsylvania, Kansas, and California. Using the vision system as the sole means of guidance, over 60 acres have been cut at speeds of up to 4.5 mph (typical human operating speeds range from 3 to 6 mph). Future work largely centers on combining vision- and GPS-based navigation techniques to produce a commercially viable product for use either as a navigation aid or for a completely autonomous system.
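
The shadow handling the abstract mentions can be illustrated with a standard normalization: converting RGB to chromaticity discards overall brightness, so shadowed and sunlit crop map to similar values. This is a hedged sketch of the general idea, not necessarily the paper's exact shadow model.

```python
# Sketch of brightness-invariant color: normalized chromaticity (r, g)
# suppresses the intensity change a shadow causes, so a shadowed patch of
# crop looks similar to a sunlit one. A standard normalization, shown here
# as one plausible reading of the abstract's shadow compensation.

import numpy as np

def chromaticity(rgb: np.ndarray) -> np.ndarray:
    """Map an HxWx3 RGB image to HxWx2 normalized chromaticity (r, g)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True) + 1e-9   # avoid divide-by-zero
    norm = rgb / total                              # r + g + b == 1 per pixel
    return norm[..., :2]                            # b is redundant; keep r, g

if __name__ == "__main__":
    sunlit = np.array([[[120, 200, 60]]], dtype=np.uint8)   # bright crop pixel
    shadow = np.array([[[30, 50, 15]]], dtype=np.uint8)     # same crop, shaded
    print(chromaticity(sunlit), chromaticity(shadow))       # nearly identical
```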


Robotics: Science and Systems | 2006

Enhancing Supervised Terrain Classification with Predictive Unsupervised Learning

Michael Happold; Mark Ollis; Nikolas Johnson

This paper describes a method for classifying the traversability of terrain by combining unsupervised learning of color models that predict scene geometry with supervised learning of the relationship between geometric features and traversability. A neural network is trained offline on hand-labeled geometric features computed from stereo data. An online process learns the association between color and geometry, enabling the robot to assess the traversability of regions for which there is little range information by estimating the geometry from the color of the scene and passing this to the neural network. This online process is continuous and extremely rapid, which allows for quick adaptations to different lighting conditions and terrain changes. The sensitivity of the traversability judgment is further adjusted online by feedback from the robot’s bumper. Terrain assessments from the color classifier are merged with pure geometric classifications in an occupancy grid by computing the intersection of the ray associated with a pixel with a ground plane computed from the stereo range data. We present results from DARPA-conducted tests that demonstrate its effectiveness in a variety of outdoor environments.
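
A minimal sketch of the two-stage idea, under assumed feature choices: an online table associates quantized color with the mean geometric features observed where stereo range exists, and a simple stand-in for the offline-trained network maps estimated geometry to traversability. The color binning, the (height, slope) features, and the threshold rule are illustrative assumptions, not the paper's implementation.

```python
# Online color-to-geometry association plus an offline traversability rule.
# The feature choice (height, slope) and the linear stand-in for the trained
# neural network are assumptions for illustration.

import numpy as np

class ColorToGeometry:
    """Running per-color-bin mean of geometric feature vectors."""
    def __init__(self, bins_per_channel: int = 8, n_features: int = 2):
        n_bins = bins_per_channel ** 3
        self.bins_per_channel = bins_per_channel
        self.sums = np.zeros((n_bins, n_features))
        self.counts = np.zeros(n_bins)

    def _bin(self, rgb: np.ndarray) -> int:
        q = (rgb.astype(int) * self.bins_per_channel) // 256
        b = self.bins_per_channel
        return int(q[0] * b * b + q[1] * b + q[2])

    def update(self, rgb: np.ndarray, geometry: np.ndarray) -> None:
        """Learn from a pixel where stereo range (hence geometry) exists."""
        i = self._bin(rgb)
        self.sums[i] += geometry
        self.counts[i] += 1

    def predict(self, rgb: np.ndarray):
        """Estimate geometry for a pixel with no range data."""
        i = self._bin(rgb)
        if self.counts[i] == 0:
            return None
        return self.sums[i] / self.counts[i]

def traversable(geometry: np.ndarray) -> bool:
    # Stand-in for the offline-trained network: flag large height or slope.
    height, slope = geometry
    return height < 0.3 and slope < 0.5

if __name__ == "__main__":
    model = ColorToGeometry()
    green, grey = np.array([80, 160, 60]), np.array([120, 120, 120])
    model.update(green, np.array([0.05, 0.1]))   # grass: low, flat
    model.update(grey, np.array([0.8, 0.9]))     # rock: tall, steep
    for rgb in (green, grey):
        g = model.predict(rgb)
        print(rgb, "->", g, "traversable" if traversable(g) else "obstacle")
```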


International Conference on Robotics and Automation | 1996

First results in vision-based crop line tracking

Mark Ollis; Anthony Stentz

Automation of agricultural harvesting equipment in the near term appears both economically viable and technically feasible. This paper describes a vision-based algorithm which guides a harvester by tracking the line between cut and uncut crop. Using this algorithm, a harvester has successfully cut roughly one acre of crop to date, at speeds of up to 4.5 miles an hour in an actual alfalfa field. A broad range of methods for detecting the crop cut boundary were considered, including both range-based and vision-based techniques; several of these methods were implemented and evaluated on data from an alfalfa field. The final crop-line detection algorithm is presented, which operates by computing the best-fit step function of a normalized-color measure of each row of an RGB image. Results of the algorithm on some sample crop images are shown, and potential improvements are discussed.
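
The step-fit idea is concrete enough to sketch: for each image row, choose the column that minimizes the squared error of a two-level piecewise-constant fit to a per-pixel normalized-color measure, and take that column as the cut/uncut boundary. The green-chromaticity measure below is an illustrative stand-in for the paper's normalized-color measure.

```python
# Crop-line detection by best-fit step function per image row, as the
# abstract describes. The normalized-green measure is an assumed stand-in.

import numpy as np

def best_step(values: np.ndarray) -> int:
    """Return the split index minimizing squared error of a two-level
    piecewise-constant (step) fit to a 1D signal."""
    n = len(values)
    csum = np.cumsum(values)
    csq = np.cumsum(values ** 2)
    best_j, best_err = 1, np.inf
    for j in range(1, n):            # left = values[:j], right = values[j:]
        left_sum, left_sq = csum[j - 1], csq[j - 1]
        right_sum, right_sq = csum[-1] - left_sum, csq[-1] - left_sq
        # SSE of fitting each side with its own mean
        err = (left_sq - left_sum ** 2 / j) + (right_sq - right_sum ** 2 / (n - j))
        if err < best_err:
            best_j, best_err = j, err
    return best_j

def crop_line(rgb: np.ndarray) -> np.ndarray:
    """Estimated boundary column for every row of an HxWx3 image."""
    rgb = rgb.astype(np.float64)
    green_frac = rgb[..., 1] / (rgb.sum(axis=2) + 1e-9)   # normalized green
    return np.array([best_step(row) for row in green_frac])

if __name__ == "__main__":
    img = np.zeros((2, 8, 3), dtype=np.uint8)
    img[:, :5] = (90, 180, 60)    # uncut crop on the left: strongly green
    img[:, 5:] = (150, 140, 90)   # cut stubble on the right: duller
    print(crop_line(img))         # -> [5 5]
```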


Intelligent Robots and Systems | 2007

A Bayesian approach to imitation learning for robot navigation

Mark Ollis; Wesley H. Huang; Michael Happold

Driving in unknown natural outdoor terrain is a challenge for autonomous ground vehicles. It can be difficult for a robot to discern obstacles and other hazards in its environment, and the characteristics of this high-cost terrain may change from one environment to another, or even with different lighting conditions. One successful approach to this problem is for a robot to learn from a demonstration by a human operator. In this paper, we describe an approach to calculating terrain costs from Bayesian estimates using feature vectors measured during a short teleoperated training run in similar terrain and conditions. We describe the theory, its implementation on two different robotic systems, and the results of several independently conducted field tests.
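
One hedged reading of the approach: count how often each discretized feature vector was driven over during the teleoperated demonstration, form a Bayesian posterior with a simple prior, and use its negative log as a traversal cost. The feature binning and the prior below are assumptions for illustration, not the paper's estimator.

```python
# Terrain costs from a human demonstration: Bayesian posterior over
# "was this feature bin driven over?", with an assumed Beta-style prior.

from collections import Counter
import math

class DemoCostModel:
    def __init__(self, prior_driven: float = 1.0, prior_seen: float = 2.0):
        self.driven = Counter()   # times a feature bin was driven over
        self.seen = Counter()     # times it was observed at all
        self.a, self.b = prior_driven, prior_seen

    def observe(self, feature_bin: tuple, was_driven: bool) -> None:
        self.seen[feature_bin] += 1
        if was_driven:
            self.driven[feature_bin] += 1

    def cost(self, feature_bin: tuple) -> float:
        # Posterior mean of P(traversable | features); smoothed by the prior
        p = (self.driven[feature_bin] + self.a) / (self.seen[feature_bin] + self.b)
        return -math.log(p)

if __name__ == "__main__":
    model = DemoCostModel()
    smooth, rough = ("low", "flat"), ("high", "steep")
    for _ in range(20):
        model.observe(smooth, was_driven=True)    # operator drove here
    for _ in range(20):
        model.observe(rough, was_driven=False)    # operator avoided this
    print(f"smooth cost {model.cost(smooth):.2f}, rough cost {model.cost(rough):.2f}")
```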


Systems, Man and Cybernetics | 2006

Autonomous Learning of Terrain Classification within Imagery for Robot Navigation

Michael Happold; Mark Ollis

Stereo matching in unstructured, outdoor environments is often confounded by the complexity of the scenery and thus may yield only sparse disparity maps. Two-dimensional visual imagery, on the other hand, offers dense information about the environment of mobile robots, but is often difficult to exploit. Training a supervised classifier to identify traversable regions within images that generalizes well across a large variety of environments requires a vast corpus of labeled examples. Autonomous learning of the traversable/untraversable distinction indicated by scene appearance is therefore a highly desirable goal of robot vision. We describe here a system for learning this distinction online without the involvement of a human supervisor. The system takes in imagery and range data from a pair of stereo cameras mounted on a small mobile robot and autonomously learns to produce a labeling of scenery. Supervision of the learning process is entirely through information gathered from range data. Two types of boosted weak learners, Nearest Means and naive Bayes, are trained on this autonomously labeled corpus. The resulting classified images provide dense information about the environment, which can be used to fill in regions where stereo cannot find matches or in lieu of stereo to direct robot navigation. This method has been tested across a large array of environment types and can produce very accurate labelings of scene imagery, as judged by human experts and compared against purely geometry-based labelings. Because it is online and rapid, it eliminates some of the problems related to color constancy and dynamic environments.
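
A minimal sketch of the self-supervision loop, with boosting omitted for brevity: stereo geometry labels a subset of pixels traversable or untraversable, and a Gaussian naive Bayes color classifier trained on those labels then covers pixels where stereo found no match. The synthetic color distributions below are illustrative.

```python
# Self-supervised color classification: range data provides labels, and a
# Gaussian naive Bayes model over RGB generalizes them to unlabeled pixels.
# The paper boosts such weak learners; a single learner is shown here.

import numpy as np

class GaussianNB:
    def fit(self, X: np.ndarray, y: np.ndarray) -> "GaussianNB":
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # log N(x; mu, var) summed over independent color channels
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grass = rng.normal([80, 160, 60], 10, size=(200, 3))    # stereo says flat
    rocks = rng.normal([120, 110, 100], 10, size=(200, 3))  # stereo says obstacle
    X = np.vstack([grass, rocks])
    y = np.array([0] * 200 + [1] * 200)    # 0 = traversable, 1 = not
    clf = GaussianNB().fit(X, y)
    print(clf.predict(np.array([[85, 150, 65], [125, 105, 95]])))  # -> [0 1]
```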


Archive | 2007

Using Learned Features from 3D Data for Robot Navigation

Michael Happold; Mark Ollis

We describe a novel method for classifying terrain in unstructured, natural environments for the purpose of aiding mobile robot navigation. This method operates on range data provided by stereo without the traditional preliminary extraction of geometric features such as height and slope, replacing these measurements with 2D histograms representing the shape and permeability of objects within a local region. A convolutional neural network is trained to categorize the histogram samples according to the traversability of the terrain they represent for a small mobile robot. In live and offline testing in a wide variety of environments, it demonstrates state-of-the-art performance.
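
The histogram feature can be sketched directly: summarize the 3D points in a local region as a normalized 2D histogram, here over horizontal distance from the cell center and height above ground. The binning choices are assumptions, and the convolutional network that consumes the histograms is omitted.

```python
# Local-shape histogram feature: bin a region's 3D points by (horizontal
# distance from the cell centre, height). Bin ranges are assumed values;
# in the chapter these histograms feed a convolutional neural network.

import numpy as np

def region_histogram(points: np.ndarray, centre: np.ndarray,
                     radius: float = 1.0, max_height: float = 2.0,
                     bins: int = 8) -> np.ndarray:
    """Build a bins x bins histogram of local shape from Nx3 points (x, y, z)."""
    offsets = points[:, :2] - centre          # horizontal offsets from centre
    dist = np.linalg.norm(offsets, axis=1)
    height = points[:, 2]
    keep = (dist < radius) & (height >= 0) & (height < max_height)
    hist, _, _ = np.histogram2d(dist[keep], height[keep], bins=bins,
                                range=[[0, radius], [0, max_height]])
    return hist / max(hist.sum(), 1)          # normalize to a distribution

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # A flat patch of ground points versus a tall thin post
    ground = np.column_stack([rng.uniform(-1, 1, (300, 2)),
                              rng.normal(0.02, 0.01, 300)])
    post = np.column_stack([rng.normal(0, 0.05, (100, 2)),
                            rng.uniform(0, 1.8, 100)])
    for name, pts in [("ground", ground), ("post", post)]:
        h = region_histogram(pts, centre=np.zeros(2))
        print(name, "occupied height bins:", int((h.sum(axis=0) > 0).sum()))
```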


Archive | 1998

Agricultural harvester with robotic control

Henning Pangels; Thomas Pilarski; Kerien Fitzpatrick; Michael Happold; Mark Ollis; Anthony Stentz


Archive | 1999

Infrastructure independent position determining system

Alonzo Kelly; Robert Craig Coulter; Mark Ollis


Archive | 1997

Vision-based crop line tracking for harvesters

Anthony Stentz; Mark Ollis; Kerien Fitzpatrick; Regis Hoffman

Collaboration


Dive into Mark Ollis's collaborations.

Top Co-Authors

Anthony Stentz, Carnegie Mellon University
Michael Happold, Carnegie Mellon University
Herman Herman, Carnegie Mellon University
John Bares, Carnegie Mellon University
Bryan G. Campbell, Carnegie Mellon University
David K. Herdle, Carnegie Mellon University
Frank Higgins, Carnegie Mellon University
Regis Hoffman, Carnegie Mellon University