
Publication


Featured research published by Andrew English.


Journal of Field Robotics | 2016

Vision-based Obstacle Detection and Navigation for an Agricultural Robot

David Ball; Ben Upcroft; Gordon Wyeth; Peter Corke; Andrew English; Patrick Ross; Timothy Patten; Robert Fitch; Salah Sukkarieh; Andrew Bate

This paper describes a vision-based obstacle detection and navigation system for use as part of a robotic solution for the sustainable intensification of broad-acre agriculture. To be cost-effective, the robotic solution must be competitive with current human-driven farm machinery. A significant portion of this cost lies in high-end localization and obstacle detection sensors. Our system demonstrates a combination of an inexpensive global positioning system and inertial navigation system with vision for localization, and a single stereo vision system for obstacle detection. The paper describes the design of the robot, including detailed descriptions of three key parts of the system: novelty-based obstacle detection, visually-aided guidance, and a navigation system that generates collision-free, kinematically feasible paths. The robot has seen extensive testing over numerous weeks of field trials, during both day and night. The results in this paper pertain to one particular 3 h nighttime experiment in which the robot performed a coverage task and avoided obstacles. Additional daytime results demonstrate that the robot is able to continue operating during 5 min GPS outages by visually following crop rows.


Science & Engineering Faculty | 2015

Robotics for sustainable broad-acre agriculture

David Ball; Patrick Ross; Andrew English; Timothy Patten; Ben Upcroft; Robert Fitch; Salah Sukkarieh; Gordon Wyeth; Peter Corke

This paper describes the development of small, low-cost cooperative robots for sustainable broad-acre agriculture, to increase broad-acre crop production and reduce environmental impact. The current focus of the project is to use robotics to deal with resistant weeds, a critical problem for Australian farmers. To keep the overall system affordable, our robot uses low-cost cameras and positioning sensors to perform a large-scale coverage task while also avoiding obstacles. A multi-robot coordinator assigns parts of a given field to individual robots. The paper describes the modification of an electric vehicle for autonomy, and experimental results from one real robot and twelve simulated robots working in coordination for approximately two hours on a 55 hectare field in Emerald, Australia. Over this time the real robot ‘sprayed’ 6 hectares within its assigned field partition, missing 2.6% and overlapping 9.7%, and successfully avoided three obstacles.


International Conference on Robotics and Automation | 2017

Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

Christopher Lehnert; Andrew English; Christopher McCool; Adam W. Tow; Tristan Perez

In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.


International Conference on Robotics and Automation | 2017

Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information

Inkyu Sa; Christopher Lehnert; Andrew English; Christopher McCool; Feras Dayoub; Ben Upcroft; Tristan Perez

This letter presents a three-dimensional (3-D) visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field. The peduncle is the part of the crop that attaches it to the main stem of the plant, and cutting it cleanly is one of the most difficult stages of the harvesting process. Accurate peduncle detection in 3-D space is therefore a vital step in reliable autonomous harvesting of sweet peppers, as it enables precise cutting while avoiding damage to the surrounding plant. This letter makes use of both color and geometry information acquired from an RGB-D sensor and utilizes a supervised-learning approach for the peduncle detection task. The performance of the proposed method is demonstrated and evaluated using qualitative and quantitative results (the area under the curve (AUC) of the detection precision-recall curve). We achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. We release a set of manually annotated 3-D sweet pepper and peduncle images to assist the research community in performing further research on this topic.
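The evaluation metric used above, the area under the precision-recall curve, can be computed directly from detection scores and ground-truth labels. The sketch below uses the average-precision formulation, a standard step-wise approximation of PR-AUC; the scores and labels are made-up toy data, not the paper's.

```python
# Average precision: a step-wise approximation of the area under the
# precision-recall curve, computed from raw scores and binary labels.

def pr_auc(det_scores, labels):
    """Sum precision at each recall step (average precision)."""
    order = sorted(range(len(det_scores)), key=lambda i: -det_scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    auc = 0.0
    for i in order:
        if labels[i]:
            tp += 1
            auc += (tp / (tp + fp)) / total_pos  # precision * recall increment
        else:
            fp += 1
    return auc

# Toy detections: higher score should mean "peduncle".
det_scores = [0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
ap = pr_auc(det_scores, labels)
```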


International Conference on Robotics and Automation | 2015

Online novelty-based visual obstacle detection for field robotics

Patrick Ross; Andrew English; David Ball; Ben Upcroft; Peter Corke

This paper presents a novel online unsupervised vision system for obstacle detection in field environments which detects many obstacles pathological to appearance- or structure-only obstacle detection systems. Robust obstacle detection in field environments is challenging as it is infeasible to train on all possible obstacles in all conditions, and many obstacles are camouflaged in their appearance or structure. The proposed system combines novelty in structure and appearance cues to detect obstacles, can adapt over time to changes in the environment, and is suitable for long-term operation over changing lighting conditions in various environments. After an initial learning period the method exhibits very few false positives, while successfully detecting most obstacles over both daytime and nighttime datasets including challenging obstacles such as a person lying down in grass.
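The idea of an adaptive, unsupervised novelty score can be illustrated with a toy sketch: keep exponentially-weighted running statistics of recent descriptor values and score each new descriptor by its deviation from that model. This is a simplified stand-in for illustration only, not the paper's actual algorithm; all names and data here are invented.

```python
import math

# Toy online novelty detector: exponentially-weighted running mean/variance
# per descriptor dimension, with novelty scored as a max z-score. It adapts
# over time, so a slowly changing environment stays "normal" while a sudden
# outlier (an obstacle-like descriptor) scores highly.

class OnlineNovelty:
    def __init__(self, alpha=0.05):
        self.alpha = alpha          # adaptation rate
        self.mean = None
        self.var = None

    def score(self, x):
        """Return the novelty of x, then fold x into the running model."""
        if self.mean is None:       # first observation initialises the model
            self.mean = list(x)
            self.var = [1.0] * len(x)
            return 0.0
        z = max(abs(xi - m) / math.sqrt(v + 1e-9)
                for xi, m, v in zip(x, self.mean, self.var))
        a = self.alpha
        self.mean = [(1 - a) * m + a * xi for m, xi in zip(self.mean, x)]
        self.var = [(1 - a) * v + a * (xi - m) ** 2
                    for v, m, xi in zip(self.var, self.mean, x)]
        return z

det = OnlineNovelty()
# "Terrain" descriptors vary slightly; the model adapts and scores stay low.
scores = [det.score([0.1 * (i % 3), 1.0]) for i in range(50)]
# A descriptor far from everything seen so far scores as novel.
obstacle_score = det.score([5.0, -4.0])
```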


International Conference on Robotics and Automation | 2015

TriggerSync: A time synchronisation tool

Andrew English; Patrick Ross; David Ball; Ben Upcroft; Peter Corke

This paper presents a framework for synchronising multiple triggered sensors with respect to a local clock using standard computing hardware. Providing sensor measurements with accurate and meaningful timestamps is important for many sensor fusion, state estimation and control applications. Accurate sensor timestamp synchronisation can be performed with specialised hardware; however, performing it using standard computing hardware and non-real-time operating systems is difficult due to inaccurate and temperature-sensitive clocks, variable communication delays and operating system scheduling delays. Results show the ability of our framework to estimate time offsets to sub-millisecond accuracy. We also demonstrate how synchronising timestamps with our framework results in a tenfold reduction in image stabilisation error for a vehicle driving on rough terrain. The source code will be released as an open source tool for time synchronisation in ROS.
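The offset-estimation problem can be illustrated with a minimal sketch. Assuming communication delay is variable but never negative, the smallest observed difference between local receive time and device timestamp gives a delay-robust bound on the clock offset. This shows one common idea only; it is not TriggerSync's actual algorithm, which also has to handle clock drift and scheduling jitter.

```python
# Minimal clock-offset sketch: device timestamps arrive with a variable,
# non-negative transport delay. Tracking the minimum observed
# (local_receive_time - device_time) filters out that delay, leaving an
# estimate of the constant clock offset between the two clocks.

def estimate_offset(samples):
    """samples: list of (device_time, local_receive_time) pairs."""
    return min(local - device for device, local in samples)

# Synthetic example: true offset 2.5 s, delays between 5 ms and 30 ms.
samples = [(0.0, 2.530), (1.0, 3.512), (2.0, 4.505), (3.0, 5.521)]
offset = estimate_offset(samples)
# Device timestamps mapped into the local clock's frame:
corrected = [(device + offset, local) for device, local in samples]
```

The estimate converges to the true offset plus the minimum delay actually observed, which is why more samples (and low-latency links) tighten it.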


Intelligent Robots and Systems | 2015

Learning crop models for vision-based guidance of agricultural robots

Andrew English; Patrick Ross; David Ball; Ben Upcroft; Peter Corke

This paper describes a vision-based method of guiding autonomous vehicles within crop rows in agricultural fields where the crop rows are challenging to detect or their appearance is not known a priori. The location of the crop rows is estimated with an SVM regression algorithm using colour, texture and 3D structure descriptors from a forward-facing stereo camera pair. Our system rapidly learns a model online with minimal user input, and then uses this model to track crop rows. Results demonstrate that our method is able to learn and track a wide variety of crops with an RMS error of less than 3 cm. We also present online control results demonstrating our system autonomously steering a robot for 3 km.
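The learn-then-track pipeline can be sketched as follows. For a dependency-light example, a linear least-squares fit stands in for the paper's SVM regressor, and the colour/texture/3D descriptors are reduced to a plain feature vector per image strip; all names and data here are illustrative.

```python
import numpy as np

# Sketch of learning a crop-row model online: each image strip yields a
# feature vector (a stand-in for colour/texture/3D descriptors) paired with
# a lateral row offset. A linear least-squares fit replaces the SVM
# regressor of the paper, purely to keep the example dependency-free.

def fit_row_model(features, offsets):
    """Fit offset ~ features @ w (plus bias) by least squares."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, offsets, rcond=None)
    return w

def predict_offset(w, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w

rng = np.random.default_rng(0)
# Synthetic training data: offset is an exact linear function of features.
true_w = np.array([0.8, -0.3])
train_feats = rng.normal(size=(50, 2))
train_offsets = train_feats @ true_w

w = fit_row_model(train_feats, train_offsets)        # "learn online"
test_feats = rng.normal(size=(10, 2))                # new image strips
err = np.abs(predict_offset(w, test_feats) - test_feats @ true_w)
```

The predicted lateral offset would then feed the steering controller; the paper's SVM regressor additionally handles the nonlinear feature-offset relationships real crops produce.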


IEEE Robotics & Automation Magazine | 2017

Farm Workers of the Future: Vision-Based Robotics for Broad-Acre Agriculture

David Ball; Patrick Ross; Andrew English; Peter Milani; Daniel Richards; Andrew Bate; Ben Upcroft; Gordon Wyeth; Peter Corke

Farmers are under growing pressure to intensify production to feed a growing population while managing environmental impact. Robotics has the potential to address these challenges by replacing large complex farm machinery with fleets of small autonomous robots. This article presents our research toward the goal of developing teams of autonomous robots that perform typical farm coverage operations. Making a large fleet of autonomous robots economical requires the use of inexpensive sensors, such as cameras for localization and obstacle avoidance. To this end, we describe a vision-based obstacle detection system that continually adapts to environmental and illumination variations and a vision-assisted localization system that can guide a robot along crop rows with a complex appearance. Large fleets of robots will become time-consuming to monitor, control, and resupply. To reduce this burden, we describe a vision-based docking system for autonomously refilling liquid supplies and an interface for controlling multiple robots.


Journal of Field Robotics | 2017

Online covariance estimation for novelty‐based visual obstacle detection

Patrick Ross; Andrew English; David Ball

Robust obstacle detection remains a challenge for mobile robots traversing outdoor field environments. Obstacle detection systems that combine multiple cues can potentially overcome deficiencies in individual cues. A key challenge in designing multi-sensor obstacle detection systems is to combine these cues automatically and appropriately in an unsupervised manner. This paper presents an obstacle detection method that continuously adapts its obstacle definition and the weighting of each cue to the current conditions. The key contribution of this paper is a method for online covariance estimation for Parzen-window probability density estimation, which in this application determines the relative importance of each descriptor dimension. By iteratively estimating the covariance using small subsets of the available data, the proposed method is capable of converging to an approximate solution an order of magnitude faster than standard optimizers, making it suitable for online use. Applying this covariance estimation method to our novelty-based obstacle detection system improves obstacle detection precision and reduces the learning duration after environmental transitions. It also removes all environment-specific parameters from the method, and allows the descriptor to contain arbitrary data without time-consuming hand-tuning.
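The role of the estimated covariance can be seen in a minimal Parzen-window sketch: with a diagonal Gaussian kernel, the per-dimension bandwidths weight each descriptor dimension, and a low estimated density marks a descriptor as novel. The bandwidths below come from a simple subset-based rule of thumb, not the paper's iterative estimator; everything here is illustrative.

```python
import numpy as np

# Minimal Parzen-window (kernel) density estimate with a diagonal Gaussian
# kernel. The per-dimension bandwidths play the role of the covariance the
# paper estimates online: a larger bandwidth de-emphasises that descriptor
# dimension. Novelty-based detection flags descriptors with low density
# under the model of previously seen "normal" terrain.

def parzen_density(query, data, bandwidths):
    """Mean of Gaussian kernels centred on each data point."""
    z = (query[None, :] - data) / bandwidths          # standardised differences
    k = np.exp(-0.5 * np.sum(z * z, axis=1))          # per-point kernel values
    norm = np.prod(bandwidths) * (2 * np.pi) ** (data.shape[1] / 2)
    return k.mean() / norm

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))                      # "normal" descriptors seen so far
# Silverman-style bandwidths estimated from a small data subset, standing in
# for the paper's iterative subset-based covariance estimation.
bandwidths = data[:100].std(axis=0) * (4 / (3 * 100)) ** 0.2

d_in = parzen_density(np.array([0.0, 0.0]), data, bandwidths)   # typical descriptor
d_out = parzen_density(np.array([6.0, 6.0]), data, bandwidths)  # novel descriptor
```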


Science & Engineering Faculty | 2013

Low cost localisation for agricultural robotics

Andrew English; David Ball; Patrick Ross; Ben Upcroft; Gordon Wyeth; Peter Corke

Collaboration


Dive into Andrew English's collaborations.

Top Co-Authors

David Ball | Peter MacCallum Cancer Centre
Patrick Ross | Queensland University of Technology
Ben Upcroft | Queensland University of Technology
Peter Corke | Queensland University of Technology
Gordon Wyeth | Queensland University of Technology
Christopher Lehnert | Queensland University of Technology
Christopher McCool | Queensland University of Technology
Tristan Perez | Queensland University of Technology
Feras Dayoub | Queensland University of Technology