Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Christopher Lehnert is active.

Publication


Featured research published by Christopher Lehnert.


International Conference on Robotics and Automation | 2017

Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

Christopher Lehnert; Andrew English; Christopher McCool; Adam W. Tow; Tristan Perez

In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.


International Conference on Robotics and Automation | 2017

Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information

Inkyu Sa; Christopher Lehnert; Andrew English; Christopher McCool; Feras Dayoub; Ben Upcroft; Tristan Perez

This letter presents a three-dimensional (3-D) visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field. The peduncle is the part of the crop that attaches it to the main stem of the plant, and cutting it cleanly is one of the most difficult stages of the harvesting process. Accurate peduncle detection in 3-D space is, therefore, a vital step in reliable autonomous harvesting of sweet peppers, as this can lead to precise cutting while avoiding damage to the surrounding plant. This letter makes use of both color and geometry information acquired from an RGB-D sensor and utilizes a supervised-learning approach for the peduncle detection task. The performance of the proposed method is demonstrated and evaluated using qualitative and quantitative results [the area under the curve (AUC) of the detection precision-recall curve]. We are able to achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. We release a set of manually annotated 3-D sweet pepper and peduncle images to assist the research community in performing further research on this topic.
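The abstract above describes the core recipe: per-point color and geometry features from an RGB-D sensor are fed to a supervised classifier, and detections are scored by the area under the precision-recall curve. Below is a minimal sketch of that pipeline in Python with scikit-learn; the specific features (HSV color plus surface normals and curvature), the random-forest classifier, and the synthetic data are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of point-wise peduncle detection from combined colour and
# geometry features, scored by the AUC of the precision-recall curve.
# Feature choice, classifier, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve, auc
from sklearn.model_selection import train_test_split

def point_features(hsv, normals, curvature):
    """Stack per-point HSV colour with surface normals and curvature."""
    return np.hstack([hsv, normals, curvature[:, None]])

# Synthetic stand-in for annotated RGB-D points (1 = peduncle, 0 = other).
rng = np.random.default_rng(0)
n = 5000
hsv = rng.random((n, 3))
normals = rng.normal(size=(n, 3))
curvature = rng.random(n)
labels = (hsv[:, 0] < 0.3).astype(int)  # placeholder annotations

X = point_features(hsv, normals, curvature)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Evaluate with the area under the precision-recall curve, as in the letter.
precision, recall, _ = precision_recall_curve(y_te, scores)
print("PR-AUC:", auc(recall, precision))
```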


International Conference on Robotics and Automation | 2016

Sweet pepper pose detection and grasping for automated crop harvesting

Christopher Lehnert; Inkyu Sa; Christopher McCool; Ben Upcroft; Tristan Perez

This paper presents a method for estimating the 6DOF pose of sweet-pepper (capsicum) crops for autonomous harvesting via a robotic manipulator. The method uses the Kinect Fusion algorithm to robustly fuse RGB-D data from an eye-in-hand camera, combined with a colour segmentation and clustering step, to extract an accurate representation of the crop. The 6DOF pose of the sweet peppers is then estimated via a nonlinear least squares optimisation by fitting a superellipsoid to the segmented sweet pepper. The performance of the method is demonstrated on a real 6DOF manipulator with a custom gripper. The method is shown to estimate the 6DOF pose successfully, enabling the manipulator to grasp sweet peppers across a range of orientations. The results show a large improvement in grasping performance compared to a naive approach that does not estimate the orientation of the crop.
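The superellipsoid fitting step can be sketched compactly: given colour-segmented sweet-pepper points, nonlinear least squares minimises the residual of the superellipsoid inside-outside function F(x) - 1 over the shape parameters. The snippet below, using scipy.optimize.least_squares, is a simplified illustration that fits only the centre, axis lengths, and shape exponents of an axis-aligned superellipsoid; the paper's full formulation also estimates orientation to obtain the 6DOF pose.

```python
# Minimal sketch: fit an axis-aligned superellipsoid to a segmented point
# cloud with nonlinear least squares. The paper's method also estimates
# orientation (full 6DOF); rotation is omitted here for brevity.
import numpy as np
from scipy.optimize import least_squares

def superellipsoid_residuals(params, pts):
    """Residuals of the superellipsoid inside-outside function F - 1."""
    cx, cy, cz, a1, a2, a3, e1, e2 = params
    x = np.abs((pts[:, 0] - cx) / a1)
    y = np.abs((pts[:, 1] - cy) / a2)
    z = np.abs((pts[:, 2] - cz) / a3)
    f = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)
    return f - 1.0

def fit_superellipsoid(pts):
    """Estimate centre, axis lengths and shape exponents from 3-D points."""
    centre = pts.mean(axis=0)
    radii = 0.5 * (pts.max(axis=0) - pts.min(axis=0))
    x0 = np.concatenate([centre, radii, [1.0, 1.0]])  # start from an ellipsoid
    lb = np.concatenate([centre - radii, 1e-3 * np.ones(3), [0.1, 0.1]])
    ub = np.concatenate([centre + radii, 10.0 * radii, [2.0, 2.0]])
    return least_squares(superellipsoid_residuals, x0, bounds=(lb, ub), args=(pts,))

# Example with a synthetic ellipsoidal point cloud standing in for a
# colour-segmented sweet pepper.
rng = np.random.default_rng(1)
u, v = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([0.04 * np.sin(u) * np.cos(v),
                       0.05 * np.sin(u) * np.sin(v),
                       0.07 * np.cos(u)]) + np.array([0.3, 0.1, 0.6])
result = fit_superellipsoid(pts)
print("centre:", result.x[:3], "axes:", result.x[3:6], "exponents:", result.x[6:])
```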


International Conference on Robotics and Automation | 2017

The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research

Jürgen Leitner; Adam W. Tow; Niko Sünderhauf; Jake E. Dean; Joseph W. Durham; Matthew Cooper; Markus Eich; Christopher Lehnert; Ruben Mangels; Christopher McCool; Peter T. Kujala; Lachlan Nicholson; Trung Pham; James Sergeant; Liao Wu; Fangyi Zhang; Ben Upcroft; Peter Corke

Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark. Designed to be reproducible, it consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils. A well-defined evaluation protocol enables the comparison of complete robotic systems — including perception and manipulation — instead of sub-systems only. Our paper also describes and reports results achieved by an open baseline system based on a Baxter robot.


International Conference on Robotics and Automation | 2016

Teaching Robots Generalizable Hierarchical Tasks Through Natural Language Instruction

Gavin Suddrey; Christopher Lehnert; Markus Eich; Frederic D. Maire; Jonathan M. Roberts

Natural language provides a convenient means of communicating information and, as such, is an ideal medium for enabling nonexpert users to teach robots novel tasks. However, in order to take advantage of natural language, a series of challenges must first be overcome. These challenges include the need to a) generalize learnt tasks to novel scenarios without retraining, b) resolve problems encountered during task execution, and c) derive implicit information from knowledge about the domain. To address these challenges, this paper presents a novel approach to learning complex hierarchical tasks through natural language instruction, which not only allows learnt tasks to be generalized to novel situations without the need for retraining, but also enables an agent to derive implicit information from domain knowledge. Additionally, the approach presented in this paper enables the agent to infer task properties, such as preconditions and effects, directly from the explanation of the task flow. The authors validate the approach by demonstrating an implementation of the algorithms on both a simulated agent and a Baxter robot. In each case, the agent is provided with a small set of primitive tasks for manipulating its workspace. From these primitives, the authors demonstrate the ability to teach the agent increasingly complex tasks, such as table cleaning, solely through natural language instructions.
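As a rough illustration of the hierarchical task representation the abstract describes, the sketch below models a learnt task as a tree of subtasks whose preconditions and effects are propagated up from the task flow. The class layout and the table-cleaning example are hypothetical stand-ins, not the paper's actual data structures.

```python
# Hypothetical sketch of a hierarchical task learnt from instruction: each
# task is either a known primitive or a sequence of subtasks, and its
# preconditions/effects are derived from the flow of its children.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Task:
    name: str
    preconditions: Set[str] = field(default_factory=set)
    effects: Set[str] = field(default_factory=set)
    subtasks: List["Task"] = field(default_factory=list)

    def infer_properties(self) -> None:
        """Propagate preconditions and effects up from the subtask sequence."""
        state: Set[str] = set()
        for sub in self.subtasks:
            sub.infer_properties()
            # A child's preconditions not satisfied by earlier children
            # become preconditions of the parent task.
            self.preconditions |= (sub.preconditions - state)
            state |= sub.effects
        self.effects |= state

# Example: a "clean table" task taught as a sequence of two primitives.
grasp = Task("grasp cup", preconditions={"cup on table"}, effects={"holding cup"})
place = Task("place cup in sink", preconditions={"holding cup"}, effects={"cup in sink"})
clean = Task("clean table", subtasks=[grasp, place])
clean.infer_properties()
print(clean.preconditions)  # {'cup on table'}
print(clean.effects)        # {'holding cup', 'cup in sink'}
```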


Science & Engineering Faculty | 2013

μAV - Design and implementation of an open source micro quadrotor

Christopher Lehnert; Peter Corke


Institute for Future Environments; Science & Engineering Faculty | 2015

On Visual Detection of Highly-occluded Objects for Harvesting Automation in Horticulture

Inkyu Sa; Christopher McCool; Christopher Lehnert; Tristan Perez


International Conference on Robotics and Automation | 2018

Semantic Segmentation from Limited Training Data

Anton Milan; Trung Pham; K. Vijay; Douglas Morrison; Adam W. Tow; Lingqiao Liu; J. Erskine; R. Grinover; A. Gurman; T. Hunn; N. Kelly-Boxall; Dong-Hee Lee; M. McTaggart; G. Rallos; A. Razjigaev; T. Rowntree; Tong Shen; Ryan N. Smith; S. Wade-McCue; Zheyu Zhuang; Christopher Lehnert; Guosheng Lin; Ian D. Reid; Peter Corke; Jürgen Leitner


International Conference on Robotics and Automation | 2018

Cartman: The Low-Cost Cartesian Manipulator that Won the Amazon Robotics Challenge

Douglas Morrison; Adam W. Tow; M. McTaggart; Ryan N. Smith; N. Kelly-Boxall; S. Wade-McCue; J. Erskine; R. Grinover; A. Gurman; T. Hunn; Dong-Hee Lee; Anton Milan; Trung Pham; G. Rallos; A. Razjigaev; T. Rowntree; K. Vijay; Zheyu Zhuang; Christopher Lehnert; Ian D. Reid; Peter Corke; Jürgen Leitner


Journal of Field Robotics | 2017

Robot for weed species plant‐specific management

Owen Bawden; Jason Kulk; Ray Russell; Christopher McCool; Andrew English; Feras Dayoub; Christopher Lehnert; Tristan Perez

Collaboration


Dive into Christopher Lehnert's collaborations.

Top Co-Authors

Christopher McCool, Queensland University of Technology
Tristan Perez, Queensland University of Technology
Adam W. Tow, Queensland University of Technology
Inkyu Sa, Queensland University of Technology
Peter Corke, Queensland University of Technology
Jürgen Leitner, Dalle Molle Institute for Artificial Intelligence Research
Andrew English, Queensland University of Technology
Ben Upcroft, Queensland University of Technology
Anton Milan, University of Adelaide
Feras Dayoub, Queensland University of Technology