
Publications


Featured research published by Christopher McCool.


International Conference on Pattern Recognition | 2004

Face authentication test on the BANCA database

Kieron Messer; Josef Kittler; Mohammad T. Sadeghi; Miroslav Hamouz; A. Kostin; Fabien Cardinaux; Sébastien Marcel; Samy Bengio; Conrad Sanderson; Norman Poh; Yann Rodriguez; Jacek Czyz; Luc Vandendorpe; Christopher McCool; Scott Lowther; Sridha Sridharan; Vinod Chandran; R.P. Palacios; Enrique Vidal; Li Bai; Linlin Shen; Yan Wang; Chiang Yueh-Hsuan; Liu Hsien-Chang; Hung Yi-Ping; A. Heinrichs; M. Muller; Andreas Tewes; C. von der Malsburg; Rolf P. Würtz

This work details the results of a face authentication test (FAT2004) (http://www.ee.surrey.ac.uk/banca/icpr2004) held in conjunction with the 17th International Conference on Pattern Recognition. The contest was held on the publicly available BANCA database (http://www.ee.surrey.ac.uk/banca) according to a defined protocol (E. Bailly-Bailliere et al., June 2003). The competition also had a sequestered part in which institutions had to submit their algorithms for independent testing. Results were submitted for 13 different verification algorithms from 10 institutions. In addition, a standard set of face recognition software packages from the Internet (http://www.cs.colostate.edu/evalfacerec) was used to provide a baseline performance measure.


Sensors | 2016

DeepFruits: A Fruit Detection System Using Deep Neural Networks

Inkyu Sa; ZongYuan Ge; Feras Dayoub; Ben Upcroft; Tristan Perez; Christopher McCool

This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, improving the F1 score (which takes into account both precision and recall) from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker). The model is retrained to detect seven fruits, with the entire process of annotating and training the new model taking four hours per fruit.
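The late-fusion idea described above can be sketched in a few lines: run a detector on each modality, merge boxes that overlap strongly, and keep the rest. This is an illustrative sketch only, not the authors' implementation; the box format ([x1, y1, x2, y2]), the IoU threshold, and the score-averaging rule are all assumptions.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def late_fuse(rgb_dets, nir_dets, iou_thresh=0.5):
    """Merge (box, score) detections from two modalities: boxes that
    overlap above the threshold are fused (coordinates and scores
    averaged); unmatched detections from either modality are kept."""
    fused, used = [], set()
    for box_r, score_r in rgb_dets:
        best_j, best_iou = None, iou_thresh
        for j, (box_n, _) in enumerate(nir_dets):
            if j not in used and iou(box_r, box_n) >= best_iou:
                best_j, best_iou = j, iou(box_r, box_n)
        if best_j is not None:
            used.add(best_j)
            box_n, score_n = nir_dets[best_j]
            fused.append(((np.add(box_r, box_n) / 2).tolist(),
                          (score_r + score_n) / 2))
        else:
            fused.append((box_r, score_r))
    fused.extend(d for j, d in enumerate(nir_dets) if j not in used)
    return fused

rgb = [([10, 10, 50, 50], 0.9)]
nir = [([12, 12, 52, 52], 0.7), ([100, 100, 140, 140], 0.8)]
print(late_fuse(rgb, nir))
```

Early fusion would instead stack the NIR channel with the RGB channels at the network input, before any detection takes place.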


International Conference on Image Processing | 2015

Modelling local deep convolutional neural network features to improve fine-grained image classification

ZongYuan Ge; Christopher McCool; Conrad Sanderson; Peter Corke

We propose a local modelling approach using deep convolutional neural networks (CNNs) for fine-grained image classification. Recently, deep CNNs trained from large datasets have considerably improved the performance of object recognition. However, to date there has been limited work using these deep CNNs as local feature extractors. This partly stems from CNNs having internal representations which are high dimensional, thereby making such representations difficult to model using stochastic models. To overcome this issue, we propose to reduce the dimensionality of one of the internal fully connected layers, in conjunction with layer-restricted retraining to avoid retraining the entire network. The distribution of low-dimensional features obtained from the modified layer is then modelled using a Gaussian mixture model. Comparative experiments show that considerable performance improvements can be achieved on the challenging Fish and UEC FOOD-100 datasets.
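The modelling step can be illustrated with a minimal sketch: reduced-dimension features for each class are modelled with a diagonal Gaussian (a one-component mixture, for brevity; the paper uses full Gaussian mixture models), and a query is assigned to the class with the highest log-likelihood. The features below are synthetic stand-ins for CNN activations.

```python
import numpy as np

def fit_gaussian(feats):
    # Fit a diagonal-covariance Gaussian to a set of feature vectors.
    mu = feats.mean(axis=0)
    var = feats.var(axis=0) + 1e-6  # variance floor for stability
    return mu, var

def log_likelihood(x, mu, var):
    # Log density of a diagonal Gaussian, up to shared constants.
    return -0.5 * np.sum(np.log(var) + (x - mu) ** 2 / var)

rng = np.random.default_rng(0)
# Pretend these are low-dimensional CNN features for two classes.
class_a = rng.normal(0.0, 1.0, size=(200, 8))
class_b = rng.normal(3.0, 1.0, size=(200, 8))
models = [fit_gaussian(class_a), fit_gaussian(class_b)]

query = rng.normal(3.0, 1.0, size=8)  # drawn near class B
scores = [log_likelihood(query, mu, var) for mu, var in models]
print(int(np.argmax(scores)))
```

A full GMM would sum weighted component densities instead of using a single Gaussian per class, but the classify-by-likelihood structure is the same.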


Workshop on Applications of Computer Vision | 2015

Evaluation of Features for Leaf Classification in Challenging Conditions

David Hall; Christopher McCool; Feras Dayoub; Niko Sünderhauf; Ben Upcroft

Fine-grained leaf classification has concentrated on the use of traditional shape and statistical features to classify ideal images. In this paper we evaluate the effectiveness of traditional hand-crafted features and propose the use of deep convolutional neural network (ConvNet) features. We introduce a range of condition variations to explore the robustness of these features, including: translation, scaling, rotation, shading and occlusion. Evaluations on the Flavia dataset demonstrate that in ideal imaging conditions, combining traditional and ConvNet features yields state-of-the-art performance with an average accuracy of 97.3% ± 0.6%, compared to traditional features, which obtain an average accuracy of 91.2% ± 1.6%. Further experiments show that this combined classification approach consistently outperforms the best set of traditional features by an average of 5.7% for all of the evaluated condition variations.
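Condition variations of the kind evaluated above can be simulated directly on an image array. A minimal sketch of two of them, translation and occlusion, with the image represented as a 2-D NumPy array (the specific shift and patch parameters are illustrative):

```python
import numpy as np

def translate(img, dx, dy):
    # Shift the image by (dx, dy), padding revealed pixels with zeros.
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    return out

def occlude(img, top, left, size):
    # Zero out a square patch, simulating partial occlusion of the leaf.
    out = img.copy()
    out[top:top + size, left:left + size] = 0
    return out

leaf = np.ones((8, 8))          # toy stand-in for a leaf image
shifted = translate(leaf, 2, 1)  # 2 px right, 1 px down
blocked = occlude(leaf, 0, 0, 4)
print(shifted.sum(), blocked.sum())
```

Rotation, scaling and shading would be applied analogously before feature extraction, so that robustness can be measured per variation.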


International Conference on Robotics and Automation | 2017

Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

Christopher Lehnert; Andrew English; Christopher McCool; Adam W. Tow; Tristan Perez

In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.


Computer Vision and Pattern Recognition | 2015

Subset feature learning for fine-grained category classification

ZongYuan Ge; Christopher McCool; Conrad Sanderson; Peter Corke

Fine-grained categorisation has been a challenging problem due to small inter-class variation, large intra-class variation and a low number of training images. We propose a learning system which first clusters visually similar classes and then learns deep convolutional neural network features specific to each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset show that the proposed method outperforms recent fine-grained categorisation methods under the most difficult setting: no bounding boxes are presented at test time. It achieves a mean accuracy of 77.5%, compared to the previous best performance of 73.2%. We also show that progressive transfer learning allows us to first learn domain-generic features (for bird classification) which can then be adapted to a specific set of bird classes, yielding improvements in accuracy.
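The first stage, clustering visually similar classes, can be sketched with a tiny k-means over per-class mean feature vectors. The 2-D features below are synthetic; the real system clusters in CNN feature space.

```python
import numpy as np

def cluster_classes(class_means, k=2, iters=20, rng=None):
    """Tiny k-means over per-class mean feature vectors, so that
    visually similar classes land in the same subset (one expert
    network is then trained per subset)."""
    if rng is None:
        rng = np.random.default_rng(0)
    centres = class_means[rng.choice(len(class_means), k, replace=False)]
    for _ in range(iters):
        # Distance from every class mean to every centre.
        d = np.linalg.norm(class_means[:, None] - centres[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centres[j] = class_means[assign == j].mean(axis=0)
    return assign

# Hypothetical mean features for six classes: two visually similar groups.
means = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1],
                  [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
print(cluster_classes(means))
```

Each resulting subset would then get its own fine-tuned network, trained only on the classes assigned to it.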


International Conference on Robotics and Automation | 2017

Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information

Inkyu Sa; Christopher Lehnert; Andrew English; Christopher McCool; Feras Dayoub; Ben Upcroft; Tristan Perez

This letter presents a three-dimensional (3-D) visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field. The peduncle is the part of the crop that attaches it to the main stem of the plant, and cutting it cleanly is one of the most difficult stages of the harvesting process. Accurate peduncle detection in 3-D space is, therefore, a vital step in reliable autonomous harvesting of sweet peppers, as it can enable precise cutting while avoiding damage to the surrounding plant. This letter makes use of both colour and geometry information acquired from an RGB-D sensor and utilizes a supervised-learning approach for the peduncle detection task. The performance of the proposed method is demonstrated and evaluated using qualitative and quantitative results [the area under the curve (AUC) of the detection precision-recall curve]. We achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. We release a set of manually annotated 3-D sweet pepper and peduncle images to assist the research community in performing further research on this topic.
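The evaluation metric used above, the area under the precision-recall curve, can be computed from ranked detection scores. A minimal average-precision-style sketch (the scores and labels are made up):

```python
import numpy as np

def pr_auc(scores, labels):
    """Area under the precision-recall curve: sort detections by score,
    sweep the threshold, and integrate precision over recall steps."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)          # true positives at each threshold
    fp = np.cumsum(1 - labels)      # false positives at each threshold
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    # Step integration: precision times each increment of recall.
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

scores = [0.9, 0.8, 0.7, 0.6]
labels = [1, 0, 1, 1]   # 1 = true peduncle detection
print(round(pr_auc(scores, labels), 3))
```

A single high-scoring false positive drags down precision at low recall, which is why this metric is a stricter summary than raw accuracy for detection tasks.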


Workshop on Applications of Computer Vision | 2016

Fine-grained classification via mixture of deep convolutional neural networks

ZongYuan Ge; Alex Bewley; Christopher McCool; Peter Corke; Ben Upcroft; Conrad Sanderson

We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN). The fine-grained image classification problem is characterised by large intra-class variations and small inter-class variations. To overcome these problems our proposed MixDCNN system partitions images into K subsets of similar images and learns an expert DCNN for each subset. The output from each of the K DCNNs is combined to form a single classification decision. In contrast to previous techniques, we provide a formulation to perform joint end-to-end training of the K DCNNs simultaneously. Extensive experiments, on three datasets using two network structures (AlexNet and GoogLeNet), show that the proposed MixDCNN system consistently outperforms other methods. It provides a relative improvement of 12.7% and achieves state-of-the-art results on two datasets.
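The combination step can be sketched as a confidence-based gate: each expert's weight comes from a softmax over the experts' own best class scores, so confident experts dominate the combined decision. This is a simplified single-sample sketch of the gating idea, not the paper's exact formulation.

```python
import numpy as np

def mixdcnn_combine(expert_logits):
    """Combine K expert class-score vectors into one decision.
    Each expert is weighted by a softmax over its own best score
    (its confidence), then the weighted scores are summed."""
    expert_logits = np.asarray(expert_logits, dtype=float)  # shape (K, C)
    best = expert_logits.max(axis=1)                        # per-expert peak
    w = np.exp(best - best.max())
    w /= w.sum()                                            # gate weights
    return (w[:, None] * expert_logits).sum(axis=0)         # shape (C,)

# Two hypothetical experts scoring three classes.
experts = [[2.0, 0.5, 0.1],    # confident expert, prefers class 0
           [0.3, 0.4, 0.2]]    # unconfident expert, prefers class 1
combined = mixdcnn_combine(experts)
print(int(combined.argmax()))
```

Because the gate depends only on the experts' own outputs, the whole mixture can be trained end-to-end with a single loss, which is the key contrast with earlier two-stage approaches.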


International Conference on Robotics and Automation | 2016

Sweet pepper pose detection and grasping for automated crop harvesting

Christopher Lehnert; Inkyu Sa; Christopher McCool; Ben Upcroft; Tristan Perez

This paper presents a method for estimating the 6DOF pose of sweet-pepper (capsicum) crops for autonomous harvesting via a robotic manipulator. The method uses the Kinect Fusion algorithm to robustly fuse RGB-D data from an eye-in-hand camera, combined with a colour segmentation and clustering step, to extract an accurate representation of the crop. The 6DOF pose of the sweet peppers is then estimated via a nonlinear least squares optimisation by fitting a superellipsoid to the segmented sweet pepper. The performance of the method is demonstrated on a real 6DOF manipulator with a custom gripper. The method is shown to estimate the 6DOF pose successfully, enabling the manipulator to grasp sweet peppers across a range of different orientations. The results show a large improvement in grasping performance compared to a naive approach that does not estimate the orientation of the crop.
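The shape-fitting step can be illustrated with the simplest superellipsoid, a sphere, fitted to 3-D points by gradient descent on the radial least-squares residual. The paper fits a general superellipsoid with a nonlinear least squares solver; the data here is synthetic.

```python
import numpy as np

def fit_sphere(points, iters=500, lr=0.1):
    """Least-squares fit of a sphere (centre c, radius R) to 3-D points
    by gradient descent on the signed radial residual r_i = |p_i - c| - R."""
    c = points.mean(axis=0)                            # initial centre
    R = np.linalg.norm(points - c, axis=1).mean()      # initial radius
    for _ in range(iters):
        d = np.linalg.norm(points - c, axis=1)
        r = d - R                                      # signed residuals
        grad_c = (r[:, None] * (c - points) / d[:, None]).mean(axis=0)
        grad_R = -r.mean()
        c -= lr * grad_c
        R -= lr * grad_R
    return c, R

rng = np.random.default_rng(1)
# Noisy points on a sphere of radius 2 centred at (1, 2, 3),
# standing in for a segmented sweet-pepper point cloud.
dirs = rng.normal(size=(300, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 2.0 * dirs + rng.normal(0, 0.01, (300, 3))
c, R = fit_sphere(pts)
print(np.round(c, 1), round(R, 2))
```

A superellipsoid adds shape-exponent parameters to the same residual-minimisation scheme, and its axes give the orientation needed for grasp planning.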


Workshop on Applications of Computer Vision | 2014

Local inter-session variability modelling for object classification

Kaneswaran Anantharajah; ZongYuan Ge; Christopher McCool; Simon Denman; Clinton Fookes; Peter Corke; Dian Tjondronegoro; Sridha Sridharan

Object classification is plagued by the issue of session variation. Session variation describes any variation that makes one instance of an object look different from another, for instance due to pose or illumination. Recent work in the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations; however, for computer vision purposes it has only been applied in the limited setting of face verification. In this paper we propose a region-based inter-session variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database which includes images taken underwater, providing significant real-world session variations. This Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases. It also provides a relative performance improvement of 35% on our challenging fish image dataset.
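A drastically simplified sketch of the session-compensation idea: treat session variation as an additive offset in feature space, estimate it from the session's own data, and remove it. The actual ISV model learns a low-dimensional session subspace rather than a single offset, and the data below is synthetic.

```python
import numpy as np

def compensate_session(features, client_mean):
    """Estimate the session effect as the gap between the session's
    mean feature and the enrolled client mean, then subtract it from
    every feature vector of that session."""
    offset = features.mean(axis=0) - client_mean
    return features - offset

rng = np.random.default_rng(2)
client_mean = np.array([1.0, -1.0, 0.5])
# Features from one session, shifted by a session-wide offset
# (e.g. a global illumination change) plus per-sample noise.
session = client_mean + np.array([0.8, 0.8, 0.8]) + rng.normal(0, 0.05, (50, 3))
clean = compensate_session(session, client_mean)
print(np.round(np.abs(clean.mean(axis=0) - client_mean).max(), 6))
```

The "local" part of Local ISV applies this kind of compensation per image region, so that, say, a shadow on one part of the object does not corrupt the whole representation.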

Collaboration


Christopher McCool's top co-authors:

Tristan Perez, Queensland University of Technology
Christopher Lehnert, Queensland University of Technology
Ben Upcroft, Queensland University of Technology
Feras Dayoub, Queensland University of Technology
Sridha Sridharan, Queensland University of Technology
Inkyu Sa, Queensland University of Technology
Vinod Chandran, Queensland University of Technology
Clinton Fookes, Queensland University of Technology
Peter Corke, Queensland University of Technology