
Publication


Featured research published by Alex Bewley.


International Conference on Image Processing | 2016

Simple online and realtime tracking

Alex Bewley; ZongYuan Ge; Lionel Ott; Fabio Ramos; Ben Upcroft

This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online and realtime applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9%. Despite only using a rudimentary combination of familiar techniques such as the Kalman Filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers.
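The association step described above can be sketched in a few lines: predicted track boxes are matched to detections by maximising total intersection-over-union with the Hungarian algorithm. This is a minimal illustration of that step only; the Kalman filter prediction and track management are omitted, and the box format (x1, y1, x2, y2) is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to detections by maximising total IoU."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    # Reject assigned pairs whose overlap falls below the threshold.
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]
```

The speed of the full method comes precisely from this simplicity: the cost matrix is cheap to build and the assignment is solved exactly in polynomial time.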


International Conference on Robotics and Automation | 2014

Online Self-Supervised Multi-Instance Segmentation of Dynamic Objects

Alex Bewley; Vitor Guizilini; Fabio Ramos; Ben Upcroft

This paper presents a method for the continuous segmentation of dynamic objects using only a vehicle-mounted monocular camera, without any prior knowledge of the objects' appearance. Prior work in online static/dynamic segmentation [1] is extended to identify multiple instances of dynamic objects by introducing an unsupervised motion clustering step. These clusters are then used to update a multi-class classifier within a self-supervised framework. In contrast to many tracking-by-detection based methods, our system is able to detect dynamic objects without any prior knowledge of their visual appearance, shape, or location. Furthermore, the classifier is used to propagate labels of the same object from previous frames, which facilitates the continuous tracking of individual objects based on motion. The proposed system is evaluated using recall and false alarm metrics, in addition to a new multi-instance labelled dataset, to measure the performance of segmenting multiple instances of objects.
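The unsupervised motion clustering step can be illustrated schematically: image points are grouped by the similarity of their motion vectors, and each resulting cluster becomes a candidate object instance. This greedy seed-based grouping and the distance radius are illustrative assumptions, not the paper's exact clustering method.

```python
import numpy as np

def cluster_motions(flow, radius=1.0):
    """Group per-point 2-D motion vectors into instance labels.

    A point joins an existing cluster if its motion vector lies within
    `radius` of that cluster's seed vector; otherwise it starts a new one.
    """
    seeds, labels = [], []
    for v in flow:
        d = [np.linalg.norm(np.asarray(v, float) - s) for s in seeds]
        if d and min(d) < radius:
            labels.append(int(np.argmin(d)))
        else:
            seeds.append(np.asarray(v, float))
            labels.append(len(seeds) - 1)
    return labels
```

Points moving together receive the same label, which is what lets the self-supervised framework mine per-instance training labels without appearance priors.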


Workshop on Applications of Computer Vision | 2016

Fine-grained classification via mixture of deep convolutional neural networks

ZongYuan Ge; Alex Bewley; Christopher McCool; Peter Corke; Ben Upcroft; Conrad Sanderson

We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN). The fine-grained image classification problem is characterised by large intra-class variations and small inter-class variations. To overcome these problems our proposed MixDCNN system partitions images into K subsets of similar images and learns an expert DCNN for each subset. The output from each of the K DCNNs is combined to form a single classification decision. In contrast to previous techniques, we provide a formulation to perform joint end-to-end training of the K DCNNs simultaneously. Extensive experiments, on three datasets using two network structures (AlexNet and GoogLeNet), show that the proposed MixDCNN system consistently outperforms other methods. It provides a relative improvement of 12.7% and achieves state-of-the-art results on two datasets.
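The combination of the K expert outputs into a single decision can be sketched as follows: each expert's class distribution is weighted by an occupation probability derived from that expert's own best score, then summed. This is a hedged illustration of the mixture idea; the exact occupation formulation and training details are in the paper.

```python
import numpy as np

def mix_experts(expert_logits):
    """Fuse K expert predictions. expert_logits: (K, C) array of logits."""
    best = expert_logits.max(axis=1)                # each expert's top score
    occ = np.exp(best - best.max())
    occ /= occ.sum()                                # occupation probabilities
    probs = np.exp(expert_logits - expert_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)       # per-expert softmax
    return occ @ probs                              # (C,) fused distribution
```

Because the fused output is differentiable in every expert's logits, all K networks can be trained jointly end-to-end, which is the key contrast with earlier subset-expert systems.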


International Conference on Robotics and Automation | 2016

ALExTRAC: Affinity learning by exploring temporal reinforcement within association chains

Alex Bewley; Lionel Ott; Fabio Ramos; Ben Upcroft

This paper presents a self-supervised approach for learning to associate object detections in a video sequence, as often required in tracking-by-detection systems. We focus on learning an affinity model to estimate the data association cost, which can adapt to different situations by exploiting the sequential nature of video data. We also propose a framework for gathering additional training samples at test time with high variation in visual appearance, naturally inherent in large temporal windows. Reinforcing the model with these difficult samples greatly improves the affinity model compared to standard similarity measures such as cosine similarity. We experimentally demonstrate the efficacy of the resulting affinity model on several multiple object tracking (MOT) benchmark sequences. Using the affinity model alone places this approach in the top 25 state-of-the-art trackers, with an average rank of 21.3 across 11 test sequences and an overall multiple object tracking accuracy (MOTA) of 17%. This is notable given that our simple approach uses only the appearance of the detected regions, in contrast to other techniques that rely on global optimisation or complex motion models.
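The cosine-similarity baseline that the learned affinity model is compared against is straightforward: the association cost between two detections is derived from the angle between their appearance feature vectors. A minimal sketch, assuming generic feature vectors:

```python
import numpy as np

def cosine_affinity(a, b):
    """Cosine similarity between two appearance feature vectors;
    higher similarity implies a lower data association cost."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The learned affinity replaces this fixed measure with one adapted online to the sequence at hand, which is where the reported improvement comes from.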


ARC Centre of Excellence for Robotic Vision; Science & Engineering Faculty | 2016

From ImageNet to Mining: Adapting Visual Object Detection with Minimal Supervision

Alex Bewley; Ben Upcroft

This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition, but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier to complement the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine site data, where we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.
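The background model's test-time use can be sketched as a nearest-cluster check: k-means centroids are fit to features of background-only images, and a candidate detection whose feature lies close to any centroid is flagged as a likely false positive. The Euclidean metric and the threshold value here are illustrative assumptions.

```python
import numpy as np

def is_background(feature, centroids, threshold=1.0):
    """Return True if `feature` lies within `threshold` (Euclidean
    distance) of the nearest background cluster centroid."""
    d = np.linalg.norm(centroids - feature, axis=1)
    return bool(d.min() < threshold)
```

Suppressing detections that pass this check is what drives the order-of-magnitude reduction in false positives reported above.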


International Conference on Image Processing | 2015

Fine-grained bird species recognition via hierarchical subset learning

ZongYuan Ge; Christopher McCool; Conrad Sanderson; Alex Bewley; Peter Corke

We propose a novel method to improve fine-grained bird species classification based on hierarchical subset learning. We first form a similarity tree where classes with strong visual correlations are grouped into subsets. An expert local classifier with strong discriminative power to distinguish visually similar classes is then learnt for each subset. On the challenging Caltech200-2011 bird dataset we show that using the hierarchical approach with features derived from a deep convolutional neural network leads to the average accuracy improving from 64.5% to 72.7%, a relative improvement of 12.7%.
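The inference path of the hierarchical approach can be shown schematically: a coarse router first selects the subset of visually similar species, then that subset's expert classifier makes the final fine-grained decision. The router and expert callables below are stand-ins, not the paper's actual models.

```python
def hierarchical_classify(feature, router, experts):
    """router(feature) -> subset index; experts[i](feature) -> class label."""
    subset = router(feature)          # pick the group of similar classes
    return experts[subset](feature)   # defer to that group's expert
```

Each expert only has to separate the visually confusable classes within its own subset, which is where the accuracy gain over a single flat classifier comes from.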


Journal of Field Robotics | 2017

Background Appearance Modeling with Applications to Visual Object Detection in an Open-Pit Mine

Alex Bewley; Ben Upcroft

This paper addresses the problem of detecting people and vehicles on a surface mine by presenting an architecture that combines the complementary strengths of deep convolutional networks (DCNs) with cluster-based analysis. We highlight that using a DCN in a naive black-box approach results in a high rate of errors due to the lack of mining-specific training data and the unique landscape of a mine site. In this work, we propose a background model that exploits the abundance of background-only images to discover the natural clusters in visual appearance using features extracted from the DCN. Both a simple nearest-cluster background model and an extended model with cosine features are investigated for their ability to identify and suppress potential false positives made by the DCN. Furthermore, localization of objects of interest is enabled through region proposals, which have been tuned to increase recall within the constraints of a computational budget. Finally, a soft fusion framework is presented to combine the estimates of both the DCN and background model to improve the accuracy of the detection. Our system is tested on over 11 km of real mine site data in both day and night conditions, where we were able to detect both light and heavy vehicles along with mining personnel. We show that the introduction of our background model improves the detection performance. In particular, soft fusion of the background model and the DCN output produces a relative improvement in the F1 score of 46% and 28% compared to a baseline pretrained DCN and a DCN retrained with mining images, respectively.
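The idea of soft fusion can be illustrated as follows: rather than hard-rejecting detections near a background cluster, the detector's confidence is continuously down-weighted by how background-like the region's feature is. The product rule and the exponential background-likelihood form below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fused_score(dcn_score, feature, bg_centroids, scale=1.0):
    """Blend detector confidence with a background-model penalty."""
    d = np.linalg.norm(bg_centroids - feature, axis=1).min()
    p_foreground = 1.0 - np.exp(-d / scale)   # near a bg cluster -> small
    return float(dcn_score * p_foreground)
```

A soft blend of this kind preserves confident detections far from any background cluster while smoothly suppressing those the background model considers suspicious.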


International Conference on Image Processing | 2017

Simple online and realtime tracking with a deep association metric

Nicolai Wojke; Alex Bewley; Dietrich Paulus


Science & Engineering Faculty | 2013

Advantages of exploiting projection structure for segmenting dense 3D point clouds

Alex Bewley; Ben Upcroft


Neural Information Processing Systems | 2017

Hierarchical Attentive Recurrent Tracking

Adam R. Kosiorek; Alex Bewley; Ingmar Posner

Collaboration


Dive into Alex Bewley's collaborations.

Top Co-Authors

Ben Upcroft | Queensland University of Technology
ZongYuan Ge | Queensland University of Technology
Christopher McCool | Queensland University of Technology
Peter Corke | Queensland University of Technology
Rajiv Shekhar | University of Queensland
Nicolai Wojke | University of Koblenz and Landau