
Publication


Featured research published by John W. Bastian.


International Symposium on Mixed and Augmented Reality | 2010

Interactive modelling for AR applications

John W. Bastian; Ben Ward; Rhys Hill; Anton van den Hengel; Anthony R. Dick

We present a method for estimating the 3D shape of an object from a sequence of images captured by a hand-held device. The method is well suited to augmented reality applications in that minimal user interaction is required, and the models generated are of an appropriate form. The method proceeds by segmenting the object in every image as it is captured and using the calculated silhouette to update the current shape estimate. In contrast to previous silhouette-based modelling approaches, however, the segmentation process is informed by a 3D prior based on the previous shape estimate. A voting scheme is also introduced in order to compensate for the inevitable noise in the camera position estimates. The combination of the voting scheme with the closed-loop segmentation process provides a robust and flexible shape estimation method. We demonstrate the approach on a number of scenes where segmentation without a 3D prior would be challenging.
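
The voting scheme over silhouettes can be illustrated with a toy carving loop. This is a minimal sketch, not the paper's implementation: the grid size, the `project` callback, and the two orthographic views are all invented for illustration, and a voxel is kept when enough views agree rather than requiring unanimity, which is what makes the scheme tolerant of noisy pose estimates.

```python
import numpy as np

def carve_with_voting(grid_shape, silhouettes, project, min_votes):
    # Keep a voxel if it projects inside the silhouette in at least
    # `min_votes` views; requiring less than unanimity tolerates a few
    # noisy camera-pose estimates.
    votes = np.zeros(grid_shape, dtype=int)
    voxels = np.indices(grid_shape).reshape(3, -1).T   # every (x, y, z)
    for cam, sil in enumerate(silhouettes):
        for v in voxels:
            u, w = project(cam, v)                     # voxel -> pixel
            if 0 <= u < sil.shape[0] and 0 <= w < sil.shape[1] and sil[u, w]:
                votes[tuple(v)] += 1
    return votes >= min_votes

# Toy example: two orthographic views of a 4x4x4 grid, both seeing a
# 2x2 square silhouette.
sil_a = np.zeros((4, 4), bool); sil_a[1:3, 1:3] = True
sil_b = np.zeros((4, 4), bool); sil_b[1:3, 1:3] = True

def project(cam, v):
    x, y, z = v
    return (y, z) if cam == 0 else (x, z)   # view 0 looks along x, view 1 along y

occupied = carve_with_voting((4, 4, 4), [sil_a, sil_b], project, min_votes=2)
```

With both views agreeing, the carved shape is the 2x2x2 intersection of the two silhouette cones.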


Scientific Reports | 2016

Fast machine-learning online optimization of ultra-cold-atom experiments

Paul Wigley; P. J. Everitt; A. van den Hengel; John W. Bastian; M. A. Sooriyabandara; Gordon McDonald; Kyle S. Hardman; C. D. Quinlivan; P. Manju; C. C. N. Kuhn; Ian R. Petersen; Andre Luiten; Joseph Hope; Nicholas Robins; Michael R. Hush

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations, our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high-quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show that the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
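
The learner follows the general pattern of Gaussian-process-based online optimization. The sketch below is a generic GP upper-confidence-bound loop over a single control parameter; the toy quadratic objective stands in for a real BEC quality measurement and is invented here, as are all function names. It is not the authors' code, but it shows how a GP posterior guides where the next experiment is run.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard zero-mean GP regression: posterior mean and variance at Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, var

def online_optimize(objective, n_iters=25, seed=0):
    # Each iteration fits a GP to every evaluation so far, then "runs the
    # experiment" at the setting with the best upper confidence bound.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, 3)                 # a few random initial settings
    y = np.array([objective(x) for x in X])
    grid = np.linspace(0, 1, 200)
    for _ in range(n_iters):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + 2.0 * np.sqrt(np.maximum(var, 0.0)))]
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    return X[np.argmax(y)]

# Stand-in "BEC quality" curve peaking at 0.7.
best = online_optimize(lambda x: -(x - 0.7) ** 2)
```

Because the GP carries a full statistical model (mean and variance), the same fit can later be inspected to see which parameters the objective is actually sensitive to, which is the insight the paper draws from its internal model.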


Workshop on Applications of Computer Vision | 2009

Automatic camera placement for large scale surveillance networks

Anton van den Hengel; Rhys Hill; Ben Ward; Alex Cichowski; Henry Detmold; Christopher S. Madden; Anthony R. Dick; John W. Bastian

Automatic placement of surveillance cameras in arbitrary buildings is a challenging task, and also one that is essential for efficient deployment of large scale surveillance networks. Existing approaches for automatic camera placement are either limited to a small number of cameras, or constrained in terms of the building layouts to which they can be applied. This paper describes a new method for determining the best placement for large numbers of cameras within arbitrary building layouts. The method takes as input a 3D model of the building, and uses a genetic algorithm to find a placement that optimises coverage and (if desired) overlap between cameras. Results are reported for an implementation of the method, including its application to a wide variety of complex buildings, both real and synthetic.
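
A genetic algorithm over camera placements can be sketched as follows. Everything here is a stand-in: the `COVERAGE` table replaces visibility computed from a 3D building model, and the fitness counts covered floor cells only (the paper's objective can also penalise or reward overlap). It is a hedged illustration of the search strategy, not the paper's system.

```python
import random

# Toy building model: each candidate mount point covers a fixed set of
# floor cells (precomputed visibility, standing in for the 3D model).
COVERAGE = {
    0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5},
    3: {5, 6}, 4: {6, 7, 8}, 5: {0, 8},
}
ALL_CELLS = set().union(*COVERAGE.values())
N_CAMERAS = 3   # camera budget

def fitness(genome):
    # Coverage objective: number of distinct floor cells seen.
    return len(set().union(*(COVERAGE[g] for g in genome)))

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    sites = list(COVERAGE)
    pop = [tuple(rng.sample(sites, N_CAMERAS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_CAMERAS)
            child = list(dict.fromkeys(a[:cut] + b[cut:]))  # one-point crossover
            while len(child) < N_CAMERAS:                   # repair/mutate with
                child = list(dict.fromkeys(child + [rng.choice(sites)]))
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The appeal of the genetic search is that it only needs the fitness function to be evaluable, so it scales to large camera counts and arbitrary layouts where exact placement optimisation is intractable.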


Computer Vision and Pattern Recognition | 2015

Part-based modelling of compound scenes from images

Anton van den Hengel; Chris Russell; Anthony R. Dick; John W. Bastian; Daniel Pooley; Lachlan Fleming; Lourdes Agapito

We propose a method to recover the structure of a compound scene from multiple silhouettes. Structure is expressed as a collection of 3D primitives chosen from a predefined library, each with an associated pose. This has several advantages over a volume or mesh representation both for estimation and the utility of the recovered model. The main challenge in recovering such a model is the combinatorial number of possible arrangements of parts. We address this issue by exploiting the intrinsic structure and sparsity of the problem, and show that our method scales to scenes constructed from large libraries of parts.


Computer Vision and Pattern Recognition | 2016

A Consensus-Based Framework for Distributed Bundle Adjustment

Anders Eriksson; John W. Bastian; Tat-Jun Chin; Mats Isaksson

In this paper we study large-scale optimization problems in multi-view geometry, in particular the Bundle Adjustment problem. In its conventional formulation, the complexity of existing solvers scales poorly with problem size, hence this component of the Structure-from-Motion pipeline can quickly become a bottleneck. Here we present a novel formulation for solving bundle adjustment in a truly distributed manner using consensus-based optimization methods. Our algorithm is presented with a concise derivation based on proximal splitting, along with a theoretical proof of convergence and brief discussions on complexity and implementation. Experiments on a number of real image datasets convincingly demonstrate the potential of the proposed method, outperforming the conventional bundle adjustment formulation by orders of magnitude.
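
The consensus structure underlying such a method can be shown on a much simpler problem. In the sketch below, the bundle adjustment local refinement is replaced by a least-squares block solve, which is an assumption made purely for illustration; what carries over is the splitting pattern: each worker updates a local copy, the copies are averaged into a consensus variable, and scaled dual variables enforce agreement.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=300):
    # Each "worker" i holds one data block (A_i, b_i) and a local copy x_i;
    # agreement is enforced through the consensus average z and scaled
    # dual variables u_i.
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]
    z = np.zeros(n)
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(As, bs)):
            # Local proximal step:
            #   argmin_x ||A x - b||^2 + (rho/2) ||x - z + u_i||^2
            lhs = 2.0 * A.T @ A + rho * np.eye(n)
            rhs = 2.0 * A.T @ b + rho * (z - us[i])
            xs[i] = np.linalg.solve(lhs, rhs)
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # consensus step
        for i in range(len(us)):
            us[i] += xs[i] - z                                # dual update
    return z

# Two workers jointly solving one overdetermined least-squares problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
z = consensus_admm([A[:4], A[4:]], [b[:4], b[4:]])
```

The distributed appeal is that each local proximal step touches only one worker's data, so the per-iteration cost is divided across machines while the consensus and dual updates are cheap averages.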


International Conference on Image Processing | 2013

Extended depth-of-field via focus stacking and graph cuts

Chao Zhang; John W. Bastian; Chunhua Shen; Anton van den Hengel; Tingzhi Shen

Optical lenses are only able to focus a single scene plane onto the sensor, leaving the remainder of the scene subject to varying levels of defocus. The apparent depth of field can be extended by capturing a sequence with varying focal planes that is merged by selecting, for each pixel in the target image, the most focused corresponding pixel from the stack. This process is heavily dependent on capturing a stabilised sequence, a requirement that is impractical for hand-held cameras. Here we develop a novel method that can merge a focus stack captured by a hand-held camera despite changes in shooting position and focus. Our approach registers the sequence using an affine transformation before fusing the focus stack. We develop a merging process that identifies the most focused frame at each pixel location and therefore selects the most appropriate pixels for the synthetically focused image. We also propose a novel approach for capturing a qualified focus stack on mobile phone cameras, and we test our approach on a mobile phone platform that can automatically capture a focus stack as easily as a photographer capturing a conventional image.
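
The per-pixel selection step can be sketched with a simple focus measure. This is a minimal illustration under stated assumptions: it uses a discrete-Laplacian magnitude as the sharpness score and a raw per-pixel argmax, whereas the paper additionally registers the frames and regularises the label map with graph cuts.

```python
import numpy as np

def focus_measure(img):
    # Per-pixel sharpness: magnitude of the discrete Laplacian, a common
    # focus measure (borders are left at zero).
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = np.abs(
        4 * img[1:-1, 1:-1]
        - img[:-2, 1:-1] - img[2:, 1:-1]
        - img[1:-1, :-2] - img[1:-1, 2:]
    )
    return lap

def merge_stack(stack):
    # For each pixel, copy the value from the frame where it is sharpest.
    sharpness = np.stack([focus_measure(f) for f in stack])
    best = np.argmax(sharpness, axis=0)          # winning frame per pixel
    rows, cols = np.indices(best.shape)
    return np.stack(stack)[best, rows, cols], best
```

On a toy stack where one frame is sharp on the left and the other on the right, the argmax label map cleanly splits the image, which is the signal a graph-cut smoothing step would then clean up at region boundaries.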


Digital Image Computing: Techniques and Applications | 2005

Computing Surface-Based Photo-Consistency on Graphics Hardware

John W. Bastian; A. van den Hengel

This paper describes a novel approach to the problem of recovering information from an image set by comparing the radiance of hypothesised point correspondences. Our algorithm is applicable to a number of problems in computer vision, but is explained particularly in terms of recovering geometry from an image set. It uses the idea of photo-consistency to measure the confidence that a hypothesised scene description generated the reference images. Photo-consistency has been used in volumetric scene reconstruction where a hypothesised surface is evolved by considering one voxel at a time. Our approach is different: it represents the scene as a parameterised surface so decisions can be made about its photo-consistency simultaneously over the entire surface rather than a series of independent decisions. Our approach is further characterised by its ability to execute on graphics hardware. Experiments demonstrate that our cost function minimises at the solution and is not adversely affected by occlusion.
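
The photo-consistency test itself can be sketched in a few lines. This is a CPU/numpy illustration of the idea only, not the paper's graphics-hardware implementation, and the identity "projection" and toy views below are invented: a hypothesised surface point is scored by the variance of the radiance the cameras observe at it, since a true surface point should look the same from every unoccluded view.

```python
import numpy as np

def photo_consistency(points, images, project):
    # Score each hypothesised surface point by the variance of the
    # radiance sampled across views: low variance means the point is
    # consistent with having generated all the reference images.
    scores = []
    for p in points:
        samples = []
        for cam, img in enumerate(images):
            u, v = project(cam, p)             # point -> pixel in view `cam`
            if 0 <= u < img.shape[0] and 0 <= v < img.shape[1]:
                samples.append(img[u, v])
        scores.append(np.var(samples) if len(samples) >= 2 else np.inf)
    return np.array(scores)

# Two toy views with an identity "projection": the first point sees the
# same radiance in both views, the second does not.
view_0 = np.array([[0.2, 0.8], [0.2, 0.8]])
view_1 = np.array([[0.2, 0.2], [0.8, 0.8]])
scores = photo_consistency([(0, 0), (0, 1)], [view_0, view_1],
                           lambda cam, p: p)
```

Evaluating this score over an entire parameterised surface at once, rather than one voxel at a time, is what lets the approach make simultaneous decisions and map naturally onto graphics hardware.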


Brain and Cognition | 2015

Visual asymmetries for relative depth judgments in a three-dimensional space

Ancret Szpak; Tobias Loetscher; John W. Bastian; Nicole A. Thomas; Michael E. R. Nicholls

Our ability to process information about an object's location in depth varies along the horizontal and vertical axes. These variations reflect functional specialisation of the cerebral hemispheres as well as the ventral/dorsal visual streams for processing stimuli located in near and far space. Prior research has demonstrated visual field superiorities for processing near space in the lower and right hemispaces and for far space in the upper and left hemispaces. No research, however, has directly tested whether the functional specialisation of the visual fields actually makes objects look closer when presented in the lower or right visual fields. To measure biases in the perception of depth, we employed anaglyph stimuli where participants made closer/further judgments about the relative location of two spheres in a three-dimensional virtual space. We observed clear processing differences in this task where participants perceived the right and lower spheres to be closer and the left and upper spheres to be further away. Furthermore, no relationship between the horizontal and vertical dimensions was observed, suggesting separate cognitive/neural mechanisms. Not only does this methodology clearly demonstrate differences in perceived depth across the visual field, it also opens up many possibilities for studying functional asymmetries in three-dimensional space.


European Conference on Computer Vision | 2014

A Model-Based Approach to Recovering the Structure of a Plant from Images

Ben Ward; John W. Bastian; Anton van den Hengel; Daniel Pooley; Rajendra Bari; Bettina Berger; Mark Tester

We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.
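
The generate-and-test strategy is simple to sketch. In this hedged illustration the "parts" are just leaf lengths and the score matches a target total length; in the paper the library holds manually modelled leaves and candidates are scored against image silhouettes, so every name below is a toy stand-in.

```python
import random

def generate_and_test(library, compose, score, n_candidates=200, seed=0):
    # Synthesise candidate structures by composing parts from the library,
    # score each against the observations, and keep the best.
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = compose(rng, library)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy stand-ins: "parts" are leaf lengths, a "plant" is three leaves, and
# the score rewards matching a target total length of 10 (in place of a
# silhouette comparison).
library = [1, 2, 3, 4, 5, 6]
compose = lambda rng, lib: [rng.choice(lib) for _ in range(3)]
score = lambda plant: -abs(sum(plant) - 10)
best_plant, best_score = generate_and_test(library, compose, score)
```

The strength of this scheme is that it needs no feature matching at all: any structure that can be synthesised and rendered can be scored, which is why it suits thin, feature-poor plants like wheat.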


Digital Image Computing: Techniques and Applications | 2003

Computing Image-Based Reprojection Error on Graphics Hardware

John W. Bastian; Anton van den Hengel

Collaboration


Dive into John W. Bastian's collaboration.

Top Co-Authors

Ben Ward

University of Adelaide


Rhys Hill

University of Adelaide
