Mark Whitty
University of New South Wales
Publications
Featured research published by Mark Whitty.
intelligent robots and systems | 2009
Jun Zhao; Mark Whitty; Jayantha Katupitiya
Ground plane detection plays an important role in stereo-vision-based obstacle detection methods. Recently, the V-Disparity image has been widely used for ground plane detection. Existing approaches based on the V-Disparity image can detect flat ground successfully but have difficulty detecting non-flat ground. In this paper, we discuss the representation of non-flat ground in the V-Disparity image and, based on this, propose a method to detect non-flat ground using the V-Disparity image.
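As a rough illustration of the V-disparity idea described above, the sketch below accumulates a per-row disparity histogram and fits a straight line to it, which is how a flat ground plane appears in V-disparity space; the function names, histogram parameters and simple least-squares fit are assumptions for illustration, not the paper's implementation (which extends the representation to non-flat ground).

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build a V-disparity image: one histogram of disparities per image row."""
    rows = disparity.shape[0]
    v_disp = np.zeros((rows, max_disp), dtype=np.int32)
    for v in range(rows):
        row = disparity[v]
        valid = row[(row >= 0) & (row < max_disp)]
        hist, _ = np.histogram(valid, bins=max_disp, range=(0, max_disp))
        v_disp[v] = hist
    return v_disp

def fit_ground_line(v_disp, min_votes=20):
    """Fit d = a*v + b to the dominant cells: a flat ground plane projects to a
    straight line in V-disparity space. Non-flat ground would instead be
    modelled by a piecewise-linear or curved profile."""
    vs, ds = np.nonzero(v_disp >= min_votes)
    if len(vs) < 2:
        return None
    a, b = np.polyfit(vs, ds, 1)  # least squares; RANSAC would be more robust
    return a, b

# Synthetic disparity map: disparity grows towards the bottom of the image,
# as it would for flat ground in front of the camera.
disp = np.tile(np.linspace(0.0, 50.0, 240).reshape(-1, 1), (1, 320))
print(fit_ground_line(v_disparity(disp)))
```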
Journal of Applied Logic | 2015
Scarlett Liu; Mark Whitty
Precise yield estimation in vineyards using image processing techniques has only been demonstrated conceptually on a small scale. Expanding this scale requires significant computational power, particularly since only small parts of the images of vines contain useful features. This paper introduces an image processing algorithm that combines colour and texture information with a support vector machine to accelerate fruit detection by isolating and counting bunches in images. Experiments carried out on two varieties of red grapes (Shiraz and Cabernet Sauvignon) demonstrate an accuracy of 88.0% and a recall of 91.6%. This method is also shown to remove the restrictions on field of view and background which plagued existing methods, and is a first step towards precise and reliable yield estimation on a large scale.
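A minimal sketch of the general approach (simple colour and texture features per image patch fed to a support vector machine) is given below; the patch size, the particular features and the random stand-in training data are assumptions for illustration and do not reproduce the paper's descriptors.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def patch_features(image_bgr, patch=16):
    """Mean HSV colour plus a grey-level standard deviation (a crude texture
    cue) for each non-overlapping patch; stand-ins for the paper's
    colour/texture descriptors."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    feats, coords = [], []
    h, w = grey.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            hsv_block = hsv[y:y + patch, x:x + patch].reshape(-1, 3)
            feats.append(np.hstack([hsv_block.mean(axis=0),
                                    grey[y:y + patch, x:x + patch].std()]))
            coords.append((y, x))
    return np.array(feats), coords

# Training would use patches hand-labelled as bunch / not-bunch; random data
# here only illustrates the interface.
clf = SVC(kernel="rbf").fit(np.random.rand(200, 4), np.random.randint(0, 2, 200))

image = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)  # placeholder image
X, coords = patch_features(image)
bunch_patches = [c for c, label in zip(coords, clf.predict(X)) if label == 1]
print(len(bunch_patches), "candidate bunch patches")
```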
Journal of Field Robotics | 2012
José E. Guivant; Stephen Cossell; Mark Whitty; Jayantha Katupitiya
The problem of remote-controlling a mobile robot through the Internet, with its associated bandwidth constraints, is addressed in this paper. Our solution combines a novel communication and processing module with a unique sensor layout and a flexible control architecture to achieve a range of capabilities, from traditional teleoperation to point-and-click autonomy. Careful management of the available bandwidth enables a demonstration of these capabilities between nodes separated by 20,000 km while also providing real-time three-dimensional (3D) models of the environment through the Internet. A spatially oriented compression algorithm, integral to efficient bandwidth management, is also presented. Experiments establish the effectiveness of the extended situational awareness in improving the efficiency and accuracy of driving a mobile robot through a cluttered environment, compared with existing 2D map or video streaming methods.
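A hedged sketch of what a spatially oriented compression step could look like: points far from the robot are kept at coarser resolution than nearby points before transmission. The voxel scheme and parameters below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def spatially_adaptive_downsample(points, robot_xyz, near=0.05, far=0.5, r_max=20.0):
    """Keep roughly one point per spatial cell, with the cell size growing
    linearly with distance from the robot, so nearby geometry is transmitted
    at higher detail than distant geometry."""
    d = np.linalg.norm(points - robot_xyz, axis=1)
    voxel = near + (far - near) * np.clip(d / r_max, 0.0, 1.0)
    keys = np.floor(points / voxel[:, None]).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

cloud = np.random.uniform(-10.0, 10.0, size=(50000, 3))
compressed = spatially_adaptive_downsample(cloud, robot_xyz=np.zeros(3))
print(f"{len(cloud)} -> {len(compressed)} points before transmission")
```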
Sensor Review | 2008
Lin Chi Mak; Mark Whitty; Tomonari Furukawa
Purpose – The purpose of this paper is to present a localisation system for an indoor rotary-wing micro aerial vehicle (MAV) that uses three on-board LEDs and a base station-mounted active vision unit.
Design/methodology/approach – A pair of blade-mounted cyan LEDs and a tail-mounted red LED are used as on-board landmarks. A base station tracks the landmarks and estimates the pose of the MAV in real time by analysing images taken with an active vision unit. In each image, the ellipse formed by the cyan LEDs is used for 5 degree-of-freedom (DoF) pose estimation, with yaw estimation from the red LED providing the 6th DoF.
Findings – Localisation errors of about 1-3.5 per cent at various ranges, rolls and angular speeds below 45°/s, relative to a base station at a known location, indicate that the MAV can be accurately localised at 9-12 Hz in an indoor environment.
Research limitations/implications – Line-of-sight between the base station and the MAV is necessary, while limited accuracy is evident in yaw esti...
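The ellipse-plus-tail-LED idea can be roughly sketched with OpenCV as below. The colour thresholds are guesses, and the step from the fitted ellipse to a metric 5-DoF pose (via the camera model and the known rotor radius) is omitted, so this illustrates only the detection stage rather than the paper's estimator.

```python
import numpy as np
import cv2

def detect_led_ellipse(image_bgr, lower_hsv, upper_hsv):
    """Segment LEDs of one colour and fit an ellipse to the disc traced by the
    blade-mounted LEDs; the ellipse constrains position and tilt."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pts = np.vstack(contours).reshape(-1, 2).astype(np.float32)
    if len(pts) < 5:          # cv2.fitEllipse needs at least five points
        return None
    return cv2.fitEllipse(pts)

def yaw_from_tail_led(ellipse_centre, red_centroid):
    """The red tail LED fixes the heading: yaw is taken as the image-plane
    bearing of the tail LED relative to the ellipse centre."""
    dy = red_centroid[1] - ellipse_centre[1]
    dx = red_centroid[0] - ellipse_centre[0]
    return np.degrees(np.arctan2(dy, dx))

# Illustrative cyan range in OpenCV's HSV convention (H in 0-179).
CYAN_LO, CYAN_HI = (80, 100, 100), (100, 255, 255)
```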
international conference on indoor positioning and indoor navigation | 2012
Dylan Campbell; Mark Whitty; Samsung Lim
Existing approaches for indoor mapping are often either time-consuming or inaccurate. This paper presents the Continuous Normal Distributions Transform (C-NDT), an efficient approach to 3D indoor mapping that balances acquisition time, completeness and accuracy by registering scans acquired from a rotating LiDAR sensor mounted on a moving vehicle. C-NDT uses the robust Normal Distributions Transform (NDT) algorithm for scan registration, ensuring that the mapping is independent of the long-term quality of the odometry. We demonstrate that C-NDT produces more accurate maps than stand-alone dead-reckoning, achieves better map completeness than static scanning and is at least an order of magnitude faster than existing static scanning methods.
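The core of NDT, summarising the reference cloud as one Gaussian per voxel and scoring a scan against it, can be sketched as follows. The pose optimisation over this score, and C-NDT's handling of the continuously rotating sensor, are omitted; the voxel size and data are illustrative.

```python
import numpy as np

def build_ndt_grid(reference, voxel=1.0, min_pts=5):
    """Represent the reference cloud as a mean and inverse covariance per voxel."""
    keys = np.floor(reference / voxel).astype(np.int64)
    grid = {}
    for key in np.unique(keys, axis=0):
        pts = reference[np.all(keys == key, axis=1)]
        if len(pts) >= min_pts:
            cov = np.cov(pts.T) + 1e-6 * np.eye(3)   # regularise thin voxels
            grid[tuple(key)] = (pts.mean(axis=0), np.linalg.inv(cov))
    return grid

def ndt_score(scan, grid, voxel=1.0):
    """Higher is better: sum of Gaussian likelihoods of scan points under the
    voxel Gaussians they fall into. Registration would maximise this score
    over a 6-DoF pose (e.g. with Newton steps), which is not shown here."""
    score = 0.0
    for p in scan:
        cell = grid.get(tuple(np.floor(p / voxel).astype(np.int64)))
        if cell is not None:
            mu, cov_inv = cell
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score

reference = np.random.randn(2000, 3) * 3.0
scan = reference[:500] + np.random.normal(scale=0.05, size=(500, 3))
print(ndt_score(scan, build_ndt_grid(reference)))
```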
international conference on robotics and automation | 2011
Mark Whitty; José E. Guivant
This paper presents an efficient approach to global path planning for multiple agents during large-scale map deformation. The problem of planning using dense data during large-scale map deformation is addressed by using a hybrid metric-topological planner that maintains locally consistent policies. These policies are cached, providing efficiency gains relative to alternate planning approaches that are characterized using complexity analysis. Simulation results show the effectiveness of this approach in handling notable map deformation while achieving good efficiency.
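A toy sketch of the topological layer: shortest-path search over a roadmap whose edge costs would, in the hybrid scheme, come from cached local policies over metric cost maps. The graph and costs below are invented for illustration.

```python
import heapq

def dijkstra(roadmap, source):
    """Shortest-path costs from source over the roadmap graph. In the hybrid
    scheme each roadmap node also caches a local policy over its metric cost
    map, so only caches touched by a map deformation need recomputing."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in roadmap[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy roadmap: node -> [(neighbour, traversal cost from its cached local policy)]
roadmap = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 2.0)],
    "D": [("B", 4.0), ("C", 2.0)],
}
print(dijkstra(roadmap, "A"))
```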
intelligent robots and systems | 2009
Mark Whitty; José E. Guivant
This paper presents a framework for efficient path planning in a deformable map. A roadmap and local cost maps are combined and integrated into a generic SLAM process to provide fast path querying for multiple sources and multiple destinations. Analysis of a simple deformation metric shows the ability of the framework to efficiently maintain a consistent plan during major map adjustment by updating the roadmap and selected local cost maps. Results from simulation verify the effectiveness of the framework in handling deformable maps in an efficient manner.
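One way the selective-update idea could look in code is sketched below: a local cost map is cached per roadmap node and rebuilt only when that node's pose has moved beyond a threshold after a SLAM map adjustment. The class, the threshold and the simple displacement metric are assumptions, not the paper's deformation metric.

```python
import numpy as np

class DeformableRoadmap:
    """Caches one local cost map per roadmap node; after a map adjustment,
    only the maps anchored at nodes that moved significantly are rebuilt."""

    def __init__(self, node_poses, threshold=0.2):
        self.poses = dict(node_poses)                      # node -> (x, y)
        self.threshold = threshold
        self.cost_maps = {n: self._compute_cost_map(n) for n in self.poses}

    def _compute_cost_map(self, node):
        # Placeholder for building a local cost map around the node.
        return np.zeros((32, 32))

    def apply_map_adjustment(self, new_poses):
        """Update node poses and rebuild only the affected local cost maps."""
        stale = [n for n, p in new_poses.items()
                 if np.linalg.norm(np.subtract(p, self.poses[n])) > self.threshold]
        self.poses.update(new_poses)
        for n in stale:
            self.cost_maps[n] = self._compute_cost_map(n)
        return stale

rm = DeformableRoadmap({"A": (0.0, 0.0), "B": (5.0, 0.0)})
print(rm.apply_map_adjustment({"A": (0.05, 0.0), "B": (5.8, 0.3)}))  # only B rebuilt
```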
International Journal of Micro Air Vehicles | 2009
Lin Chi Mak; Makoto Kumon; Mark Whitty; Jayantha Katupitiya; Tomonari Furukawa
This paper presents Micro Aerial Vehicles (MAVs) and their cooperative systems, including Unmanned Ground Vehicles (UGVs) and a Base Station (BS), which were primarily designed for the 1st US-Asian Demonstration and Assessment on Micro-Aerial and Unmanned Ground Vehicle Technology (MAV08). The MAVs are of coaxial design, which imparts mechanical stability both outdoors and indoors while obeying a 30 cm size constraint. They have carbon fibre frames for weight reduction, allowing microcontrollers and various sensors to be mounted on board for tele-operated and waypoint control. The UGVs are similarly equipped to perform their own search and tracking mission, and also to support the MAVs by relaying data between the MAVs and the BS when they are out of direct range. The BS monitors the vehicles and their environment and navigates them, autonomously or with humans in the loop, through the developed GUI. The flight capability of the MAVs was demonstrated through continuous hovering. The efficacy of the overall system was validated by autonomously controlling two UGVs in a cooperative search.
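For flavour, a minimal proportional waypoint controller of the kind such vehicles typically run is sketched below; the gains, thresholds and structure are illustrative only and are not the controllers used at MAV08.

```python
import numpy as np

def waypoint_command(position_xy, yaw, waypoint_xy, k_fwd=0.5, k_yaw=1.0, reach=0.3):
    """Return (forward speed, turn rate, reached) for one planar waypoint-following step."""
    dx, dy = np.subtract(waypoint_xy, position_xy)
    dist = float(np.hypot(dx, dy))
    if dist < reach:
        return 0.0, 0.0, True
    heading_error = np.arctan2(dy, dx) - yaw
    heading_error = np.arctan2(np.sin(heading_error), np.cos(heading_error))  # wrap to [-pi, pi]
    forward = k_fwd * dist if abs(heading_error) < np.pi / 2 else 0.0  # turn in place if facing away
    return forward, k_yaw * heading_error, False

print(waypoint_command((0.0, 0.0), 0.0, (2.0, 1.0)))
```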
european conference on mobile robots | 2013
Dylan Campbell; Mark Whitty
Kidnapping occurs when a robot is unaware that it has not correctly ascertained its position. As a result, the global map may be severely deformed and the robot may be unable to perform its function. This paper presents a metric-based technique for real-time kidnap detection that utilises a set of binary classifiers to identify all kidnapping events during the autonomous operation of a mobile robot. In contrast, existing techniques either solve specific cases of kidnapping, such as elevator motion, without addressing the general case, or remove dependence on local pose estimation entirely, an inefficient and computationally expensive approach. Four metrics were evaluated and the optimal thresholds for the most suitable metrics were determined, resulting in a combined detector that has a negligible probability of failing to identify kidnapping events and a low false positive rate for both indoor and outdoor environments. While this paper uses metrics specific to 3D point clouds, the approach can be generalised to other forms of data, including visual, provided that two independent ways of estimating pose are available.
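A stripped-down version of the threshold-on-metrics idea: two independent pose estimates (say, scan matching and wheel odometry) are compared and a kidnap is flagged when either discrepancy exceeds its threshold. The metrics and threshold values below are placeholders, not the tuned detectors evaluated in the paper.

```python
import numpy as np

def pose_discrepancy(T_a, T_b):
    """Translation (m) and rotation (rad) difference between two 4x4 pose estimates."""
    delta = np.linalg.inv(T_a) @ T_b
    trans = float(np.linalg.norm(delta[:3, 3]))
    rot = float(np.arccos(np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)))
    return trans, rot

def kidnap_detected(T_a, T_b, trans_thresh=0.5, rot_thresh=np.radians(15)):
    """Binary detector: flag a kidnap when either metric exceeds its threshold."""
    trans, rot = pose_discrepancy(T_a, T_b)
    return trans > trans_thresh or rot > rot_thresh

T_odom = np.eye(4)
T_scan = np.eye(4)
T_scan[:3, 3] = [1.2, 0.0, 0.0]           # scan matching jumped 1.2 m between updates
print(kidnap_detected(T_odom, T_scan))     # True -> possible kidnapping event
```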
Computers and Electronics in Agriculture | 2017
Scarlett Liu; Stephen Cossell; Julie Tang; Gregory Dunn; Mark Whitty
Highlights: A vision system for automated yield estimation and variation mapping is proposed. The proposed method produces an average F1-score of 0.90 over four experimental blocks. The developed shoot detection does not require manual labeling to build a classifier. The developed system requires only low-cost off-the-shelf image collection equipment. The best E-L stage for imaging shoots for yield estimation is around E-L stage 9.

Counting grapevine shoots early in the growing season is critical for adjusting management practices but is challenging to automate due to a range of environmental factors. This paper proposes a completely automatic system for grapevine yield estimation, comprising robust shoot detection and yield estimation based on shoot counts produced from videos. Experiments were conducted on four vine blocks across two cultivars and trellis systems over two seasons. A novel shoot detection framework is presented, including image processing, feature extraction, unsupervised feature selection and unsupervised learning as a final classification step. A procedure for converting shoot counts from videos to yield estimates is then introduced. The shoot detection framework accuracy was calculated to be 86.83%, with an F1-score of 0.90 across the four experimental blocks, and was shown to be robust in a range of lighting conditions in a commercial vineyard. The absolute yield estimation error of the system, when applied to four blocks over two consecutive years, ranged from 1.18% to 36.02% when the videos were filmed around E-L stage 9. The developed system has an advantage over traditional PCD mapping techniques in that yield variation maps can be obtained earlier in the season, thereby allowing farmers to adjust their management practices for improved outputs. The unsupervised feature selection algorithm combined with unsupervised learning removes the requirement for any prior training or labeling, greatly enhancing the applicability of the overall framework and allowing full automation of shoot mapping on a large scale in vineyards.
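A rough sketch of the unsupervised shoot-counting and yield-conversion idea follows, assuming simple green segmentation, a few blob features, k-means clustering and a hypothetical per-shoot yield factor; none of these reproduce the paper's feature extraction, feature selection or calibration.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def shoot_candidate_features(image_bgr):
    """Segment green regions and describe each blob by area, aspect ratio and
    height. These stand in for the paper's richer, automatically selected features."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # rough green range
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    feats = []
    for i in range(1, n):                                    # label 0 is the background
        x, y, w, h, area = stats[i]
        feats.append([area, w / max(h, 1), h])
    return np.array(feats, dtype=float)

def count_shoots(feats, n_clusters=2):
    """Cluster blobs without labels and treat the cluster with the larger mean
    area as 'shoot'; a stand-in for the paper's unsupervised classifier."""
    if len(feats) < n_clusters:
        return len(feats)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    shoot_cluster = max(range(n_clusters),
                        key=lambda c: feats[labels == c][:, 0].mean())
    return int(np.sum(labels == shoot_cluster))

def yield_estimate(shoot_count, kg_per_shoot=0.35):
    """Convert a shoot count to a yield estimate using a historical per-shoot
    yield factor; 0.35 kg/shoot is purely illustrative."""
    return shoot_count * kg_per_shoot
```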