Publication


Featured research published by Ruan Lakemond.


IEEE Transactions on Instrumentation and Measurement | 2012

A Mask-Based Approach for the Geometric Calibration of Thermal-Infrared Cameras

Stephen Vidas; Ruan Lakemond; Simon Denman; Clinton Fookes; Sridha Sridharan; Tim Wark

Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast, which does not require a flood lamp, is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm that utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
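
As a rough illustration of the point-detection stage described above, the sketch below uses OpenCV's MSER detector to find high-contrast blobs on a thermal calibration mask and returns their centroids. The synthetic frame, the blob positions and the omission of the clustering and ordering step are assumptions for illustration; this is not the authors' released code.

```python
import cv2
import numpy as np

def detect_mask_points(thermal_image):
    """Locate candidate calibration points on a high-contrast thermal mask.

    Uses MSER to find stable blobs and returns their centroids. The actual
    system clusters and orders these to match the known mask layout; that
    step is omitted here.
    """
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(thermal_image)
    return np.array([r.mean(axis=0) for r in regions], dtype=np.float32)

# Synthetic stand-in for a thermal frame: dark background with warm blobs.
frame = np.zeros((240, 320), dtype=np.uint8)
for cx, cy in [(80, 60), (160, 60), (240, 60), (80, 180), (160, 180), (240, 180)]:
    cv2.circle(frame, (cx, cy), 12, 255, -1)

points = detect_mask_points(frame)
print(points.shape[0], "candidate points detected")
# With these points matched to the known planar mask coordinates over several
# views, cv2.calibrateCamera recovers the intrinsics and lens distortion.
```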


Advanced Video and Signal Based Surveillance | 2009

Affine Adaptation of Local Image Features Using the Hessian Matrix

Ruan Lakemond; Clinton Fookes; Sridha Sridharan

Local feature detectors that make use of derivative-based saliency functions to locate points of interest typically require adaptation processes after initial detection in order to achieve scale and affine covariance. Affine adaptation methods have previously been proposed that make use of the second moment matrix to iteratively estimate the affine shape of local image regions. This paper shows that it is possible to use the Hessian matrix to estimate local affine shape in a similar fashion to the second moment matrix. The Hessian matrix requires significantly less effort to compute than the second moment matrix, allowing more efficient affine adaptation. It may also be more convenient to use the Hessian matrix, for example, when the Determinant of Hessian detector is used. Experimental evaluation shows that the Hessian matrix is very effective in increasing the efficiency of blob detectors such as the Determinant of Hessian detector, but less effective in combination with the Harris corner detector.
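
For context, the two matrices discussed in this abstract have the standard scale-space definitions below; the second moment matrix involves a windowed sum of first-derivative products, while the Hessian needs only the second derivatives at a single location, which is where the computational saving comes from.

```latex
% Second moment matrix (structure tensor) over a Gaussian window w:
\mu(\mathbf{x}) = \sum_{\mathbf{u}} w(\mathbf{u})
\begin{pmatrix}
  I_x^2(\mathbf{x}-\mathbf{u}) & I_x I_y(\mathbf{x}-\mathbf{u}) \\
  I_x I_y(\mathbf{x}-\mathbf{u}) & I_y^2(\mathbf{x}-\mathbf{u})
\end{pmatrix}
\qquad
% Hessian matrix from second derivatives at a single point:
H(\mathbf{x}) = \begin{pmatrix}
  I_{xx}(\mathbf{x}) & I_{xy}(\mathbf{x}) \\
  I_{xy}(\mathbf{x}) & I_{yy}(\mathbf{x})
\end{pmatrix}
```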


Journal of Mathematical Imaging and Vision | 2012

Hessian-Based Affine Adaptation of Salient Local Image Features

Ruan Lakemond; Sridha Sridharan; Clinton Fookes

Affine covariant local image features are a powerful tool for many applications, including matching and calibrating wide baseline images. Local feature extractors that use a saliency map to locate features require adaptation processes in order to extract affine covariant features. The most effective extractors make use of the second moment matrix (SMM) to iteratively estimate the affine shape of local image regions. This paper shows that the Hessian matrix can be used to estimate local affine shape in a similar fashion to the SMM. The Hessian matrix requires significantly less computation effort than the SMM, allowing more efficient affine adaptation. Experimental results indicate that using the Hessian matrix in conjunction with a feature extractor that selects features in regions with high second order gradients delivers equivalent quality correspondences in less than 17% of the processing time, compared to the same extractor using the SMM.
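
A minimal sketch of one Hessian-based shape estimate is given below, assuming standard Gaussian derivative filters; the full adaptation loop re-estimates the shape after warping the local patch, and the scale, random test image and normalisation details here are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_shape(image, x, y, sigma=2.0):
    """Estimate a local affine shape matrix from the Hessian at (x, y).

    One step of the adaptation only; the full method iterates this estimate
    after warping the local neighbourhood until the shape stabilises.
    """
    # Second-order Gaussian derivatives of the image.
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    H = np.array([[Ixx[y, x], Ixy[y, x]],
                  [Ixy[y, x], Iyy[y, x]]])
    # Shape matrix from the symmetric Hessian, normalised to unit determinant
    # (analogous to how the second moment matrix is used in adaptation).
    w, V = np.linalg.eigh(H)
    w = np.abs(w) + 1e-12
    A = V @ np.diag(w ** -0.5) @ V.T
    return A / np.sqrt(np.linalg.det(A))

rng = np.random.default_rng(0)
img = gaussian_filter(rng.standard_normal((128, 128)), 3.0)
print(hessian_shape(img, 64, 64))
```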


Digital Image Computing: Techniques and Applications | 2011

An Exploration of Feature Detector Performance in the Thermal-Infrared Modality

Stephen Vidas; Ruan Lakemond; Simon Denman; Clinton Fookes; Sridha Sridharan; Tim Wark

Thermal-infrared images have superior statistical properties compared with visible-spectrum images in many low-light or no-light scenarios. However, a detailed understanding of feature detector performance in the thermal modality lags behind that of the visible modality. To address this, the first comprehensive study on feature detector performance on thermal-infrared images is conducted. A dataset is presented which explores a total of ten different environments with a range of statistical properties. An investigation is conducted into the effects of several digital and physical image transformations on detector repeatability in these environments. The effect of non-uniformity noise, unique to the thermal modality, is analyzed. The accumulation of sensor non-uniformities beyond the minimum possible level was found to have only a small negative effect. A limiting of feature counts was found to improve the repeatability performance of several detectors. Most other image transformations had predictable effects on feature stability. The best-performing detector varied considerably depending on the nature of the scene and the test.
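
Repeatability in studies of this kind is usually reported with the standard score of Mikolajczyk et al.; the definition below is that common formulation, included here for context rather than quoted from the paper.

```latex
% Repeatability between a reference image and a transformed image related
% by a known transformation H: correspondences are detections whose
% regions overlap sufficiently (e.g. overlap error below 40%) after
% mapping through H.
\text{repeatability} =
  \frac{\#\{\text{corresponding detections under } H\}}
       {\min\left(n_{\text{reference}},\, n_{\text{transformed}}\right)}
```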


International Scholarly Research Notices | 2013

Resection-Intersection Bundle Adjustment Revisited

Ruan Lakemond; Clinton Fookes; Sridha Sridharan

Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time) than a localised sparse bundle adjustment approach.
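
A bare-bones resection-intersection loop can be sketched with OpenCV, alternating per-camera pose refinement (resection) with per-point triangulation (intersection). This is an illustration of the general scheme only; the paper's contributions (the added linear triangulation refinement and the tracking of parameter updates) are not reproduced, and the data layout is an assumption.

```python
import cv2
import numpy as np

def resection_intersection(obs, points3d, K, rvecs, tvecs, n_iters=10):
    """Alternate resection (per camera) and intersection (per point).

    obs[c][p]: observed 2D location of point p in camera c, or None.
    rvecs, tvecs: lists of 3x1 rotation and translation vectors per camera.
    """
    n_cams = len(rvecs)
    for _ in range(n_iters):
        # Resection: refine each camera pose from its visible 3D points.
        for c in range(n_cams):
            vis = [p for p in range(len(points3d)) if obs[c][p] is not None]
            obj = np.float32([points3d[p] for p in vis])
            img = np.float32([obs[c][p] for p in vis])
            _, rvecs[c], tvecs[c] = cv2.solvePnP(
                obj, img, K, None, rvecs[c], tvecs[c],
                useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)
        # Intersection: re-triangulate each point from two observing cameras.
        projs = [K @ np.hstack([cv2.Rodrigues(rvecs[c])[0], tvecs[c]])
                 for c in range(n_cams)]
        for p in range(len(points3d)):
            cams = [c for c in range(n_cams) if obs[c][p] is not None]
            if len(cams) < 2:
                continue
            c0, c1 = cams[0], cams[1]
            x4 = cv2.triangulatePoints(
                projs[c0], projs[c1],
                np.float32(obs[c0][p]).reshape(2, 1),
                np.float32(obs[c1][p]).reshape(2, 1))
            points3d[p] = (x4[:3] / x4[3]).ravel()
    return points3d, rvecs, tvecs
```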


Image and Vision Computing | 2013

Evaluation of two-view geometry methods with automatic ground-truth generation

Ruan Lakemond; Clinton Fookes; Sridha Sridharan

A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale scenes or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process or any part thereof against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
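
As one concrete example of the final evaluation step, a fundamental matrix produced by a test two-view method can be scored against ground-truth correspondences with the symmetric epipolar distance; the helper below and its toy translation example are illustrative assumptions, not the paper's exact scoring code.

```python
import numpy as np

def symmetric_epipolar_error(F, pts1, pts2):
    """Mean symmetric epipolar distance of ground-truth matches under F.

    pts1, pts2: Nx2 arrays of corresponding points (ground truth).
    """
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    l2 = x1 @ F.T          # epipolar lines in image 2 (F x1)
    l1 = x2 @ F            # epipolar lines in image 1 (F^T x2)
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return float(np.mean(d1 + d2))

# Toy check: for a pure x-translation, F is the skew matrix of (1, 0, 0) and
# correspondences shifted along x lie exactly on their epipolar lines.
F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
pts1 = np.array([[10., 20.], [30., 40.]])
pts2 = pts1 + np.array([5., 0.])
print(symmetric_epipolar_error(F, pts1, pts2))  # ~0
```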


Digital Image Computing: Techniques and Applications | 2011

Negative Determinant of Hessian Features

Ruan Lakemond; Clinton Fookes; Sridha Sridharan

Local image feature extractors that select local maxima of the determinant of Hessian function have been shown to perform well and are widely used. This paper introduces the negative local minima of the determinant of Hessian function for local feature extraction. The properties and scale-space behaviour of these features are examined and found to be desirable for feature extraction. It is shown how this new feature type can be implemented along with the existing local maxima approach at negligible extra processing cost. Applications to affine covariant feature extraction and sub-pixel precise corner extraction are demonstrated. Experimental results indicate that the new corner detector is more robust to image blur and noise than existing methods. It is also accurate for a broader range of corner geometries. An affine covariant feature extractor is implemented by combining the minima of the determinant of Hessian with existing scale and shape adaptation methods. This extractor can be implemented alongside the existing Hessian maxima extractor simply by finding both minima and maxima during the initial extraction stage. The minima features increase the number of correspondences twofold to fourfold. The additional minima features are very distinct from the maxima features in descriptor space and do not make the matching process more ambiguous.
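
A minimal sketch of extracting both maxima and negative minima from the same determinant-of-Hessian response map is given below; the scale, threshold and random test image are assumptions, and the paper's sub-pixel refinement and affine adaptation are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def det_hessian_features(image, sigma=2.0, threshold=1e-4):
    """Return (maxima, minima) coordinates of the determinant of Hessian.

    Negative local minima are the additional feature type described in the
    abstract above; maxima are the conventional DoH features.
    """
    # Scale-normalised second-order Gaussian derivatives.
    Ixx = gaussian_filter(image, sigma, order=(0, 2)) * sigma ** 2
    Iyy = gaussian_filter(image, sigma, order=(2, 0)) * sigma ** 2
    Ixy = gaussian_filter(image, sigma, order=(1, 1)) * sigma ** 2
    doh = Ixx * Iyy - Ixy ** 2

    # Local maxima above threshold and local minima below -threshold both
    # come from the same response map, so the extra cost is small.
    maxima = (doh == maximum_filter(doh, size=3)) & (doh > threshold)
    minima = (doh == minimum_filter(doh, size=3)) & (doh < -threshold)
    return np.argwhere(maxima), np.argwhere(minima)

rng = np.random.default_rng(0)
img = gaussian_filter(rng.standard_normal((256, 256)), 2.0)
peaks, dips = det_hessian_features(img)
print(len(peaks), "maxima,", len(dips), "minima")
```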


International Conference on Signal Processing and Communication Systems | 2012

Efficient real-time face detection for high resolution surveillance applications

Xin Cheng; Ruan Lakemond; Clinton Fookes; Sridha Sridharan

This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640×480-pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
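
The region pre-selection idea can be illustrated with OpenCV's standard building blocks; the background subtractor, the YCrCb skin-colour thresholds and the cascade file below are assumptions chosen for illustration, not the exact components or parameters of the paper.

```python
import cv2

# Illustrative pipeline: restrict a Viola-Jones cascade to regions that are
# both foreground and skin-coloured, instead of scanning the whole frame.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def detect_faces(frame):
    fg_mask = bg_subtractor.apply(frame)                          # foreground
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin range
    candidate = cv2.bitwise_and(fg_mask, skin_mask)

    faces = []
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                                           # skip tiny regions
            continue
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 3):
            faces.append((x + fx, y + fy, fw, fh))                # frame coordinates
    return faces
```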


Computer Vision and Image Understanding | 2012

Self-calibration of wireless cameras with restricted degrees of freedom

Junbin Liu; Tim Wark; Ruan Lakemond; Sridha Sridharan

This paper presents an approach for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera’s optical center and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. Previous methods for auto-calibration of cameras based on pure rotations fail to work in these two degenerate cases. In addition, our approach includes a modified RANdom SAmple Consensus (RANSAC) algorithm, as well as improved integration of the radial distortion coefficient in the computation of inter-image homographies. We show that these modifications are able to increase the overall efficiency, reliability and accuracy of the homography computation and calibration procedure using both synthetic and real image sequences.
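
The geometry underlying this style of self-calibration is the textbook relation for a camera rotating about its optical centre: the two views are linked by the infinite homography, from which the intrinsics and rotation can be recovered. The form below is stated for context; the paper's handling of the degenerate pan/tilt cases and of radial distortion is not captured by it.

```latex
% Two images from the same camera rotating about its optical centre are
% related by the infinite homography
H \;=\; K \, R \, K^{-1},
\qquad
K = \begin{pmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{pmatrix},
% where R is the rotation (pure pan or tilt in this setting). Estimating H
% from point matches then constrains f, (u_0, v_0) and the rotation angle.
```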


Digital Image Computing: Techniques and Applications | 2011

Practical Improvements to Simultaneous Computation of Multi-view Geometry and Radial Lens Distortion

Ruan Lakemond; Clinton Fookes; Sridha Sridharan

This paper discusses practical issues related to the use of the division model for lens distortion in multi-view geometry computation. A data normalisation strategy is presented, which has been absent from previous discussions on the topic. The convergence properties of the Rectangular Quadric Eigenvalue Problem solution for computing division model distortion are examined. It is shown that the existing method can require more than 1000 iterations when dealing with severe distortion. A method is presented for accelerating convergence to less than 10 iterations for any amount of distortion. The new method is shown to produce equivalent or better results than the existing method with up to two orders of magnitude reduction in iterations. Through detailed simulation it is found that the number of data points used to compute geometry and lens distortion has a strong influence on convergence speed and solution accuracy. It is recommended that more than the minimal number of data points be used when computing geometry using a robust estimator such as RANSAC. Adding two to four extra samples improves the convergence rate and accuracy sufficiently to compensate for the increased number of samples required by the RANSAC process.
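
For reference, the division model mentioned here is the standard single-parameter model; the formula below is its usual form, given for context (the normalisation strategy and the accelerated solver are specific to the paper and not shown).

```latex
% Division model for radial lens distortion with parameter \lambda:
% a distorted point \mathbf{x}_d (relative to the distortion centre) is
% undistorted as
\mathbf{x}_u \;=\; \frac{\mathbf{x}_d}{1 + \lambda\, \lVert \mathbf{x}_d \rVert^2}
```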

Collaboration


Dive into Ruan Lakemond's collaborations.

Top Co-Authors

Sridha Sridharan, Queensland University of Technology
Clinton Fookes, Queensland University of Technology
Simon Denman, Queensland University of Technology
Tim Wark, Commonwealth Scientific and Industrial Research Organisation
Stuart Morgan, Australian Institute of Sport
Stephen Vidas, Nanyang Technological University
David McKinnon, University of Queensland
Junbin Liu, Queensland University of Technology
Damien O'Rourke, Commonwealth Scientific and Industrial Research Organisation
Daniel Chen, Queensland University of Technology