Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrew D. Payne is active.

Publication


Featured research published by Andrew D. Payne.


Proceedings of SPIE | 2011

Separating true range measurements from multi-path and scattering interference in commercial range cameras

Adrian A. Dorrington; John Peter Godbaz; Michael J. Cree; Andrew D. Payne; Lee V. Streeter

Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
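The mixed-pixel model behind this separation can be illustrated with a toy fit. The sketch below is not the paper's algorithm (which operates on additional raw camera frames with a more sophisticated optimization); it assumes complex measurements at several modulation frequencies are available, brute-force searches two return distances, and solves the component amplitudes by least squares. All frequencies and grid values are illustrative.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def separate_two_returns(freqs, meas, d_grid):
    """Fit a two-return mixed-pixel model to complex AMCW measurements.

    The measurement at modulation frequency f is modelled as the sum of
    two component phasors:
        m(f) = a1*exp(j*4*pi*f*d1/C) + a2*exp(j*4*pi*f*d2/C)
    Distances are grid-searched; amplitudes solved by linear least squares.
    """
    best = None
    for i, d1 in enumerate(d_grid):
        for d2 in d_grid[i + 1:]:
            A = np.stack([np.exp(1j * 4 * np.pi * freqs * d1 / C),
                          np.exp(1j * 4 * np.pi * freqs * d2 / C)], axis=1)
            amps, *_ = np.linalg.lstsq(A, meas, rcond=None)
            resid = meas - A @ amps
            err = np.vdot(resid, resid).real
            if best is None or err < best[0]:
                best = (err, d1, d2)
    return best[1], best[2]  # estimated distances of the two returns
```

With simulated measurements containing a strong return at 1.0 m and a weaker scattered return at 2.5 m, the fit recovers both distances.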


Measurement Science and Technology | 2007

Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

Adrian A. Dorrington; Michael J. Cree; Andrew D. Payne; Richard M. Conroy; Dale A. Carnegie

We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
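The heterodyne principle described above can be sketched in a few lines: each pixel's low-frequency beat signal is sampled over one cycle, and its phase, which encodes the round-trip delay, is recovered from the fundamental DFT bin. The modulation frequency and sample count below are assumptions, not the paper's values.

```python
import numpy as np

C = 299792458.0   # speed of light (m/s)
F_MOD = 40e6      # assumed illumination modulation frequency (Hz)

def decode_heterodyne(beat_samples):
    """Recover the modulation-envelope phase from one beat cycle.

    Heterodyning the illumination against a slightly offset sensor gain
    frequency shifts the propagation phase onto a low-frequency beat
    signal; the phase of its fundamental DFT bin is the range phase.
    """
    fundamental = np.fft.rfft(beat_samples)[1]
    return np.angle(fundamental) % (2 * np.pi)

def phase_to_range(phase):
    # light travels to the target and back: d = C * phase / (4*pi*f_mod)
    return C * phase / (4 * np.pi * F_MOD)
```

The unambiguous range is C/(2*F_MOD), about 3.7 m at the assumed 40 MHz, which is why an ambiguity-resolution method such as the one mentioned in the abstract is needed at longer distances.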


IEEE Transactions on Instrumentation and Measurement | 2011

Analysis of Errors in ToF Range Imaging With Dual-Frequency Modulation

Adrian P. P. Jongenelen; Donald G. Bailey; Andrew D. Payne; Adrian A. Dorrington; Dale A. Carnegie

Range imaging is a technology that utilizes an amplitude-modulated light source and gain-modulated image sensor to simultaneously produce distance and intensity data for all pixels of the sensor. The precision of such a system is, in part, dependent on the modulation frequency. There is typically a tradeoff between precision and maximum unambiguous range. Research has shown that, by taking two measurements at different modulation frequencies, the unambiguous range can be extended without compromising distance precision. In this paper, we present an efficient method for combining two distance measurements obtained using different modulation frequencies. The behavior of the method in the presence of noise has been investigated to determine the expected error rate. In addition, we make use of the signal amplitude to improve the precision of the combined distance measurement. Simulated results compare well to actual data obtained using a system based on the PMD19k range image sensor.
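The dual-frequency idea can be illustrated with a minimal sketch. The paper presents an efficient combination method; the version below simply brute-forces the wrap counts at which the two candidate distances agree, and omits the amplitude weighting. The two frequencies are illustrative, not the paper's.

```python
import numpy as np

C = 299792458.0
F1, F2 = 30e6, 20e6                   # assumed modulation frequencies (Hz)
U1, U2 = C / (2 * F1), C / (2 * F2)   # per-frequency unambiguous ranges

def combine(phi1, phi2, max_wraps=10):
    """Combine two wrapped phase measurements into one extended range.

    Each phase gives a distance modulo its unambiguous range; search the
    wrap counts for the pair of candidate distances that agree best, and
    average them. The combined unambiguous range is C / (2 * gcd(F1, F2)).
    """
    d1 = phi1 / (2 * np.pi) * U1
    d2 = phi2 / (2 * np.pi) * U2
    best = None
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1, c2 = d1 + n1 * U1, d2 + n2 * U2
            err = abs(c1 - c2)
            if best is None or err < best[0]:
                best = (err, (c1 + c2) / 2)
    return best[1]
```

For 30 and 20 MHz the combined unambiguous range is about 15 m, versus roughly 5 m and 7.5 m for the individual frequencies.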


Machine Vision Applications | 2010

Resolving depth-measurement ambiguity with commercially available range imaging cameras

Shane H. McClure; Michael J. Cree; Adrian A. Dorrington; Andrew D. Payne

Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum useable range of the camera. This paper proposes a procedure to resolve depth ambiguity using software post processing. First, the range data is processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. This proposed method reduces the sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches. This approach is very flexible as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping the artifacts caused by moving objects at a minimum. This makes it suitable for applications such as robot vision where the camera may be moving during captures. The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by a distance of exactly one non-ambiguous range. Overall the reliability of this method is higher than the basic threshold approach, but not as high as the multiple frequency method of resolving ambiguity.
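The intensity-based classification step can be sketched as follows, assuming the segmentation labels are already available. The 1/d^2 falloff model and the 0.5 threshold are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def resolve_ambiguity(depth, intensity, labels, u_range, falloff_k):
    """Shift objects whose intensity is far below the 1/d^2 expectation.

    depth, intensity, labels: per-pixel arrays; labels assigns each pixel
    to a segmented object. Objects much dimmer than the falloff model
    predicts at their measured depth are assumed to lie one unambiguous
    interval further away.
    """
    out = depth.astype(float).copy()
    for lab in np.unique(labels):
        m = labels == lab
        d_mean = depth[m].mean()
        expected = falloff_k / d_mean ** 2        # hypothetical falloff model
        if intensity[m].mean() < 0.5 * expected:  # hypothetical threshold
            out[m] += u_range
    return out
```

Classifying on the per-object average, rather than per pixel, is what reduces the sensitivity to individual very bright or very dark pixels compared with a basic threshold approach.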


Machine Vision Applications | 2008

Video-rate or high-precision: a flexible range imaging camera

Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie; Andrew D. Payne; Richard M. Conroy; John Peter Godbaz; Adrian P. P. Jongenelen

A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel), high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.


Electronic Imaging | 2008

Improved linearity using harmonic error rejection in a full-field range imaging system

Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie

Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
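The arctangent decode referred to above generalises naturally to N samples: the range phase is the angle of the fundamental DFT bin of the sampled correlation waveform. The generic decode below is not the paper's harmonic-cancelling sampling scheme, but it shows the structure the paper modifies.

```python
import numpy as np

def decode_phase(samples):
    """N-sample AMCW phase decode.

    The correlation waveform is sampled at N evenly spaced phase offsets;
    the range phase is the angle of its fundamental DFT bin. For N = 4
    this reduces to the classic atan2(a3 - a1, a0 - a2) formula.
    """
    n = len(samples)
    k = np.arange(n)
    fundamental = np.sum(samples * np.exp(-2j * np.pi * k / n))
    return np.angle(fundamental) % (2 * np.pi)
```

A harmonic h of the correlation waveform aliases onto the fundamental bin only when h is congruent to plus or minus 1 modulo N, which is why raising the number of samples per period pushes the first interfering harmonic higher.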


Applied Optics | 2010

Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras

Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie

Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This removes the need to measure the system's response or to calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
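The duty-cycle claim can be checked with a one-line Fourier calculation: for a rectangular illumination waveform at fixed average optical power (so peak power scales inversely with duty cycle), the amplitude of the modulation fundamental grows as the duty cycle shrinks. This is a generic illustration of the effect, not the paper's analysis.

```python
import numpy as np

def fundamental_amplitude(duty, p_avg=1.0):
    """Fundamental Fourier amplitude of a rectangular illumination
    waveform with the given duty cycle, at fixed average optical power
    (so the peak power is p_avg / duty)."""
    peak = p_avg / duty
    return (2 * peak / np.pi) * np.sin(np.pi * duty)

for duty in (0.5, 0.3, 0.1):
    print(f"duty cycle {duty:.1f} -> fundamental amplitude "
          f"{fundamental_amplitude(duty):.3f}")
```

Shrinking the duty cycle from 50% to 10% raises the fundamental amplitude from 4/pi (about 1.27) to roughly 1.97 at the same average power, which is the mechanism behind the precision gain.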


Proceedings of SPIE | 2009

Characterization of modulated time-of-flight range image sensors

Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie

A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10-100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
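Once the sensor's gain waveform has been sampled by stepping a short laser pulse through the modulation period, its harmonic content falls out of a DFT. A minimal sketch, assuming the sampled response is already in hand:

```python
import numpy as np

def harmonic_content(gain_waveform):
    """Relative harmonic amplitudes of a sampled sensor gain waveform.

    gain_waveform: the sensor response sampled by stepping a short laser
    pulse through one modulation period. Returns the 2nd..5th harmonic
    amplitudes normalised to the fundamental.
    """
    spectrum = np.abs(np.fft.rfft(gain_waveform))
    return spectrum[2:6] / spectrum[1]
```

For an ideal 50% duty-cycle square-wave gain, the even harmonics vanish and the third harmonic sits near one third of the fundamental; deviations from that pattern flag waveform-shape or duty-cycle problems of the kind the abstract describes.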


Computational Intelligence for Modelling, Control and Automation | 2008

A Clustering Based Denoising Technique for Range Images of Time of Flight Cameras

Holger Schöner; Bernhard Moser; Adrian A. Dorrington; Andrew D. Payne; Michael J. Cree; Bettina Heise; Frank Bauer

A relatively new technique for measuring the 3D structure of visual scenes is provided by time of flight (TOF) cameras. Reflections of modulated light waves are recorded by a parallel pixel array structure. The time series at each pixel of the resulting image stream is used to estimate travelling time and thus range information. This measuring technique results in pixel dependent noise levels with variances changing over several orders of magnitude dependent on the illumination and material parameters. This makes application of traditional (global) denoising techniques suboptimal. Using additional information freely available from the camera and a clustering procedure, we can determine which pixels belong to the same object and estimate their noise levels, which allows for locally adapted smoothing. To illustrate the success of this method, we compare it with raw camera output and a traditional method for edge preserving smoothing, anisotropic diffusion. We show that this mathematical technique works without individual adaptations on two camera systems with highly different noise characteristics.
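The locally adapted smoothing step can be sketched under the assumption that cluster labels and per-pixel noise variances have already been estimated: each cluster is smoothed with an inverse-variance weighted mean, so noisy pixels contribute less. This is a simplified stand-in for the paper's method, which smooths adaptively rather than collapsing each cluster to one value.

```python
import numpy as np

def cluster_denoise(depth, variance, labels):
    """Inverse-variance weighted smoothing within each cluster.

    Pixels sharing a cluster label are assumed to belong to one object;
    each pixel's depth is replaced by the cluster's weighted mean, so
    high-variance (noisy) pixels contribute less to the estimate.
    """
    out = np.empty_like(depth, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        w = 1.0 / variance[m]
        out[m] = np.sum(w * depth[m]) / np.sum(w)
    return out
```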


Image and Vision Computing New Zealand | 2009

Development and characterisation of an easily configurable range imaging system

Adrian P. P. Jongenelen; Dale A. Carnegie; Andrew D. Payne; Adrian A. Dorrington

Range imaging is becoming a popular tool for many applications, with several commercial variants now available. These systems find numerous real world applications such as interactive gaming and the automotive industry. This paper describes the development of a range imaging system employing the PMD-19k sensor from PMD Technologies. One specific advantage of our system is that it is extremely customisable in terms of modulation patterns to act as a platform for further research into time-of-flight range imaging. Experimental results are presented giving an indication of the precision and accuracy of the system, and how modifying certain operating parameters can improve system performance.

Collaboration


Dive into Andrew D. Payne's collaborations.

Top Co-Authors

Dale A. Carnegie

Victoria University of Wellington

View shared research outputs

Adrian P. P. Jongenelen

Victoria University of Wellington

View shared research outputs