Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Adrian A. Dorrington is active.

Publication


Featured research published by Adrian A. Dorrington.


Proceedings of SPIE | 2011

Separating true range measurements from multi-path and scattering interference in commercial range cameras

Adrian A. Dorrington; John Peter Godbaz; Michael J. Cree; Andrew D. Payne; Lee V. Streeter

Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.


Measurement Science and Technology | 2007

Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

Adrian A. Dorrington; Michael J. Cree; Andrew D. Payne; Richard M. Conroy; Dale A. Carnegie

We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulated intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m have been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
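The heterodyne principle described above (the pixel intensity oscillates at a low beat frequency whose phase encodes range) can be sketched numerically. The modulation frequency, sample count, and DFT-based decoder below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 40e6   # illumination modulation frequency (illustrative, not from the paper)
N = 8          # pixel samples captured over one beat cycle

def beat_samples(distance, amplitude=1.0, offset=2.0):
    """Simulate N samples of one pixel over a single beat cycle.

    In heterodyne operation the pixel intensity oscillates at the low beat
    frequency, and the phase of that envelope carries the range information."""
    phase = 4 * np.pi * F_MOD * distance / C   # round-trip phase delay
    k = np.arange(N)
    return offset + amplitude * np.cos(2 * np.pi * k / N + phase)

def decode_distance(samples):
    """Recover the envelope phase from the first DFT bin, then convert to range."""
    phase = np.angle(np.fft.fft(samples)[1]) % (2 * np.pi)
    return phase * C / (4 * np.pi * F_MOD)
```

With more than four samples per beat cycle, the higher DFT bins absorb modulation harmonics rather than aliasing them into the phase estimate, which is one reason the authors report better linearity for heterodyne operation than for four-sample homodyne decoding.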


Proceedings of SPIE | 2012

Closed-form Inverses for the Mixed Pixel/Multipath Interference Problem in AMCW Lidar

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

We present two new closed-form methods for mixed pixel/multipath interference separation in AMCW lidar systems. The mixed pixel/multipath interference problem arises from the violation of a standard range-imaging assumption that each pixel integrates over only a single, discrete backscattering source. While a numerical inversion method has previously been proposed, no closed-form inverses have previously been posited. The first new method models reflectivity as a Cauchy distribution over range and uses four measurements at different modulation frequencies to determine the amplitude, phase and reflectivity distribution of up to two component returns within each pixel. The second new method uses attenuation ratios to determine the amplitude and phase of up to two component returns within each pixel. The methods are tested on both simulated and real data and shown to produce a significant improvement in overall error. While this paper focusses on the AMCW mixed pixel/multipath interference problem, the algorithms contained herein have applicability to the reconstruction of a sparse one-dimensional signal from an extremely limited number of discrete samples of its Fourier transform.


IEEE Transactions on Instrumentation and Measurement | 2011

Analysis of Errors in ToF Range Imaging With Dual-Frequency Modulation

Adrian P. P. Jongenelen; Donald G. Bailey; Andrew D. Payne; Adrian A. Dorrington; Dale A. Carnegie

Range imaging is a technology that utilizes an amplitude-modulated light source and gain-modulated image sensor to simultaneously produce distance and intensity data for all pixels of the sensor. The precision of such a system is, in part, dependent on the modulation frequency. There is typically a tradeoff between precision and maximum unambiguous range. Research has shown that, by taking two measurements at different modulation frequencies, the unambiguous range can be extended without compromising distance precision. In this paper, we present an efficient method for combining two distance measurements obtained using different modulation frequencies. The behavior of the method in the presence of noise has been investigated to determine the expected error rate. In addition, we make use of the signal amplitude to improve the precision of the combined distance measurement. Simulated results compare well to actual data obtained using a system based on the PMD19k range image sensor.
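The dual-frequency combination the paper describes can be illustrated with a brute-force sketch: each modulation frequency yields a distance wrapped to its own unambiguous interval, and the pair of wrapped measurements is consistent at only one distance inside the extended range. The frequencies and the simple wrap-count search below are illustrative assumptions, not the paper's method:

```python
C = 3e8  # speed of light (m/s)

def unambiguous_range(f_mod):
    """Half-wavelength ambiguity interval for a modulation frequency."""
    return C / (2 * f_mod)

def combine_distances(d1, d2, f1, f2, max_range):
    """Find the distance within max_range consistent with both wrapped
    measurements d1 (taken at f1) and d2 (taken at f2) by searching wrap counts."""
    u1, u2 = unambiguous_range(f1), unambiguous_range(f2)
    best_err, best_d = float("inf"), None
    for n1 in range(int(max_range / u1) + 1):
        c1 = d1 + n1 * u1            # candidate distance implied by f1
        n2 = round((c1 - d2) / u2)   # nearest wrap count for f2
        if n2 < 0:
            continue
        c2 = d2 + n2 * u2
        if abs(c1 - c2) < best_err:
            best_err, best_d = abs(c1 - c2), (c1 + c2) / 2
    return best_d
```

For example, with f1 = 40 MHz and f2 = 30 MHz the individual intervals are 3.75 m and 5 m, but the combined unambiguous range extends to c / (2 · gcd(f1, f2)) = 15 m.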


Machine Vision and Applications | 2010

Resolving depth-measurement ambiguity with commercially available range imaging cameras

Shane H. McClure; Michael J. Cree; Adrian A. Dorrington; Andrew D. Payne

Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum usable range of the camera. This paper proposes a procedure to resolve depth ambiguity using software post-processing. First, the range data is processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. The proposed method reduces sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches. The approach is also very flexible, as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping the artifacts caused by moving objects to a minimum. This makes it suitable for applications such as robot vision, where the camera may be moving during captures. The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by exactly one non-ambiguous range. Overall, the reliability of this method is higher than the basic threshold approach, but not as high as the multiple frequency method of resolving ambiguity.
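The intensity check at the heart of this procedure can be sketched as follows: returned intensity falls off with distance, so an object whose wrapped range reads short but whose intensity is far below what that range predicts likely lies beyond the non-ambiguous interval. The inverse-square model, calibration constant, and threshold here are illustrative assumptions, not the authors' calibrated values:

```python
def disambiguate(objects, unambiguous_range, k=1.0, threshold=0.3):
    """Resolve wrapped ranges for segmented objects.

    objects: list of (mean_range, mean_intensity), one entry per segmented object.
    k: hypothetical calibration constant; expected intensity ~ k / range**2.
    threshold: fraction of expected intensity below which an object is
    assumed to lie one ambiguity interval further away."""
    resolved = []
    for rng, intensity in objects:
        expected = k / rng**2
        if intensity < threshold * expected:
            rng += unambiguous_range  # object is too dim for its apparent range
        resolved.append(rng)
    return resolved
```

Averaging intensity per segmented object, rather than thresholding raw pixels, is what reduces sensitivity to individual very bright or very dark surfaces, which the paper identifies as the key weakness of basic threshold approaches.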


Remote Sensing | 2011

Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape and mixed pixels/multipath interference. We also introduce other sources of non-linearity, including crosstalk, modulation waveform envelope decay and non-circularly symmetric noise statistics, that have been ignored in the literature. An experimental study is conducted to evaluate techniques for mitigation of non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.


Machine Vision and Applications | 2008

Video-rate or high-precision: a flexible range imaging camera

Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie; Andrew D. Payne; Richard M. Conroy; John Peter Godbaz; Adrian P. P. Jongenelen

A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, and hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel), high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and acquisition length are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.


Proceedings of SPIE | 2009

Range imager performance comparison in homodyne and heterodyne operating modes

Richard M. Conroy; Adrian A. Dorrington; Rainer Künnemeyer; Michael J. Cree

Range imaging cameras measure depth simultaneously for every pixel in a given field of view. In most implementations the basic operating principles are the same: a scene is illuminated with an intensity modulated light source and the reflected signal is sampled using a gain-modulated imager. Previously we presented a unique heterodyne range imaging system that employed a bulky and power-hungry image intensifier as the high-speed gain-modulation mechanism. In this paper we present a new range imager using an internally modulated image sensor that is designed to operate in heterodyne mode, but can also operate in homodyne mode. We discuss homodyne and heterodyne range imaging, and the merits of the various types of hardware used to implement these systems. Following this we describe in detail the hardware and firmware components of our new ranger. We experimentally compare the two operating modes and demonstrate that heterodyne operation is less sensitive to some of the limitations suffered in homodyne mode, resulting in better linearity and ranging precision characteristics. We conclude by showing various qualitative examples that demonstrate the system's three-dimensional measurement performance.


Archive | 2013

Understanding and Ameliorating Mixed Pixels and Multipath Interference in AMCW Lidar

John Peter Godbaz; Adrian A. Dorrington; Michael J. Cree

Amplitude modulated continuous wave (AMCW) lidar systems suffer from significant systematic errors due to mixed pixels and multipath interference. Commercial systems can achieve centimetre precision; however, accuracy is typically an order of magnitude worse, limiting practical use of these devices. In this chapter the authors address AMCW measurement formation and the causes of mixed pixels and multipath interference. A comprehensive review of the literature is given, from the first reports of mixed pixels in point-scanning AMCW systems through to the gamut of research over the past two decades into mixed pixels and multipath interference. An overview is presented of a variety of detection and mitigation techniques, including deconvolution-based intra-camera scattering reduction, modelling of intra-scene scattering, correlation waveform deconvolution techniques/multifrequency sampling and standard removal approaches, all of which can be applied to range data from standard commercial cameras. The chapter concludes with comments on future work for better detection and correction of systematic errors in full-field AMCW lidar.


Machine Vision and Applications | 2010

Multiple range imaging camera operation with minimal performance impact

Refael Whyte; Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree

Time-of-flight range imaging cameras operate by illuminating a scene with amplitude modulated light and measuring the phase shift of the modulation envelope between the emitted and reflected light. Object distance can then be calculated from this phase measurement. This approach does not work in multiple camera environments, as the measured phase is corrupted by the illumination from other cameras. To minimize inaccuracies in multiple camera environments, replacing the traditional cyclic modulation with pseudo-noise amplitude modulation has previously been demonstrated. However, this technique effectively reduces the modulation frequency, thereby decreasing the distance measurement precision (which is proportional to the modulation frequency). A new modulation scheme is presented, in which maximum-length pseudo-random sequences are binary-phase encoded onto the existing cyclic amplitude modulation. The effective modulation frequency therefore remains unchanged, providing range measurements with high precision. The effectiveness of the new modulation scheme was verified using a custom time-of-flight camera based on the PMD19-K2 range imaging sensor. The new pseudo-noise modulation shows no significant performance decrease in a single camera environment. In a two camera environment, precision is reduced only by the increased photon shot noise from the second illumination source.
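The modulation idea can be sketched in code: a maximum-length sequence, generated by a linear-feedback shift register, is binary-phase encoded onto the cyclic carrier by flipping its sign chip by chip, so the carrier frequency, and hence precision, is preserved. The register length, taps, and samples-per-chip below are illustrative choices, not the values used with the PMD19-K2 camera:

```python
import numpy as np

def mls(register_len=5, taps=(5, 3)):
    """Maximum-length sequence from a Fibonacci LFSR (polynomial x^5 + x^3 + 1),
    returned as chips in {+1, -1}."""
    state = [1] * register_len
    chips = []
    for _ in range(2**register_len - 1):
        chips.append(1 - 2 * state[-1])  # output bit mapped to +/-1
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]  # shift register
    return np.array(chips)

def encoded_modulation(chips, samples_per_chip=16):
    """Binary-phase encode the chip sequence onto a cyclic carrier:
    each chip spans one full modulation cycle and flips its sign."""
    t = np.arange(len(chips) * samples_per_chip)
    carrier = np.cos(2 * np.pi * t / samples_per_chip)
    return np.repeat(chips, samples_per_chip) * carrier
```

Cameras using different low-cross-correlation sequences barely correlate with each other's illumination, while each camera's own correlation stays sharp: an m-sequence has cyclic autocorrelation N at zero lag and a flat -1 at every other lag.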

Collaboration


Top co-authors of Adrian A. Dorrington:


Dale A. Carnegie

Victoria University of Wellington


Adrian P. P. Jongenelen

Victoria University of Wellington


Benjamin M. M. Drayton

Victoria University of Wellington
