Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John Peter Godbaz is active.

Publication


Featured research published by John Peter Godbaz.


Proceedings of SPIE | 2011

Separating true range measurements from multi-path and scattering interference in commercial range cameras

Adrian A. Dorrington; John Peter Godbaz; Michael J. Cree; Andrew D. Payne; Lee V. Streeter

Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000 and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
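As background to the range decoding this paper intervenes in: an AMCW camera recovers phase (and hence range) from phase-stepped correlation samples. Below is a minimal sketch of the standard four-bucket decode for an idealised single-return pixel. This is generic textbook material, not the paper's optimization approach or any manufacturer's pipeline, and the simulated pixel values are illustrative.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def decode_four_bucket(i0, i1, i2, i3, mod_freq_hz):
    """Amplitude, phase and range from four 90-degree phase-stepped
    correlation samples I_n = B + A*cos(phi - n*pi/2)."""
    re = i0 - i2                 # 2*A*cos(phi)
    im = i1 - i3                 # 2*A*sin(phi)
    amplitude = 0.5 * np.hypot(re, im)
    phase = np.mod(np.arctan2(im, re), 2 * np.pi)
    rng = C * phase / (4 * np.pi * mod_freq_hz)
    return amplitude, phase, rng

# simulate one pixel: a single 2.5 m return at 30 MHz modulation
f = 30e6
true_range = 2.5
phi = 4 * np.pi * f * true_range / C
a, b = 0.8, 1.0                  # signal amplitude and background offset
samples = [b + a * np.cos(phi - n * np.pi / 2) for n in range(4)]
amp, ph, r = decode_four_bucket(*samples, f)
```

A mixed pixel violates the single-return assumption behind this decode, which is exactly why the paper separates component returns before decoding.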


Proceedings of SPIE | 2012

Closed-form Inverses for the Mixed Pixel/Multipath Interference Problem in AMCW Lidar

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

We present two new closed-form methods for mixed pixel/multipath interference separation in AMCW lidar systems. The mixed pixel/multipath interference problem arises from the violation of a standard range-imaging assumption that each pixel integrates over only a single, discrete backscattering source. While a numerical inversion method has previously been proposed, no closed-form inverses have been posited. The first new method models reflectivity as a Cauchy distribution over range and uses four measurements at different modulation frequencies to determine the amplitude, phase and reflectivity distribution of up to two component returns within each pixel. The second new method uses attenuation ratios to determine the amplitude and phase of up to two component returns within each pixel. The methods are tested on both simulated and real data and shown to produce a significant improvement in overall error. While this paper focusses on the AMCW mixed pixel/multipath interference problem, the algorithms contained herein have applicability to the reconstruction of a sparse one-dimensional signal from an extremely limited number of discrete samples of its Fourier transform.
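The sparse-recovery connection in the last sentence can be illustrated with a generic Prony-style (annihilating-filter) inverse: two unit-modulus returns measured at four harmonically related modulation frequencies are recovered exactly from the four complex samples. This is a standard sketch of that class of inverse, under illustrative assumptions; it is not the paper's Cauchy-distribution or attenuation-ratio method.

```python
import numpy as np

def two_return_prony(m):
    """Recover phases and amplitudes of two returns from four samples
    m[k-1] = a1*z1**k + a2*z2**k, k = 1..4, where z_j = exp(i*phi_j)."""
    m1, m2, m3, m4 = m
    # annihilating polynomial z**2 + c1*z + c0 with roots z1, z2
    A = np.array([[m2, m1], [m3, m2]])
    c = np.linalg.solve(A, -np.array([m3, m4]))
    z = np.roots([1.0, c[0], c[1]])
    # amplitudes from a 2x2 Vandermonde system
    V = np.array([[z[0], z[1]], [z[0] ** 2, z[1] ** 2]])
    a = np.linalg.solve(V, np.array([m1, m2]))
    return np.angle(z), a

# simulate a mixed pixel with two backscattering sources (toy values)
phi = np.array([0.6, 2.1])
amp = np.array([1.0, 0.4])
meas = np.array([np.sum(amp * np.exp(1j * k * phi)) for k in (1, 2, 3, 4)])
phases, amps = two_return_prony(meas)
```

The noiseless exactness here is the appeal of closed forms; in practice noise sensitivity is what the paper's experiments on real data probe.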


Image and Vision Computing New Zealand | 2008

Mixed pixel return separation for a full-field ranger

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

Full-field amplitude modulated continuous wave range imagers commonly suffer from the mixed pixel problem. This problem is caused by the integration of light from multiple sources by a single pixel, particularly around the edges of objects, resulting in erroneous range measurements. In this paper we present a method for identifying the intensity and range of multiple return values within each pixel, using the harmonic content of the heterodyne beat waveform. Systems capable of measurements at less than 90 degree phase shifts can apply these methods. Our paper builds on previous simulation based work and uses real range data. The method involves the application of the Levy-Fullagar algorithm and the use of the cyclic nature of the beat waveform to extract the mean noise power. We show that this method enables the separation of multiple range sources and also decreases overall ranging error by 30% in the single return case. Error in the two return case was found to increase substantially as relative intensity of the return decreased.
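One way to read the cyclic-waveform point above: over one full beat cycle the signal occupies only a few harmonic bins of the sampled waveform's FFT, so the remaining bins estimate the mean noise power. The sketch below illustrates only that generic idea, with arbitrary waveform and noise values; it is not the Levy-Fullagar algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# one cycle of a beat waveform (fundamental plus a 3rd harmonic), 64 samples
n = 64
t = np.arange(n) / n
clean = 1.0 * np.cos(2 * np.pi * t + 0.7) + 0.15 * np.cos(3 * 2 * np.pi * t)
sigma = 0.05
noisy = clean + rng.normal(0.0, sigma, n)

# the periodic signal concentrates in low harmonic bins; bins well above
# the highest expected harmonic carry only noise
spec = np.fft.rfft(noisy)
noise_bins = spec[8:]                       # assume signal harmonics < 8
# for white noise, E[|X_k|^2] = n * sigma^2 in the pure-noise bins
noise_power = np.mean(np.abs(noise_bins) ** 2) / n
```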


Remote Sensing | 2011

Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape and mixed pixels/multipath interference. We also introduce other sources of non-linearity, including crosstalk, modulation waveform envelope decay and non-circularly symmetric noise statistics, that have been ignored in the literature. An experimental study is conducted to evaluate techniques for mitigation of non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.
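The aliasing mechanism behind the phase non-linearity is easy to simulate: with four samples per cycle, a 3rd harmonic in the correlation waveform folds onto the fundamental and produces a cyclic phase error, while eight samples per cycle reject that harmonic entirely. This is a toy model of why harmonic cancellation improves linearity; the 0.2 harmonic level is an arbitrary illustrative value, not a measured one.

```python
import numpy as np

def phase_estimate(true_phase, n_samples, h3=0.2):
    """Phase from n equally spaced samples of a correlation waveform
    carrying a 3rd harmonic of relative amplitude h3."""
    theta = 2 * np.pi * np.arange(n_samples) / n_samples
    s = np.cos(true_phase - theta) + h3 * np.cos(3 * (true_phase - theta))
    bin1 = np.sum(s * np.exp(1j * theta))    # DFT bin of the fundamental
    return np.angle(bin1) % (2 * np.pi)

phis = np.linspace(0.01, 2 * np.pi - 0.01, 200)
err4 = np.array([phase_estimate(p, 4) - p for p in phis])
err8 = np.array([phase_estimate(p, 8) - p for p in phis])
err4 = (err4 + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
err8 = (err8 + np.pi) % (2 * np.pi) - np.pi
```

With four samples the error is cyclic in the true phase (the classic "wiggle" error); with eight samples only the absent 7th and 9th harmonics could alias onto the fundamental, so the estimate is linear.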


Machine Vision Applications | 2008

Video-rate or high-precision: a flexible range imaging camera

Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie; Andrew D. Payne; Richard M. Conroy; John Peter Godbaz; Adrian P. P. Jongenelen

A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and acquisition length are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.


Machine Vision Applications | 2009

Multiple return separation for a full-field ranger via continuous waveform modelling

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

We present two novel Poisson noise Maximum Likelihood based methods for identifying the individual returns within mixed pixels for Amplitude Modulated Continuous Wave rangers. These methods use the convolutional relationship between signal returns and the recorded data to determine the number, range and intensity of returns within a pixel. One method relies on a continuous piecewise truncated-triangle model for the beat waveform and the other on linear interpolation between translated versions of a sampled waveform. In the single return case both methods provide an improvement in ranging precision over standard Fourier transform based methods and a decrease in overall error in almost every case. We find that it is possible to discriminate between two light sources within a pixel, but local minima and scattered light have a significant impact on ranging precision. Discrimination of two returns requires the ability to take samples at less than 90 degree phase shifts.
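A stripped-down, single-return version of the idea can be sketched as a Poisson maximum-likelihood fit of a modelled periodic correlation waveform to photon counts, solved here by brute-force grid search. The flat-topped waveform, background level and grid ranges are illustrative assumptions; the paper's methods additionally resolve multiple returns and use the specific waveform models named above.

```python
import numpy as np

rng = np.random.default_rng(1)

def beat(theta):
    # flat-topped periodic waveform standing in for the modelled beat shape
    return np.clip(1.5 * np.cos(theta), -1.0, 1.0)

n = 32
theta = 2 * np.pi * np.arange(n) / n
true_phi, true_a, bg = 1.3, 40.0, 60.0
y = rng.poisson(bg + true_a * beat(theta - true_phi))  # photon counts

# maximum likelihood via grid search over phase and amplitude:
# minimise the Poisson negative log-likelihood sum(lam - y*log(lam))
phi_grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
amp_grid = np.linspace(10.0, 55.0, 91)   # keeps lam = bg + a*w positive
best = (np.inf, None, None)
for phi in phi_grid:
    w = beat(theta - phi)
    for a in amp_grid:
        lam = bg + a * w
        nll = np.sum(lam - y * np.log(lam))
        if nll < best[0]:
            best = (nll, phi, a)
_, phi_hat, a_hat = best
```

In practice one would use a proper optimiser rather than a grid, and, as the abstract notes, local minima become the dominant difficulty once two returns are modelled.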


Archive | 2013

Understanding and Ameliorating Mixed Pixels and Multipath Interference in AMCW Lidar

John Peter Godbaz; Adrian A. Dorrington; Michael J. Cree

Amplitude modulated continuous wave (AMCW) lidar systems suffer from significant systematic errors due to mixed pixels and multipath interference. Commercial systems can achieve centimetre precision; however, accuracy is typically an order of magnitude worse, limiting practical use of these devices. In this chapter the authors address AMCW measurement formation and the causes of mixed pixels and multipath interference. A comprehensive review of the literature is given, from the first reports of mixed pixels in point-scanning AMCW systems, through to the gamut of research over the past two decades into mixed pixels and multipath interference. An overview is presented of a variety of detection and mitigation techniques that can be applied to range data from standard commercial cameras, including deconvolution-based intra-camera scattering reduction, modelling of intra-scene scattering, correlation waveform deconvolution techniques/multifrequency sampling, and standard removal approaches. The chapter concludes with comments on future work for better detection and correction of systematic errors in full-field AMCW lidar.


Asian Conference on Computer Vision | 2010

Extending AMCW lidar depth-of-field using a coded aperture

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

By augmenting a high resolution full-field Amplitude Modulated Continuous Wave lidar system with a coded aperture, we show that depth-of-field can be extended using explicit, albeit blurred, range data to determine PSF scale. Because complex domain range-images contain explicit range information, the aperture design is unconstrained by the necessity for range determination by depth-from-defocus. The coded aperture design is shown to improve restoration quality over a circular aperture. A proof-of-concept algorithm using dynamic PSF determination and spatially variant Landweber iterations is developed and, using an empirically sampled point spread function, is shown to work in cases without serious multipath interference or high phase complexity.
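The Landweber step itself is a simple gradient iteration on the data misfit. The sketch below is the spatially invariant version with a circular-convolution blur and an arbitrary box PSF, both illustrative assumptions; the paper's algorithm instead selects spatially variant, range-dependent PSFs per pixel.

```python
import numpy as np

def landweber_deblur(y, psf_fft, n_iter=500, alpha=1.0):
    """Landweber iteration x <- x + alpha * A^T (y - A x), where the blur
    A is a circular convolution applied via the FFT."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        ax = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
        residual = y - ax
        x = x + alpha * np.real(
            np.fft.ifft2(np.fft.fft2(residual) * np.conj(psf_fft)))
    return x

# normalized 3x3 box PSF and a sparse test scene (toy values)
n = 32
psf = np.zeros((n, n))
psf[:3, :3] = 1.0 / 9.0
P = np.fft.fft2(psf)
scene = np.zeros((n, n))
scene[10, 12] = 1.0
scene[20, 5] = 0.5
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * P))
restored = landweber_deblur(blurred, P)
```

Landweber converges for step sizes alpha below 2 divided by the largest squared singular value of the blur (here 1, since the PSF sums to one), and its slow convergence on poorly conditioned modes acts as implicit regularisation.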


Image and Vision Computing New Zealand | 2009

Undue influence: Mitigating range-intensity coupling in AMCW ‘flash’ lidar using scene texture

John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington

We present a new algorithm for mitigating range-intensity coupling caused by scattered light in full-field amplitude modulated continuous wave lidar systems using scene texture. Full-field lidar uses the time-of-flight principle to measure the range to thousands of points in a scene simultaneously. Mixed pixels are erroneous range measurements caused by pixels integrating light from more than one object at a time. Conventional optics suffer from internal reflections and light scattering, which can result in every pixel being mixed with scattered light. This causes erroneous range measurements and range-intensity coupling. By measuring how range changes with intensity over local regions it is possible to determine the phase and intensity of the scattered light without the complex calibration inherent in deconvolution-based restoration. The new method is shown to produce a substantial improvement in range image quality. An additional range-from-texture method is demonstrated which is resistant to scattered light. Variations of the algorithms are tested with and without segmentation; the variant without segmentation is faster, but causes erroneous ranges around the edges of objects which are not present in the segmented algorithm.
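The geometry behind the texture idea can be sketched in the complex (phasor) domain: if a constant scattered component c is added to every measurement, a textured patch of constant true phase but varying reflectivity traces a line through c, so two such patches locate c at the intersection of their fitted lines. All values below are hypothetical and this is only a geometric illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_line(pts):
    """Total-least-squares line fit: a point on the line and its unit
    direction, via PCA (SVD) of the 2-D point cloud."""
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    return mean, vt[0]

def intersect(p1, d1, p2, d2):
    """Intersection of the two lines p1 + t*d1 and p2 + s*d2."""
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

# common scattered-light phasor (hypothetical), as a 2-D vector
c = np.array([0.3, 0.45])

def patch(phase, n=50):
    """Measurements from a textured patch of constant true phase:
    c plus a varying-amplitude phasor at that phase, plus noise."""
    a = rng.uniform(0.5, 2.0, n)            # texture: varying reflectivity
    pts = c + np.column_stack([a * np.cos(phase), a * np.sin(phase)])
    return pts + rng.normal(0, 0.005, pts.shape)

m1, m2 = patch(0.8), patch(1.9)
p1, d1 = fit_line(m1)
p2, d2 = fit_line(m2)
c_hat = intersect(p1, d1, p2, d2)           # estimated scattered component
```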


Image and Vision Computing New Zealand | 2009

Surface projection for mixed pixel restoration

Robert L. Larkins; Michael J. Cree; Adrian A. Dorrington; John Peter Godbaz

Amplitude modulated full-field range-imagers are measurement devices that determine the range to an object simultaneously for each pixel in the scene, but due to the nature of this operation, they commonly suffer from the significant problem of mixed pixels. Once mixed pixels are identified, a common procedure is to remove them from the scene; this solution is not ideal, as the captured point cloud may become damaged. This paper introduces an alternative approach, in which mixed pixels are projected onto the surface to which they belong. This is achieved by breaking the area around an identified mixed pixel into two classes. A parametric surface is then fitted to the class closest to the mixed pixel, and the mixed pixel is then projected onto this surface. The restoration procedure was tested using twelve simulated scenes designed to determine its accuracy and robustness. For these simulated scenes, 93% of the mixed pixels were restored to the surface to which they belong. This mixed pixel restoration process is shown to be accurate and robust for both simulated and real world scenes, thus providing a reliable alternative to removing mixed pixels that can be easily adapted to any mixed pixel detection algorithm.
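The core restoration step, fitting a surface to the nearer class of neighbours and projecting the mixed pixel onto it, can be sketched with a least-squares plane. The classification step and the paper's more general parametric surfaces are omitted, and the scene values below are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: centroid and unit normal
    (the singular vector for the smallest singular value)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_to_plane(p, centroid, normal):
    """Orthogonal projection of point p onto the fitted plane."""
    return p - np.dot(p - centroid, normal) * normal

# neighbours of a mixed pixel, lying near the plane z = 0.5*x + 1.0
rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, (30, 2))
pts = np.column_stack([xy[:, 0], xy[:, 1], 0.5 * xy[:, 0] + 1.0])
pts += rng.normal(0, 0.01, pts.shape)

mixed = np.array([0.2, -0.1, 1.6])   # erroneous range pushes it off-plane
restored = project_to_plane(mixed, *fit_plane(pts))
```

Projection keeps the point in the cloud instead of deleting it, which is exactly the trade-off the paper argues for over simple mixed-pixel removal.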

Collaboration


Dive into John Peter Godbaz's collaborations.

Top Co-Authors

Adrian P. P. Jongenelen

Victoria University of Wellington

Dale A. Carnegie

Victoria University of Wellington
