Adrian A. Dorrington
University of Waikato
Publications
Featured research published by Adrian A. Dorrington.
international conference on computer graphics and interactive techniques | 2013
Achuta Kadambi; Refael Whyte; Ayush Bhandari; Lee V. Streeter; Christopher Barsi; Adrian A. Dorrington; Ramesh Raskar
Time-of-flight cameras produce real-time range maps at relatively low cost using continuous-wave amplitude modulation and demodulation. However, they are geared to measure range (or phase) for a single reflected bounce of light and suffer from systematic errors due to multipath interference. We re-purpose the conventional time-of-flight device for a new goal: to recover per-pixel sparse time profiles expressed as a sequence of impulses. With this modification, we show that we can not only address multipath interference but also enable new applications such as recovering the depth of near-transparent surfaces, looking through diffusers and creating time-profile movies of sweeping light. Our key idea is to formulate the forward amplitude-modulated light propagation as a convolution with custom codes, record samples by introducing a simple sequence of electronic time delays, and perform sparse deconvolution to recover sequences of Diracs that correspond to multipath returns. Applications to computer vision include ranging of near-transparent objects and subsurface imaging through diffusers. Our low-cost prototype may lead to new insights regarding forward and inverse problems in light transport.
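The convolutional forward model and sparse recovery described above can be illustrated with a small sketch. Everything below (the three-tap code, the spike positions and amplitudes) is hypothetical, and greedy matching pursuit stands in for the paper's sparse deconvolution solver:

```python
def convolve(code, spikes, n):
    """Forward model: y[i] = sum over spikes of amp * code[i - pos]."""
    y = [0.0] * n
    for pos, amp in spikes.items():
        for j, c in enumerate(code):
            if pos + j < n:
                y[pos + j] += amp * c
    return y

def matching_pursuit(y, code, n_spikes):
    """Greedily find the Dirac positions/amplitudes that best explain y."""
    residual = y[:]
    norm2 = sum(c * c for c in code)
    found = {}
    for _ in range(n_spikes):
        # Correlate the residual with every shift of the code.
        best_pos, best_corr = 0, 0.0
        for pos in range(len(y) - len(code) + 1):
            corr = sum(residual[pos + j] * code[j] for j in range(len(code)))
            if abs(corr) > abs(best_corr):
                best_pos, best_corr = pos, corr
        amp = best_corr / norm2
        found[best_pos] = found.get(best_pos, 0.0) + amp
        for j in range(len(code)):
            residual[best_pos + j] -= amp * code[j]
    return found

# Two multipath returns: a direct bounce and a weaker, later bounce.
code = [1.0, 2.0, 1.0]        # hypothetical emitted/demodulation code
truth = {5: 1.0, 12: 0.4}     # Dirac positions -> amplitudes
y = convolve(code, truth, 20)
est = matching_pursuit(y, code, 2)
```

With non-overlapping returns the greedy recovery here is exact; real multipath components can overlap, which is why the paper uses a proper sparse solver.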
Optics Letters | 2014
Ayush Bhandari; Achuta Kadambi; Refael Whyte; Christopher Barsi; Micha Feigin; Adrian A. Dorrington; Ramesh Raskar
Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase shifts of amplitude-modulated signals. For broad illumination of transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using multiple modulation frequency measurements. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering.
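The mapping to spectral estimation can be sketched concretely: measurements at harmonically related modulation frequencies follow z[m] = Σ_k a_k·u_k^m with u_k = exp(−j2πf₀t_k), so K interfering returns look like K complex exponentials. The example below (synthetic phasors, hypothetical normalised delays 0.10 and 0.23) separates K = 2 components with Prony's method, a classical annihilating-filter technique standing in for the paper's sparsity-regularized solver:

```python
import cmath
import math

def prony_two_returns(z):
    """Recover the two phasors u_k from z[m] = a1*u1**m + a2*u2**m, m = 0..3."""
    # Annihilating filter (Prony): z[m] + h1*z[m-1] + h2*z[m-2] = 0 for m = 2, 3;
    # solve the resulting 2x2 complex linear system by Cramer's rule.
    det = z[1] * z[1] - z[0] * z[2]
    h1 = (z[0] * z[3] - z[1] * z[2]) / det
    h2 = (z[2] * z[2] - z[1] * z[3]) / det
    # The component phasors u_k are the roots of x**2 + h1*x + h2.
    disc = cmath.sqrt(h1 * h1 - 4 * h2)
    return (-h1 + disc) / 2, (-h1 - disc) / 2

# Two synthetic returns with hypothetical delays of 0.10 and 0.23 modulation periods.
u1, u2 = cmath.exp(-2j * math.pi * 0.10), cmath.exp(-2j * math.pi * 0.23)
z = [1.0 * u1 ** m + 0.4 * u2 ** m for m in range(4)]
r1, r2 = prony_two_returns(z)
delays = sorted(-cmath.phase(r) / (2 * math.pi) for r in (r1, r2))
```

Each recovered phase maps back to a depth via d_k = c·t_k/2, with t_k read off the phasor angle.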
image and vision computing new zealand | 2008
John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington
Full-field amplitude-modulated continuous-wave range imagers commonly suffer from the mixed-pixel problem. This problem is caused by the integration of light from multiple sources by a single pixel, particularly around the edges of objects, resulting in erroneous range measurements. In this paper we present a method for identifying the intensity and range of multiple return values within each pixel, using the harmonic content of the heterodyne beat waveform. These methods can be applied by any system capable of sampling at phase shifts of less than 90 degrees. Our paper builds on previous simulation-based work and uses real range data. The method involves the application of the Levy-Fullagar algorithm and the use of the cyclic nature of the beat waveform to extract the mean noise power. We show that this method enables the separation of multiple range sources and also decreases overall ranging error by 30% in the single-return case. Error in the two-return case was found to increase substantially as the relative intensity of the return decreased.
symposium/workshop on electronic design, test and applications | 2002
Adrian A. Dorrington; Rainer Künnemeyer
Traditionally, digital lock-in amplifiers sample the input signal at a rate much higher than the lock-in reference frequency and perform the lock-in algorithm with high-speed processors. We present a small and simple digital lock-in amplifier that uses a 20-bit current-integrating analogue-to-digital converter interfaced to a microcontroller. The sample rate is set to twice the reference frequency, placing the sampled lock-in signal at the Nyquist frequency and allowing the lock-in procedure to be performed with one simple algorithm. This algorithm consists of a spectral inversion technique integrated into a highly optimised low-pass filter. We demonstrate a system with a dynamic range of 103 dB recovering signals up to 85 dB below the interference.
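The Nyquist-rate trick can be sketched in a few lines: sampled at exactly twice the reference frequency, the in-phase reference component becomes an alternating-sign sequence, so demodulation reduces to multiplying by (−1)^n ("spectral inversion") followed by a low-pass stage. All parameters below are hypothetical, and a plain average stands in for the paper's optimised filter:

```python
import math

def lockin_nyquist(samples):
    """Demodulate: spectral inversion by (-1)**n, then average as the low-pass."""
    demod = [((-1) ** n) * s for n, s in enumerate(samples)]
    return sum(demod) / len(demod)

f_ref = 100.0                      # reference frequency, Hz (hypothetical)
fs = 2 * f_ref                     # sample at exactly twice the reference
n_samples = 20000
amp, phase = 0.001, 0.0            # weak in-phase signal of interest
interf_amp, interf_f = 1.0, 7.3    # interference 60 dB stronger

samples = []
for n in range(n_samples):
    t = n / fs
    samples.append(amp * math.cos(2 * math.pi * f_ref * t + phase)
                   + interf_amp * math.cos(2 * math.pi * interf_f * t))

recovered = lockin_nyquist(samples)   # close to amp * cos(phase) = 0.001
```

In this toy run the 0.001-amplitude signal is recovered from under interference 1000 times larger; the longer the averaging window, the deeper the rejection.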
electronic imaging | 2008
Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie
Full-field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity-modulated illumination source and a gain-modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
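The four-measurement arctangent and the harmonic error it suffers from can be shown with a toy waveform (all numbers hypothetical): with a pure sinusoid the four samples at 0, 90, 180 and 270 degrees recover the phase exactly, while an added third harmonic, as produced by square-wave drive, biases the estimate:

```python
import math

def four_sample_phase(samples):
    """Standard four-step phase estimate from samples at 0, 90, 180, 270 deg."""
    s0, s1, s2, s3 = samples
    return math.atan2(s3 - s1, s0 - s2)

def waveform(phase, step, third_harmonic=0.0):
    """Correlation waveform sample at step*90 degrees, optionally distorted."""
    theta = step * math.pi / 2
    return (math.cos(theta + phase)
            + third_harmonic * math.cos(3 * (theta + phase)))

true_phase = 0.7
pure = [waveform(true_phase, k) for k in range(4)]
dist = [waveform(true_phase, k, third_harmonic=0.2) for k in range(4)]

err_pure = four_sample_phase(pure) - true_phase   # essentially zero
err_dist = four_sample_phase(dist) - true_phase   # phase-dependent bias
```

The bias err_dist varies cyclically with the true phase, which is exactly the nonlinearity the sampling method above is designed to cancel.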
Applied Optics | 2010
Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie
Time-of-flight range imaging systems utilizing the amplitude-modulated continuous-wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically, a calibration is performed to correct these errors. We demonstrate an alternative phase-encoding approach that attenuates the harmonics during the sampling process, thereby improving linearity in the raw measurements. This removes the need to measure the system's response or to recalibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude-modulated illumination source (while maintaining overall illumination power).
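One way harmonic attenuation during sampling can work is illustrated below. This is a hedged sketch, not the authors' exact encoding: averaging two correlation samples taken at phase offsets of ±30 degrees scales the fundamental by cos(30°) but puts the two third-harmonic contributions in antiphase, so the third harmonic cancels and the four-sample arctangent becomes linear again:

```python
import math

def waveform(theta, third=0.2):
    """Toy correlation waveform with a 20% third harmonic (hypothetical)."""
    return math.cos(theta) + third * math.cos(3 * theta)

def plain_sample(phase, step):
    return waveform(step * math.pi / 2 + phase)

def encoded_sample(phase, step):
    # Average two samples offset by +/-30 deg: third harmonic sees +/-90 deg
    # and cancels; the fundamental is merely scaled by cos(30 deg).
    theta = step * math.pi / 2 + phase
    off = math.pi / 6
    return 0.5 * (waveform(theta - off) + waveform(theta + off))

def phase_from(sampler, phase):
    s = [sampler(phase, k) for k in range(4)]
    return math.atan2(s[3] - s[1], s[0] - s[2])

true_phase = 0.7
err_plain = phase_from(plain_sample, true_phase) - true_phase    # biased
err_enc = phase_from(encoded_sample, true_phase) - true_phase    # ~0
```

The amplitude penalty (cos 30° ≈ 0.87) is the price of the cancellation; higher odd harmonics are attenuated but not eliminated by this particular offset pair.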
Proceedings of SPIE | 2009
Andrew D. Payne; Adrian A. Dorrington; Michael J. Cree; Dale A. Carnegie
A number of full-field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10-100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high-frequency modulation response of these image sensors, using a picosecond laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post-processing of the acquired range measurements.
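The reason modulation frequency is the key operating parameter can be made explicit with the standard AMCW relations (a back-of-envelope sketch, not the paper's characterisation procedure): range is d = c·φ/(4πf), so a fixed phase uncertainty σ_φ maps to a range uncertainty σ_d = c·σ_φ/(4πf), i.e. doubling f halves the range noise, provided the sensor's modulation response holds up at the higher frequency:

```python
import math

C = 299792458.0  # speed of light, m/s

def phase_to_range(phi, f_mod):
    """AMCW range from measured phase shift phi at modulation frequency f_mod."""
    return C * phi / (4 * math.pi * f_mod)

def range_noise(sigma_phi, f_mod):
    """Range standard deviation for a given phase standard deviation."""
    return C * sigma_phi / (4 * math.pi * f_mod)

# Hypothetical numbers: 10 mrad of phase noise at two modulation frequencies.
sigma_10mhz = range_noise(0.010, 10e6)     # roughly 2.4 cm
sigma_100mhz = range_noise(0.010, 100e6)   # roughly 2.4 mm
ambiguity_30mhz = phase_to_range(2 * math.pi, 30e6)  # ~5 m unambiguous range
```

The same formula also shows the trade-off: the unambiguous range c/(2f) shrinks as f grows, which is why the optimum frequency is sensor- and application-dependent.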
international conference on computational photography | 2014
Achuta Kadambi; Ayush Bhandari; Refael Whyte; Adrian A. Dorrington; Ramesh Raskar
Several computer vision algorithms require a sequence of photographs taken under different illumination conditions, which has spurred development in the area of illumination multiplexing. Various techniques for optimizing the multiplexing process already exist, but they are geared toward regular or high-speed cameras, which are fast but code on the order of milliseconds. In this paper we propose a fusion of two popular contexts: time-of-flight range cameras and illumination multiplexing. Time-of-flight cameras are a low-cost, consumer-oriented technology capable of acquiring range maps at 30 frames per second. Such cameras have a natural connection to conventional illumination multiplexing strategies, as both paradigms rely on the capture of multiple shots and synchronized illumination. While previous work on illumination multiplexing has exploited coding at millisecond intervals, we repurpose sensors that are ordinarily used in time-of-flight imaging to demultiplex via nanosecond coding strategies.
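The multiplexing idea the paper builds on can be sketched generically (Hadamard multiplexing with made-up intensities; the nanosecond ToF coding itself is hardware-specific and not reproduced here): each shot records a known ± combination of light sources, and the individual per-source images are recovered by inverting the multiplexing matrix, spreading each source's signal over all shots:

```python
def hadamard4():
    """4x4 Hadamard matrix built as the Kronecker product H2 (x) H2."""
    H2 = [[1, 1], [1, -1]]
    return [[H2[i % 2][k % 2] * H2[i // 2][k // 2] for k in range(4)]
            for i in range(4)]

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

sources = [3.0, 1.0, 4.0, 1.5]   # true per-source pixel intensities (hypothetical)
H = hadamard4()
shots = matvec(H, sources)       # one multiplexed measurement per shot

# Hadamard matrices satisfy H * H^T = n * I, so demultiplexing is H^T / n.
Ht = [list(row) for row in zip(*H)]
recovered = [s / 4.0 for s in matvec(Ht, shots)]
```

In practice ±1 weights are realised with on/off or phase-coded illumination, and the SNR advantage over one-source-per-shot capture is what motivates multiplexing in the first place.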
machine vision applications | 2009
John Peter Godbaz; Michael J. Cree; Adrian A. Dorrington
We present two novel Poisson-noise maximum-likelihood methods for identifying the individual returns within mixed pixels for amplitude-modulated continuous-wave rangers. These methods use the convolutional relationship between signal returns and the recorded data to determine the number, range and intensity of returns within a pixel. One method relies on a continuous piecewise truncated-triangle model for the beat waveform and the other on linear interpolation between translated versions of a sampled waveform. In the single-return case both methods provide an improvement in ranging precision over standard Fourier-transform-based methods and a decrease in overall error in almost every case. We find that it is possible to discriminate between two light sources within a pixel, but local minima and scattered light have a significant impact on ranging precision. Discrimination of two returns requires the ability to take samples at phase shifts of less than 90 degrees.
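The Poisson maximum-likelihood idea can be illustrated in miniature (toy waveform, single return, grid search; the paper's waveform models and optimiser are considerably richer): choose the return position whose predicted photon counts λ maximise the Poisson log-likelihood Σ(y·log λ − λ) of the observed counts:

```python
import math

def beat_model(shift, n=16, amp=100.0, bg=5.0):
    """Toy sampled beat waveform: raised cosine plus background counts."""
    return [bg + amp * (1 + math.cos(2 * math.pi * (k - shift) / n)) / 2
            for k in range(n)]

def poisson_loglik(y, lam):
    """Poisson log-likelihood up to terms independent of the model."""
    return sum(yi * math.log(li) - li for yi, li in zip(y, lam))

def ml_shift(y, candidates):
    """Grid-search maximum-likelihood estimate of the return position."""
    return max(candidates, key=lambda s: poisson_loglik(y, beat_model(s)))

true_shift = 5.25
y = beat_model(true_shift)            # noise-free expected counts for the demo
candidates = [k / 20.0 for k in range(16 * 20)]
est = ml_shift(y, candidates)         # recovers 5.25 on this grid
```

Weighting by the Poisson likelihood is what makes bright, low-noise samples count for more than dim ones; the local-minima issue the abstract mentions appears as soon as two returns are modelled jointly.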
computational intelligence for modelling, control and automation | 2008
Holger Schöner; Bernhard Moser; Adrian A. Dorrington; Andrew D. Payne; Michael J. Cree; Bettina Heise; Frank Bauer
A relatively new technique for measuring the 3D structure of visual scenes is provided by time-of-flight (TOF) cameras. Reflections of modulated light waves are recorded by a parallel pixel array structure. The time series at each pixel of the resulting image stream is used to estimate travelling time and thus range information. This measuring technique results in pixel-dependent noise levels, with variances changing over several orders of magnitude depending on the illumination and material parameters. This makes the application of traditional (global) denoising techniques suboptimal. Using free additional information from the camera and a clustering procedure, we can determine which pixels belong to the same object and what their noise level is, which allows for locally adapted smoothing. To illustrate the success of this method, we compare it with raw camera output and with a traditional method for edge-preserving smoothing, anisotropic diffusion. We show that this mathematical technique works without individual adaptation on two camera systems with highly different noise characteristics.
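The locally adapted smoothing idea can be reduced to a toy 1-D example (hypothetical ranges; the cluster labels are given here, whereas the paper derives them from the camera's additional per-pixel information): each pixel is smoothed only with members of its own cluster, so a clean near object is not blurred into its heavily noisy neighbour:

```python
def cluster_means(values, labels):
    """Mean value per cluster label."""
    sums, counts = {}, {}
    for v, c in zip(values, labels):
        sums[c] = sums.get(c, 0.0) + v
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def cluster_smooth(values, labels):
    """Replace each pixel by the mean of its own cluster only."""
    means = cluster_means(values, labels)
    return [means[c] for c in labels]

# Two "objects": a clean near surface at range 1.0 and a noisy far one at 4.0.
ranges = [1.01, 0.99, 1.00, 3.2, 4.9, 3.8, 4.1]
labels = [0, 0, 0, 1, 1, 1, 1]
smoothed = cluster_smooth(ranges, labels)
```

A global filter with a kernel wide enough to tame the noisy far object would smear the boundary between the two surfaces; the cluster-restricted average preserves it by construction.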