
Publication


Featured research published by Donald G. Dansereau.


IEEE Transactions on Signal Processing | 2015

Five-Dimensional Depth-Velocity Filtering for Enhancing Moving Objects in Light Field Videos

Chamira U. S. Edussooriya; Donald G. Dansereau; Len T. Bruton; P. Agathoklis

Five-dimensional (5-D) light field video (LFV), also known as plenoptic video, is a richer representation of dynamic scenes than conventional three-dimensional (3-D) video. In this paper, the 5-D spectrum of an object in an LFV is derived for the important practical case of objects moving with constant velocity at constant depth. In particular, it is shown that the region of support (ROS) of the 5-D spectrum is a skewed 3-D hyperfan in the 5-D frequency domain, with the degree of skew depending on the velocity and depth of the moving object. Based on this analysis, a 5-D depth-velocity digital filter to enhance moving objects in LFVs is proposed, described and implemented. Further, using the commercially available Lytro light field camera, LFVs of real scenes are generated and used to test and confirm the performance of the 5-D depth-velocity filters for enhancing such objects.
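The key spectral fact, that a pattern translating at constant velocity concentrates its energy on a skewed plane (and, over ranges of depth and velocity, a hyperfan) in the frequency domain, can be illustrated in the simplest 2-D (x, t) case. The sketch below is an illustrative analogue of that property, not the paper's 5-D filter; the function name and parameters are our own:

```python
import numpy as np

def peak_temporal_frequency(v, k, N=64):
    """Translate a 1-D cosine pattern of spatial frequency k at a
    constant velocity v (samples/frame), take the 2-D DFT over
    (t, x), and return the signed (temporal, spatial) frequency of
    the positive-spatial-frequency peak."""
    t, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    s = np.cos(2.0 * np.pi * k * (x - v * t) / N)
    S = np.abs(np.fft.fft2(s))
    freqs = np.fft.fftfreq(N, d=1.0 / N)   # signed integer bins
    mask = freqs[np.newaxis, :] > 0        # keep f_x > 0 half-plane
    i, j = np.unravel_index(np.argmax(S * mask), S.shape)
    return freqs[i], freqs[j]

# Energy of a constant-velocity pattern lies on the line f_t = -v * f_x,
# the 2-D analogue of the skewed ROS derived in the paper.
ft, fx = peak_temporal_frequency(v=2, k=5)
```

The degree of skew of this line is set by the velocity, which is exactly the dependence the paper's depth-velocity filter exploits in 5-D.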


Computer Vision and Image Understanding | 2016

Simple change detection from mobile light field cameras

Donald G. Dansereau; Stefan B. Williams; Peter Corke

Highlights:
- We present a framework for solving moving-camera problems with still-camera solutions.
- Geometry captured by light field cameras is used directly, without a 3D scene model.
- The framework yields a simple, efficient, closed-form solution for change detection.
- The solution outperforms structure-from-motion methods for commonly occurring scenes.
- The framework can be generalized to a broad class of computer vision problems.

Vision tasks are complicated by the nonuniform apparent motion associated with dynamic cameras in complex 3D environments. We present a framework for light field cameras that simplifies dynamic-camera problems, allowing stationary-camera approaches to be applied. No depth estimation or scene modelling is required: apparent motion is disregarded by exploiting the scene geometry implicitly encoded by the light field. We demonstrate the strength of this framework by applying it to change detection from a moving camera, arriving at the surprising and useful result that change detection can be carried out with a closed-form solution. Its constant runtime, low computational requirements, predictable behaviour, and ease of parallel implementation in hardware, including FPGA and GPU, make this solution desirable in embedded applications, e.g. robotics. We show qualitative and quantitative results for imagery captured using two generations of Lytro camera, with the proposed method generally outperforming both naive pixel-based methods and, for a commonly occurring class of scene, state-of-the-art structure-from-motion methods. We quantify the tradeoffs between tolerance to camera motion and sensitivity to change, and the impact of coherent, widespread scene changes. Finally, we discuss generalization of the proposed framework beyond change detection, allowing classically still-camera-only methods to be applied in moving-camera scenarios.
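The payoff of the framework is that once apparent motion is factored out, an ordinary stationary-camera change test applies. As a hypothetical illustration of such a still-camera test (not the paper's light-field derivation), a closed-form per-pixel score with the stated constant-runtime, parallel-friendly properties might look like:

```python
import numpy as np

def change_score(reference, observed, sigma=0.05):
    """Per-pixel squared difference normalised by an assumed noise
    level sigma: a closed-form test with constant runtime per pixel
    and no data-dependent branching, hence easy to parallelise."""
    d = (np.asarray(observed) - np.asarray(reference)) / sigma
    return d * d

ref = np.zeros((4, 4))
obs = ref.copy()
obs[1, 1] = 0.5              # a single changed pixel
score = change_score(ref, obs)
```

The `sigma` noise level and the thresholding policy are assumptions for illustration; the paper's score operates on light field rays rather than single-image pixels.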


International Conference on Image Processing | 2016

Underwater image descattering and quality assessment

Huimin Lu; Yujie Li; Xing Xu; Li He; Yun Li; Donald G. Dansereau; Seiichi Serikawa

Vision-based underwater navigation and object detection require robust computer vision algorithms that operate in turbid water. Most conventional methods aim to improve visibility only in mildly turbid water. In this paper, we propose a novel contrast enhancement method for highly turbid underwater images based on descattering and color correction; the proposed method removes scatter while preserving color. In addition, as a benchmark for comparing different image enhancement algorithms, a more comprehensive image quality assessment index, Qu, is proposed. The index combines the benefits of the SSIM index and a color distance index. Experimental results show that the proposed approach statistically outperforms state-of-the-art general-purpose underwater image contrast enhancement algorithms. The experiments also demonstrate that the proposed method performs well for image classification.
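To make the combination of a structural term and a colour term concrete, here is a hypothetical sketch of an index in the spirit of Qu. The weighting `alpha`, the linear blend, and the colour-distance definition are our assumptions; the paper defines the actual combination:

```python
import numpy as np

def color_distance(a, b):
    """Mean per-pixel Euclidean distance between two RGB images with
    channels in [0, 1], normalised by sqrt(3) to lie in [0, 1]."""
    return float(np.mean(np.linalg.norm(a - b, axis=-1)) / np.sqrt(3))

def quality_index(ssim, cdist, alpha=0.5):
    """Blend a structural-similarity term with a colour-fidelity
    term; higher is better. alpha and the blend are assumptions."""
    return alpha * ssim + (1.0 - alpha) * (1.0 - cdist)

# Structurally identical but maximally colour-shifted images score 0.5
cd = color_distance(np.zeros((2, 2, 3)), np.ones((2, 2, 3)))
q = quality_index(1.0, cd)
```

A combined index of this shape penalises enhancement algorithms that restore contrast while distorting colour, which is the failure mode the abstract targets.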


ACM Transactions on Graphics | 2017

SpinVR: towards live-streaming 3D virtual reality video

Robert Konrad; Donald G. Dansereau; Aniq Masood; Gordon Wetzstein

Streaming of 360° content is gaining attention as an immersive way to remotely experience live events. However, live capture is presently limited to 2D content due to the prohibitive computational cost associated with multi-camera rigs. In this work we present a system that directly captures streaming 3D virtual reality content. Our approach does not suffer from spatial or temporal seams and natively handles phenomena that are challenging for existing systems, including refraction, reflection, transparency and specular highlights. The system natively captures in the omni-directional stereo (ODS) format, which is widely supported by VR displays and streaming pipelines. We identify an important source of distortion inherent to the ODS format and demonstrate a simple means of correcting it. We include a detailed analysis of the design space, including tradeoffs between noise, frame rate, resolution, and hardware complexity. Processing is minimal, enabling live transmission of immersive, 3D, 360° content. We construct a prototype and demonstrate capture of 360° scenes at up to 8192 × 4096 pixels at 5 fps, and establish the viability of operation up to 32 fps.


Computers & Electrical Engineering | 2017

Introduction to the special section on Artificial Intelligence and Computer Vision

Huimin Lu; Jože Guna; Donald G. Dansereau

The integration of artificial intelligence and computer vision technologies has become a topic of increasing interest for both researchers and developers from academia and industry worldwide. It is foreseeable that artificial intelligence will be the main approach to next-generation computer vision research. The explosion of artificial intelligence algorithms and rapidly growing computational power have significantly expanded the possibilities of computer vision, while also bringing new challenges to the vision community. The aim of this special issue is to provide a platform to share up-to-date scientific achievements in this field. From a total of 46 papers submitted to this special section, 10 high-quality articles were selected, resulting in an acceptance rate of 21.7%. Each paper was peer reviewed by three or more experts during the assessment process. The selected articles have exceptional diversity in terms of artificial intelligence and computer vision techniques and applications. They represent the most recent development in both theory and practice. The contributions of these papers are briefly described as follows.


Journal of Field Robotics | 2018

Coregistered Hyperspectral and Stereo Image Seafloor Mapping from an Autonomous Underwater Vehicle

Daniel L. Bongiorno; Mitch Bryson; Tom C. L. Bridge; Donald G. Dansereau; Stefan B. Williams

We present a new method for in situ high-resolution hyperspectral mapping of the seafloor utilizing a spectrometer colocated and coregistered with a high-resolution color stereo camera system onboard an autonomous underwater vehicle (AUV). Hyperspectral imagery data have been used extensively for mapping and distinguishing marine seafloor habitats and organisms from above-water platforms (such as satellites and aircraft), but at low spatial resolutions and at shallow water depths (<10 m). The use of hyperspectral sensing from in-water platforms (such as AUVs) has the potential to provide valuable habitat data in deeper waters and with high spatial resolution. Challenges faced by in-water hyperspectral imaging include difficulties in correcting for water column effects and the spatial registration of point/line-scan hyperspectral sensor measurements. The methods developed in this paper overcome these issues through coregistration with a high-spatial-resolution, stereo color camera, and precise modeling and compensation of the water column properties that attenuate hyperspectral signals. We integrated two spectrometers into our SeaBED-class AUV, and one onboard a support surface vessel to measure and estimate the effects of light passing through the water column. Spatial calibration of the spectrometers/stereo cameras and the synchronized acquisition of both sensors allowed for spatial registration of the resulting hyperspectral reflectance profiles. We demonstrate resulting hyperspectral imagery maps with a spatial resolution of 30 cm over large areas of the seafloor that are not adversely affected by above-water conditions (such as cloud cover) that would typically prevent the use of remote-sensing methods. Results are presented from an AUV mapping survey of a coral reef ecosystem over Pakhoi Bank on the Great Barrier Reef, Queensland, Australia, demonstrating the ability to reconstruct hyperspectral reflectance profiles for a diverse range of abiotic and biotic coverage types including sand, corals, seagrass, and algae. Profiles are then used to automatically classify different coverage types with a 10-fold cross-validation accuracy of 91.99% using a linear support vector machine (SVM).
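Water-column compensation of this kind is commonly modelled with Beer-Lambert attenuation; the sketch below shows that standard model only (the paper's calibrated correction is more involved, and in practice the per-band coefficients would come from the surface-vessel measurements, not the placeholder values used here):

```python
import numpy as np

def compensate_attenuation(measured, depth_m, k_lambda):
    """Invert Beer-Lambert attenuation: the signal reaching the
    sensor falls off as exp(-k(lambda) * d), so multiplying by
    exp(+k(lambda) * d) recovers a reflectance-proportional value.
    k_lambda: per-band diffuse attenuation coefficients (1/m)."""
    return np.asarray(measured) * np.exp(np.asarray(k_lambda) * depth_m)

# Two hypothetical bands: one attenuated (k = 0.5/m), one not.
corrected = compensate_attenuation([1.0, 0.5], depth_m=2.0,
                                   k_lambda=[0.5, 0.0])
```

Because attenuation grows exponentially with path length, even modest depth changes shift band ratios substantially, which is why per-survey calibration against the surface reference spectrometer matters.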


International Conference on Robotics and Automation | 2017

Image-based visual servoing with light field cameras

Dorian Tsai; Donald G. Dansereau; Thierry Peynot; Peter Corke

This paper proposes the first derivation, implementation, and experimental validation of light field image-based visual servoing. Light field image Jacobians are derived based on a compact light field feature representation that is close to the form measured directly by light field cameras. We also enhance feature detection and correspondence by enforcing light field geometry constraints, and directly estimate the image Jacobian without knowledge of point depth. The proposed approach is implemented over a standard visual servoing control loop, and applied to a custom-mirror-based light field camera mounted on a robotic arm. Light field image-based visual servoing is then validated in both simulation and experiment. We show that the proposed method outperforms conventional monocular and stereo image-based visual servoing under field-of-view constraints and occlusions.
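The servoing loop itself follows the classical image-based form: given an image Jacobian J and a feature error e, command the camera velocity that drives the error toward zero. The sketch below is that generic law; the paper's contribution is the light-field feature and Jacobian that such a loop would consume:

```python
import numpy as np

def ibvs_velocity(J, feature_error, gain=0.5):
    """Classical IBVS control law: v = -lambda * pinv(J) @ e.
    J is the (num_features x 6) image Jacobian mapping camera
    velocity to feature velocity; the returned 6-vector is the
    commanded translational and angular camera velocity."""
    return -gain * np.linalg.pinv(J) @ feature_error

# With a (hypothetical) identity Jacobian the command is simply a
# damped step down the feature error.
v = ibvs_velocity(np.eye(6), np.ones(6), gain=0.5)
```

Using the pseudoinverse handles over-determined feature sets; the paper additionally estimates J without point depth, which this generic sketch leaves unspecified.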


IEEE Aerospace Conference | 2016

LunaRoo: Designing a hopping lunar science payload

Jürgen Leitner; William Chamberlain; Donald G. Dansereau; Matthew Dunbabin; Markus Eich; Thierry Peynot; Jonathan M. Roberts; Raymond Russell; Niko Sünderhauf

We describe a hopping science payload designed to exploit the Moon's lower gravity to leap up to 20 m above the surface. The entire solar-powered robot is compact enough to fit within a 10 cm cube, whilst providing unique observation and mission capabilities by creating imagery during the hop. The LunaRoo concept is a proposed payload to fly onboard a Google Lunar XPrize entry. Its compact form is specifically designed for lunar exploration and science missions within the constraints given by PTScientists. The core features of LunaRoo are its method of locomotion, hopping like a kangaroo, and its imaging system capable of unique over-the-horizon perception. The payload will serve as a proof of concept, highlighting the benefits of alternative mobility solutions, in particular enabling observation and exploration of terrain not traversable by wheeled robots, in addition to providing data for beyond-line-of-sight planning and communications for surface assets, extending overall mission capabilities.
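The 20 m apex is plausible under simple vacuum ballistics at lunar gravity; a quick check (our arithmetic, assuming a purely vertical hop and g = 1.62 m/s²):

```python
import math

LUNAR_G = 1.62  # m/s^2, standard lunar surface gravity

def launch_speed_for_hop(height_m, g=LUNAR_G):
    """Vertical launch speed needed to reach a given apex height
    under constant gravity in vacuum: v = sqrt(2 g h)."""
    return math.sqrt(2.0 * g * height_m)

v = launch_speed_for_hop(20.0)     # about 8.05 m/s for a 20 m hop
t_flight = 2.0 * v / LUNAR_G       # about 9.9 s aloft for imaging
```

On Earth the same launch speed would reach only about 3.3 m, which is the advantage the concept exploits; each hop also buys roughly ten seconds of elevated imaging time.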


arXiv: Robotics | 2016

Mirrored Light Field Video Camera Adapter

Dorian Tsai; Donald G. Dansereau; Steve Martin; Peter Corke


Biomedical Optics Express | 2018

Glare-free retinal imaging using a portable light field fundus camera

Douglas W. Palmer; Thomas Coppin; Krishan Rana; Donald G. Dansereau; Marwan Suheimat; Michelle L. Maynard; David A. Atchison; Jonathan M. Roberts; Ross Crawford; Anjali Jaiprakash

Collaboration


Dive into Donald G. Dansereau's collaborations.

Top Co-Authors:
- Peter Corke (Queensland University of Technology)
- Jürgen Leitner (Dalle Molle Institute for Artificial Intelligence Research)
- Dorian Tsai (Queensland University of Technology)
- Jonathan M. Roberts (Queensland University of Technology)
- Thierry Peynot (Queensland University of Technology)
- Huimin Lu (Kyushu Institute of Technology)
- Anjali Jaiprakash (Queensland University of Technology)