
Publications


Featured research published by David Molyneaux.


international symposium on mixed and augmented reality | 2011

KinectFusion: Real-time dense surface mapping and tracking

Richard A. Newcombe; Shahram Izadi; Otmar Hilliges; David Molyneaux; David Kim; Andrew J. Davison; Pushmeet Kohli; Jamie Shotton; Steve Hodges; Andrew W. Fitzgibbon

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room-sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes in real-time, with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR); in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
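
The coarse-to-fine ICP tracking described above aligns each live depth frame against a prediction of the global surface model. As a rough illustration only (not the paper's GPU implementation), the sketch below shows a single linearized point-to-plane ICP update, assuming correspondences and model normals are already given by projective data association; all names and shapes are placeholders.

```python
import numpy as np

# Illustrative single point-to-plane ICP step, as used for camera tracking
# against the global surface model. Correspondences (src, dst, dst normals)
# are assumed to come from projective data association; this is a sketch,
# not the paper's GPU implementation.

def icp_point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update.

    src, dst : (N, 3) corresponding 3D points (live frame / model prediction)
    normals  : (N, 3) unit normals at the model points
    Returns a 4x4 incremental rigid transform to apply to `src`.
    """
    # Linearized system: for a small rotation r and translation t,
    # n . ((src + r x src + t) - dst) ~= 0 for each correspondence.
    A = np.hstack([np.cross(src, normals), normals])      # (N, 6)
    b = np.einsum('ij,ij->i', normals, dst - src)          # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)              # [rx, ry, rz, tx, ty, tz]
    rx, ry, rz, tx, ty, tz = x
    # Small-angle rotation approximation assembled into a rigid transform.
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -rz,  ry],
                          [rz,  1.0, -rx],
                          [-ry, rx,  1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```

In the full coarse-to-fine scheme this step would be iterated over an image pyramid, with correspondences re-estimated after each update.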


user interface software and technology | 2011

KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera

Shahram Izadi; David Kim; Otmar Hilliges; David Molyneaux; Richard A. Newcombe; Pushmeet Kohli; Jamie Shotton; Steve Hodges; Dustin Freeman; Andrew J. Davison; Andrew W. Fitzgibbon

KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
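
As a rough idea of what the first stage of such a per-frame depth pipeline looks like, the sketch below back-projects a depth image into a vertex map and estimates a normal map from neighbouring vertices. It is a CPU/NumPy approximation for illustration only, not the paper's GPU pipeline; the Kinect-like intrinsics are placeholder values.

```python
import numpy as np

# Sketch of the first stage of a KinectFusion-style per-frame pipeline:
# back-project the depth image into a vertex map, then estimate a normal
# map from neighbouring vertices. A CPU/NumPy approximation of what the
# paper describes running on the GPU; intrinsics below are placeholders.

def depth_to_vertex_map(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image (metres) into an (H, W, 3) vertex map."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

def vertex_to_normal_map(vertices):
    """Estimate normals from differences of neighbouring vertices."""
    dx = np.roll(vertices, -1, axis=1) - vertices   # neighbour to the right
    dy = np.roll(vertices, -1, axis=0) - vertices   # neighbour below
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm > 0, norm, 1.0)

# Example with Kinect-like intrinsics (illustrative values only).
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
vmap = depth_to_vertex_map(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
nmap = vertex_to_normal_map(vmap)
```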


international conference on computer graphics and interactive techniques | 2011

KinectFusion: real-time dynamic 3D surface reconstruction and interaction

Shahram Izadi; Richard A. Newcombe; David Kim; Otmar Hilliges; David Molyneaux; Steve Hodges; Pushmeet Kohli; Jamie Shotton; Andrew J. Davison; Andrew W. Fitzgibbon

We present KinectFusion, a system that takes live depth data from a moving Kinect camera and in real-time creates high-quality, geometrically accurate 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space, and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.
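
The super-resolution-like refinement described above comes from fusing every new depth frame into the same volumetric model. The sketch below illustrates one plausible form of that fusion, a truncated signed distance volume updated by a running weighted average; it is not the paper's GPU implementation, and the truncation distance, camera model and names are assumptions.

```python
import numpy as np

# Minimal sketch of volumetric fusion: each voxel stores a truncated signed
# distance and a weight, updated as a running weighted average so that every
# extra view of a voxel refines its value. Illustration only, not the
# paper's GPU implementation.

TRUNC = 0.03  # truncation distance in metres (illustrative)

def integrate_frame(tsdf, weight, voxel_centers, depth, K, T_cam_from_world):
    """Fuse one (H, W) depth frame (metres) into the global TSDF volume."""
    H, W = depth.shape
    # Voxel centres into the camera frame, then project with pinhole intrinsics K.
    pts = (T_cam_from_world[:3, :3] @ voxel_centers.T).T + T_cam_from_world[:3, 3]
    z = pts[:, 2]
    safe_z = np.where(z > 0, z, np.inf)          # voxels behind the camera are ignored
    uv = (K @ pts.T).T
    u = uv[:, 0] / safe_z
    v = uv[:, 1] / safe_z
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.where(valid)[0]
    d = depth[v[idx].astype(int), u[idx].astype(int)]
    measured = d > 0                              # pixels with a valid depth reading
    idx = idx[measured]
    # Truncated signed distance between the measured surface and the voxel.
    sdf = np.clip(d[measured] - z[idx], -TRUNC, TRUNC)
    # Running weighted average: new frames sharpen the fused surface over time.
    w_old = weight[idx]
    tsdf[idx] = (tsdf[idx] * w_old + sdf) / (w_old + 1.0)
    weight[idx] = w_old + 1.0
    return tsdf, weight
```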


human factors in computing systems | 2012

Shake'n'sense: reducing interference for overlapping structured light depth cameras

D. Alex Butler; Shahram Izadi; Otmar Hilliges; David Molyneaux; Steve Hodges; David Kim

We present a novel yet simple technique that mitigates the interference caused when multiple structured light depth cameras point at the same part of a scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. Our technique requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software. It is therefore simple to replicate. We show qualitative and quantitative results highlighting the improvements made to interfering Kinect depth signals. Unlike approaches that modulate the structured light source, our technique does not compromise the camera frame rate; it is also non-destructive and does not impact depth values or geometry. We discuss uses for our technique, in particular within instrumented rooms that require simultaneous use of multiple overlapping fixed Kinect cameras to support whole-room interactions.
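
One simple way to quantify this kind of improvement is the fraction of depth pixels that drop out when two cameras interfere. The sketch below uses that ratio as a stand-in metric; it is illustrative only and not necessarily the measure used in the paper's quantitative evaluation.

```python
import numpy as np

# Illustrative way to quantify depth interference between overlapping
# structured light cameras: the fraction of invalid (zero) depth pixels per
# frame. A plausible stand-in metric, not necessarily the exact measure
# used in the paper's evaluation.

def invalid_pixel_ratio(depth_frame):
    """Return the fraction of pixels with no valid depth (reported as 0)."""
    return float(np.count_nonzero(depth_frame == 0)) / depth_frame.size

# Example: compare frames captured with and without the mitigation applied
# (synthetic frames here, for illustration only).
frame_interfering = np.random.choice([0.0, 1.5], size=(480, 640), p=[0.3, 0.7])
frame_mitigated = np.random.choice([0.0, 1.5], size=(480, 640), p=[0.05, 0.95])
print(invalid_pixel_ratio(frame_interfering), invalid_pixel_ratio(frame_mitigated))
```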


user interface software and technology | 2012

PICOntrol: using a handheld projector for direct control of physical devices through visible light

Dominik Schmidt; David Molyneaux; Xiang Cao

Today's environments are populated with a growing number of electric devices which come in diverse form factors and provide a plethora of functions. However, rich interaction with these devices can become challenging if they need to be controlled from a distance, or are too small to accommodate user interfaces on their own. In this work, we explore PICOntrol, a new approach using an off-the-shelf handheld pico projector for direct control of physical devices through visible light. The projected image serves a dual purpose by simultaneously presenting a visible interface to the user, and transmitting embedded control information to inexpensive sensor units integrated with the devices. To use PICOntrol, the user points the handheld projector at a target device, overlays a projected user interface on its sensor unit, and performs various GUI-style or gestural interactions. PICOntrol enables direct, visible, and rich interactions with various physical devices without requiring central infrastructure. We present our prototype implementation as well as explorations of its interaction space through various application examples.
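
The abstract does not specify how the control information is embedded in the projected light, so the sketch below shows one hypothetical scheme: modulating the brightness of the projected region over a device's light sensor across successive frames, with a matching threshold-based decoder. Framing, bit rate and thresholds are all assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch of embedding control bits in projected light by
# modulating the brightness of the region over a device's light sensor
# across successive frames. The actual PICOntrol encoding is not specified
# in the abstract; levels, bit length and thresholds here are assumptions.

BRIGHT, DIM = 1.0, 0.2   # projector output levels over the sensor (normalized)
THRESHOLD = 0.6          # sensor-side decision threshold

def encode_command(command_id, n_bits=8):
    """Turn a command ID into a sequence of per-frame brightness levels."""
    bits = [(command_id >> i) & 1 for i in reversed(range(n_bits))]
    return [BRIGHT if b else DIM for b in bits]

def decode_command(sensor_samples):
    """Recover the command ID from one brightness sample per frame."""
    value = 0
    for sample in sensor_samples:
        value = (value << 1) | (1 if sample > THRESHOLD else 0)
    return value

# Round-trip example: projector encodes, sensor unit decodes.
levels = encode_command(0x2A)
assert decode_command(levels) == 0x2A
```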


user interface software and technology | 2011

Vermeer: direct interaction with a 360° viewable 3D display

Alex Butler; Otmar Hilliges; Shahram Izadi; Steve Hodges; David Molyneaux; David Kim; Danny Kong

We present Vermeer, a novel interactive 360° viewable 3D display. Like prior systems in this area, Vermeer provides viewpoint-corrected, stereoscopic 3D graphics to simultaneous users, 360° around the display, without the need for eyewear or other user instrumentation. Our goal is to overcome an issue inherent in these prior systems, which, typically due to moving parts, restrict interactions to outside the display volume. Our system leverages a known optical illusion to demonstrate, for the first time, how users can reach into and directly touch 3D objects inside the display volume. Vermeer is intended to be a new enabling technology for interaction, and we therefore describe our hardware implementation in full, focusing on the challenges of combining this optical configuration with an existing approach for creating a 360° viewable 3D display. Initially we demonstrate direct in-volume interaction by sensing user input with a Kinect camera placed above the display. However, by exploiting the properties of the optical configuration, we also demonstrate novel prototypes for fully integrated input sensing alongside simultaneous display. We conclude by discussing limitations, implications for interaction, and ideas for future work.


ubiquitous computing | 2007

Cooperative augmentation of smart objects with projector-camera systems

David Molyneaux; Hans Gellersen; Gerd Kortuem; Bernt Schiele

In this paper we present a new approach for cooperation between mobile smart objects and projector-camera systems to enable augmentation of the surface of objects with interactive projected displays. We investigate how a smart object's capability for self-description and sensing can be used in cooperation with the vision capability of projector-camera systems to help locate, track and display information onto object surfaces in an unconstrained environment. Finally, we develop a framework that can be applied to distributed projector-camera systems and cope with varying levels of description knowledge and different sensors embedded in an object.
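
As an illustration of what a smart object's self-description might contain, the sketch below defines a hypothetical message structure an object could send to a projector-camera system; the field names and example values are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical self-description a smart object could send to a
# projector-camera system so it can choose how to detect, track and project
# onto the object. Field names and example values are assumptions.

@dataclass
class AppearanceCue:
    kind: str        # e.g. "colour_histogram", "sift_keypoints", "shape_template"
    data_uri: str    # where the cue data (model or template) can be fetched

@dataclass
class SmartObjectDescription:
    object_id: str
    dimensions_m: Tuple[float, float, float]             # width, height, depth
    appearance_cues: List[AppearanceCue] = field(default_factory=list)
    sensors: List[str] = field(default_factory=list)      # e.g. ["accelerometer", "light"]
    display_surface: str = "top"                           # face that should receive the projection

# Example description an object might broadcast on joining the environment
# (identifiers and URI are placeholders).
album = SmartObjectDescription(
    object_id="photo-album-01",
    dimensions_m=(0.30, 0.22, 0.03),
    appearance_cues=[AppearanceCue("sift_keypoints", "http://object.local/sift.bin")],
    sensors=["accelerometer"],
)
```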


tangible and embedded interaction | 2009

Projected interfaces: enabling serendipitous interaction with smart tangible objects

David Molyneaux; Hans Gellersen

The Projected Interfaces architecture enables bi-directional user interaction with smart tangible objects. Smart objects function as both input and output devices simultaneously by cooperating with projector-camera systems to achieve a projected display on their surfaces. Tangible manipulation of the object and camera-based tracking allow interaction directly with the projected display. Such hybrid interfaces benefit from both the flexibility offered by the GUI and the intuitiveness of the TUI. In this paper we present the theory behind interaction with projected interfaces, together with an architecture design and a proof-of-concept implementation using an augmented photograph album.


KSII Transactions on Internet and Information Systems | 2013

Cooperative augmentation of mobile smart objects with projected displays

David Molyneaux; Hans Gellersen; Joe Finney

Sensors, processors, and radios can be integrated invisibly into objects to make them smart and sensitive to user interaction, but feedback is often limited to beeps, blinks, or buzzes. We propose to redress this input-output imbalance by augmentation of smart objects with projected displays that, unlike physical displays, allow seamless integration with the natural appearance of an object. In this article, we investigate how, in a ubiquitous computing world, smart objects can acquire and control a projection. We consider that projectors and cameras are ubiquitous in the environment, and we develop a novel conception and system that enables smart objects to spontaneously associate with projector-camera systems for cooperative augmentation. Projector-camera systems are conceived as generic, supporting standard computer vision methods for different appearance cues, and smart objects provide a model of their appearance for method selection at runtime, as well as sensor observations to constrain the visual detection process. Cooperative detection results in accurate location and pose of the object, which is then tracked for visual augmentation in response to display requests by the smart object. In this article, we define the conceptual framework underlying our approach; report on computer vision experiments that give original insight into natural appearance-based detection of everyday objects; show how object sensing can be used to increase speed and robustness of visual detection; describe and evaluate a fully implemented system; and describe two smart object applications to illustrate the system's cooperative augmentation process and the embodied interactions it enables with smart objects.
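
The runtime method selection and sensor-constrained detection described above can be pictured with a small sketch: the projector-camera system picks a vision method from the cues the object advertises, and the object's own sensing (for example, whether it is moving) narrows the visual search. The method names, priority order and region logic are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the cooperative detection idea: a generic projector-camera
# system selects a vision method from the appearance cues an object
# advertises, and the object's sensor observations constrain the search.
# Method names, priority order and region logic are assumptions.

METHOD_FOR_CUE = {
    "sift_keypoints": "local_feature_matching",
    "colour_histogram": "histogram_backprojection",
    "shape_template": "template_matching",
}
# Preference order when an object offers several cues (assumed ordering).
CUE_PRIORITY = ["sift_keypoints", "shape_template", "colour_histogram"]

def select_detection_method(advertised_cues):
    """Pick the vision method matching the best cue the object can describe."""
    for cue in CUE_PRIORITY:
        if cue in advertised_cues:
            return METHOD_FOR_CUE[cue]
    raise ValueError("object advertises no cue this camera system supports")

def search_region(full_frame_region, last_known_region, object_is_moving):
    """Use the object's motion sensing to constrain the visual search."""
    # A stationary object can only be where it was last seen; a moving one
    # forces a wider (here: full-frame) search.
    return full_frame_region if object_is_moving else last_known_region

method = select_detection_method(["colour_histogram", "sift_keypoints"])
region = search_region((0, 0, 640, 480), (200, 120, 320, 260), object_is_moving=False)
```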


european conference on smart sensing and context | 2008

Vision-Based Detection of Mobile Smart Objects

David Molyneaux; Hans Gellersen; Bernt Schiele

We evaluate an approach for mobile smart objects to cooperate with projector-camera systems to achieve interactive projected displays on their surfaces without changing their appearance or function. Smart objects describe their appearance directly to the projector-camera system, enabling vision-based detection based on their natural appearance. This detection is a significant challenge, as objects differ in appearance and appear at varying distances and orientations with respect to a tracking camera. We investigate four detection approaches representing different appearance cues and contribute three experimental studies analysing the impact on detection performance, firstly of scale and rotation, secondly of the combination of multiple appearance cues, and thirdly of the use of context information from the smart object. We find that the training of appearance descriptions must match the scales and orientations that provide the best detection performance, that multiple cues provide a clear performance gain over a single cue, and that context sensing masks distractions and clutter, further improving detection performance.
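
As a toy illustration of combining multiple appearance cues and masking some of them with context information, the sketch below fuses per-cue detection scores with a weighted average; the weights and the gating rule are assumptions, not the combination method evaluated in the paper.

```python
# Illustrative combination of detection scores from multiple appearance
# cues, with a context gate from the smart object's own sensing (e.g. low
# ambient light masks the colour cue). The weighting scheme is an
# assumption used only to show the idea of fusing cues.

def fuse_cue_scores(cue_scores, cue_weights, active_cues):
    """Weighted combination of per-cue detection scores.

    cue_scores  : dict cue name -> score in [0, 1] for a candidate location
    cue_weights : dict cue name -> relative weight
    active_cues : cues not masked out by context sensing
    """
    total_w = sum(cue_weights[c] for c in active_cues)
    return sum(cue_scores[c] * cue_weights[c] for c in active_cues) / total_w

scores = {"sift_keypoints": 0.9, "colour_histogram": 0.4, "shape_template": 0.7}
weights = {"sift_keypoints": 0.5, "colour_histogram": 0.2, "shape_template": 0.3}
# Context sensing (e.g. low ambient light) masks the colour cue.
print(fuse_cue_scores(scores, weights, active_cues=["sift_keypoints", "shape_template"]))
```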

