
Publication


Featured research published by Steve Hodges.


International Symposium on Mixed and Augmented Reality | 2011

KinectFusion: Real-time dense surface mapping and tracking

Richard A. Newcombe; Shahram Izadi; Otmar Hilliges; David Molyneaux; David Kim; Andrew J. Davison; Pushmeet Kohli; Jamie Shotton; Steve Hodges; Andrew W. Fitzgibbon

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room-sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR); in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
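
To make the fusion step concrete, here is a minimal sketch of the standard truncated signed distance function (TSDF) update that KinectFusion-style systems use to merge each depth frame into the global model with a weighted running average. This is an illustrative CPU/NumPy formulation, not the paper's GPU implementation; the function name, voxel layout, camera intrinsics (fx, fy, cx, cy), truncation distance and weight cap are all assumptions for the sketch.

    import numpy as np

    # Illustrative per-frame TSDF fusion (CPU, NumPy). The real KinectFusion
    # pipeline performs an equivalent per-voxel update on the GPU.
    def update_tsdf(tsdf, weights, voxel_centers, depth, pose,
                    fx, fy, cx, cy, trunc=0.03, max_weight=100.0):
        """Fuse one depth frame into a TSDF volume via a weighted running average.

        tsdf, weights : flat arrays, one entry per voxel (init to 1.0 and 0.0)
        voxel_centers : (N, 3) voxel centres in world coordinates, metres
        depth         : (H, W) depth image in metres (0 where invalid)
        pose          : 4x4 camera-to-world transform for this frame
        """
        h, w = depth.shape
        world_to_cam = np.linalg.inv(pose)
        pts = (world_to_cam[:3, :3] @ voxel_centers.T + world_to_cam[:3, 3:4]).T

        # Project every voxel centre into the current depth image.
        z = pts[:, 2]
        z_safe = np.where(z > 1e-6, z, 1.0)
        u = np.round(pts[:, 0] * fx / z_safe + cx).astype(int)
        v = np.round(pts[:, 1] * fy / z_safe + cy).astype(int)
        valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

        d = np.zeros_like(z)
        d[valid] = depth[v[valid], u[valid]]
        valid &= d > 0

        # Signed distance along the ray, truncated and normalised to [-1, 1].
        sdf = np.clip(d - z, -trunc, trunc) / trunc
        upd = valid & (d - z > -trunc)          # skip voxels far behind the surface

        # Weighted running average of the TSDF value per voxel.
        w_old = weights[upd]
        tsdf[upd] = (tsdf[upd] * w_old + sdf[upd]) / (w_old + 1.0)
        weights[upd] = np.minimum(w_old + 1.0, max_weight)
        return tsdf, weights

In this sketch, the simultaneous camera tracking described above would align each new depth frame to a surface prediction raycast from this volume using coarse-to-fine ICP; that step is omitted here for brevity.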


User Interface Software and Technology | 2011

KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera

Shahram Izadi; David Kim; Otmar Hilliges; David Molyneaux; Richard A. Newcombe; Pushmeet Kohli; Jamie Shotton; Steve Hodges; Dustin Freeman; Andrew J. Davison; Andrew W. Fitzgibbon

KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
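
The "geometry-aware" touch interaction mentioned above can be made concrete with a small sketch, under the assumption (not spelled out in the abstract) that a live depth frame is compared against a depth map raycast from the reconstruction at the same camera pose, with pixels lying slightly in front of the reconstructed surface treated as contact. The function name and thresholds are illustrative.

    import numpy as np

    # Hedged sketch: flag pixels that appear to touch a reconstructed surface
    # by comparing live depth against depth raycast from the model at the same
    # camera pose. Thresholds below are illustrative assumptions.
    def touch_mask(live_depth, model_depth, near=0.005, far=0.03):
        """Boolean mask of pixels within a small band in front of the surface.

        live_depth  : (H, W) sensor depth in metres (0 = invalid)
        model_depth : (H, W) depth raycast from the reconstruction, metres
        near, far   : band (metres) in front of the surface counted as contact
        """
        valid = (live_depth > 0) & (model_depth > 0)
        gap = model_depth - live_depth      # > 0 means in front of the surface
        return valid & (gap > near) & (gap < far)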


Ubiquitous Computing | 2006

SenseCam: a retrospective memory aid

Steve Hodges; Lyndsay Williams; Emma Berry; Shahram Izadi; James Srinivasan; Alex Butler; Gavin Smyth; Narinder Kapur; Kenneth R. Wood

This paper presents a novel ubiquitous computing device, the SenseCam, a sensor-augmented wearable stills camera. SenseCam is designed to capture a digital record of the wearer's day, by recording a series of images and capturing a log of sensor data. We believe that reviewing this information will help the wearer recollect aspects of earlier experiences that have subsequently been forgotten, and thereby form a powerful retrospective memory aid. In this paper we review existing work on memory aids and conclude that there is scope for an improved device. We then report on the design of SenseCam in some detail for the first time. We explain the details of a first in-depth user study of this device, a 12-month clinical trial with a patient suffering from amnesia. The results of this initial evaluation are extremely promising; periodic review of images of events recorded by SenseCam results in significant recall of those events by the patient, which was previously impossible. We end the paper with a discussion of future work, including the application of SenseCam to a wider audience, such as those with neurodegenerative conditions like Alzheimer's disease.
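
As a rough illustration of the capture behaviour described above, the sketch below shows a sensor-triggered capture loop: take a photo when a sensor reading changes markedly or when a fallback timer expires, while continuously logging sensor data. The sensor set, thresholds, polling rate and function names are assumptions for illustration, not the actual SenseCam firmware.

    import time

    LIGHT_DELTA = 50        # assumed change in light level that triggers a capture
    MAX_INTERVAL = 30.0     # assumed fallback: capture at least this often (seconds)

    def capture_loop(read_light_level, detect_person, take_photo, log_sensors):
        """Simplified SenseCam-style loop: log sensors, capture on change or timeout."""
        last_capture = 0.0
        last_light = read_light_level()
        while True:
            now = time.monotonic()
            light = read_light_level()
            log_sensors(now, light)                      # continuous sensor log
            triggered = (
                abs(light - last_light) > LIGHT_DELTA    # lighting changed, e.g. new room
                or detect_person()                       # passive IR sees someone nearby
                or now - last_capture > MAX_INTERVAL     # periodic fallback capture
            )
            if triggered:
                take_photo(now)
                last_capture, last_light = now, light
            time.sleep(1.0)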


User Interface Software and Technology | 2008

SideSight: multi-"touch" interaction around small devices

Alex Butler; Shahram Izadi; Steve Hodges

Interacting with mobile devices using touch can lead to fingers occluding valuable screen real estate. For the smallest devices, the idea of using a touch-enabled display is almost wholly impractical. In this paper we investigate sensing user touch around small screens like these. We describe a prototype device with infra-red (IR) proximity sensors embedded along each side and capable of detecting the presence and position of fingers in the adjacent regions. When this device is rested on a flat surface, such as a table or desk, the user can carry out single and multi-touch gestures using the space around the device. This gives a larger input space than would otherwise be possible, which may be used in conjunction with or instead of on-display touch input. Following a detailed description of our prototype, we discuss some of the interactions it affords.
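
To make the edge-sensing idea concrete, here is a minimal sketch that converts one edge's strip of IR proximity readings into approximate finger positions by picking local peaks and interpolating between neighbouring sensors. The sensor pitch, threshold and function name are illustrative assumptions, not SideSight's actual values or processing.

    SENSOR_PITCH_MM = 10.0   # assumed spacing between adjacent IR sensors
    THRESHOLD = 0.2          # assumed normalised reflectance indicating a finger

    def finger_positions(readings, pitch=SENSOR_PITCH_MM, threshold=THRESHOLD):
        """Estimate finger positions (mm along the edge) from one sensor strip.

        readings: normalised proximity values (0 = nothing, 1 = very close),
                  one per sensor along the device edge.
        """
        fingers = []
        for i, r in enumerate(readings):
            left = readings[i - 1] if i > 0 else 0.0
            right = readings[i + 1] if i + 1 < len(readings) else 0.0
            # Treat a local peak above threshold as one finger; interpolate
            # between neighbours for sub-sensor resolution.
            if r > threshold and r >= left and r >= right:
                denom = left + r + right
                offset = (right - left) / denom if denom else 0.0
                fingers.append((i + offset) * pitch)
        return fingers

    # Example: one finger hovering between the 4th and 5th sensors.
    # finger_positions([0, 0, 0.1, 0.8, 0.6, 0.1, 0, 0]) -> [~33.3]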


User Interface Software and Technology | 2007

ThinSight: versatile multi-touch sensing for thin form-factor displays

Steve Hodges; Shahram Izadi; Alex Butler; Alban Rrustemi; Bill Buxton

ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display. The approach of optical sensing also opens up the exciting possibility of detecting other physical objects and visual markers through the display, and some initial experiments are described. We also discuss other novel capabilities of our system: interaction at a distance using IR pointing devices, and IR-based communication with other electronic devices through the display. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, thin form factor, making such systems even more deployable. We therefore envisage using ThinSight to capture rich sensor data through the display, which can be processed using computer vision techniques to enable both multi-touch and tangible interaction.
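
As an illustration of how such raw sensor data might be turned into touch points, the following sketch thresholds the low-resolution "IR image" produced by a grid of optical sensors and takes the centroid of each connected blob as a fingertip position. The grid values, threshold and the use of SciPy's connected-component labelling are assumptions for the sketch, not ThinSight's actual processing pipeline.

    import numpy as np
    from scipy import ndimage

    def fingertip_centroids(ir_frame, threshold=0.5, min_pixels=2):
        """Return (row, col) centroids of bright blobs in a 2D IR sensor frame."""
        mask = ir_frame > threshold            # pixels bright enough to be a fingertip
        labels, n = ndimage.label(mask)        # group adjacent bright pixels into blobs
        centroids = []
        for blob in range(1, n + 1):
            ys, xs = np.nonzero(labels == blob)
            if ys.size >= min_pixels:          # ignore single-sensor noise
                centroids.append((ys.mean(), xs.mean()))
        return centroids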


Neuropsychological Rehabilitation | 2007

The use of a wearable camera, SenseCam, as a pictorial diary to improve autobiographical memory in a patient with limbic encephalitis: A preliminary report

Emma Berry; Narinder Kapur; Lyndsay Williams; Steve Hodges; Peter Watson; Gavin Smyth; James Srinivasan; Reg Smith; Barbara A. Wilson; Ken Wood

This case study describes the use of a wearable camera, SenseCam, which automatically captures several hundred images per day, to aid autobiographical memory in a patient, Mrs B, with severe memory impairment following limbic encephalitis. By using SenseCam to record personally experienced events, we intended that SenseCam pictures would form a pictorial diary to cue and consolidate autobiographical memories. After wearing SenseCam, Mrs B plugged the camera into a PC, which uploaded the recorded images and allowed them to be viewed at speed, like watching a movie. In the control condition, a written diary was used to record and remind her of autobiographical events. After viewing SenseCam images, Mrs B was able to recall approximately 80% of recent, personally experienced events. Retention of events was maintained in the long term, 11 months afterwards, and without viewing SenseCam images for three months. After using the written diary, Mrs B was able to remember around 49% of an event; after one month with no diary readings she had no recall of the same events. We suggest that factors relating to rehearsal/re-consolidation may have enabled SenseCam images to improve Mrs B's autobiographical recollection.


Memory | 2011

SenseCam: A wearable camera that stimulates and rehabilitates autobiographical memory

Steve Hodges; Emma Berry; Ken Wood

SenseCam is a wearable digital camera that captures an electronic record of the wearer's day. It does this by automatically recording a series of still images through its wide-angle lens, and simultaneously capturing a log of data from a number of built-in electronic sensors. Subsequently reviewing a sequence of images appears to provide a powerful autobiographical memory cue. A preliminary evaluation of SenseCam with a patient diagnosed with severe memory impairment was extremely positive; periodic review of images of events recorded by SenseCam resulted in significant recall of those events. Following this, a great deal of work has been undertaken to explore this phenomenon, and there are early indications that SenseCam technology may be beneficial to a variety of patients with physical and mental health problems, and is valuable as a tool for investigating normal memory through behavioural and neuroimaging means. Elsewhere, it is becoming clear that judicious use of SenseCam could significantly impact the study of human behaviour. Meanwhile, research and development of the technology itself continues with the aim of providing robust hardware and software tools to meet the needs of clinicians, patients, carers, and researchers. In this paper we describe the history of SenseCam, the design and operation of the SenseCam device and the associated viewing software, and we discuss some of the ongoing research questions being addressed with the help of SenseCam.


International Conference on Computer Graphics and Interactive Techniques | 2011

KinectFusion: real-time dynamic 3D surface reconstruction and interaction

Shahram Izadi; Richard A. Newcombe; David Kim; Otmar Hilliges; David Molyneaux; Steve Hodges; Pushmeet Kohli; Jamie Shotton; Andrew J. Davison; Andrew W. Fitzgibbon

We present KinectFusion, a system that takes live depth data from a moving Kinect camera and in real-time creates high-quality, geometrically accurate, 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space, and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.


Human Factors in Computing Systems | 2012

Shake'n'sense: reducing interference for overlapping structured light depth cameras

D. Alex Butler; Shahram Izadi; Otmar Hilliges; David Molyneaux; Steve Hodges; David Kim

We present a novel yet simple technique that mitigates the interference caused when multiple structured light depth cameras point at the same part of a scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. Our technique requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software. It is therefore simple to replicate. We show qualitative and quantitative results highlighting the improvements made to interfering Kinect depth signals. The camera frame rate is not compromised, which is a problem in approaches that modulate the structured light source. Our technique is non-destructive and does not impact depth values or geometry. We discuss uses for our technique, in particular within instrumented rooms that require simultaneous use of multiple overlapping fixed Kinect cameras to support whole room interactions.


International Journal of Behavioral Nutrition and Physical Activity | 2011

Can we use digital life-log images to investigate active and sedentary travel behaviour? Results from a pilot study.

Paul Kelly; Aiden R. Doherty; Emma Berry; Steve Hodges; Alan M. Batterham; Charlie Foster

Background: Active travel such as walking and cycling has potential to increase physical activity levels in sedentary individuals. Motorised car travel is a sedentary behaviour that contributes to carbon emissions. There have been recent calls for technology that will improve our ability to measure these travel behaviours, and in particular evaluate modes and volumes of active versus sedentary travel. The purpose of this pilot study is to investigate the potential efficacy of a new electronic measurement device, a wearable digital camera called SenseCam, in travel research.

Methods: Participants (n = 20) were required to wear the SenseCam device for one full day of travel. The device automatically records approximately 3,600 time-stamped, first-person point-of-view images per day, without any action required by the wearer. Participants also completed a self-report travel diary over the same period for comparison, and were interviewed afterwards to assess user burden and experience.

Results: There were a total of 105 confirmed journeys in this pilot. The new SenseCam device recorded more journeys than the travel diary (99 vs. 94). Although the two measures demonstrated an acceptable correlation for journey duration (r = 0.92, p < 0.001), self-reported journey duration was over-reported (mean difference 154 s per journey; 95% CI = 89 to 218 s; 95% limits of agreement = 154 ± 598 s (-444 to 752 s)). The device also provided visual data that was used for directed interviews about sources of error.

Conclusions: Direct observation of travel behaviour from time-stamped images shows considerable potential in the field of travel research. Journey duration derived from direct observation of travel behaviour from time-stamped images appears to suggest over-reporting of self-reported journey duration.
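
For readers unfamiliar with "limits of agreement", the interval quoted above follows the standard Bland-Altman construction: the mean per-journey difference plus or minus 1.96 standard deviations of the differences. The standard deviation below is back-calculated from the quoted half-width, an inference rather than a value reported in the abstract.

    % Bland-Altman 95% limits of agreement for the per-journey difference
    % d = (self-reported duration) - (image-derived duration):
    \[
      \mathrm{LoA}_{95\%} = \bar{d} \pm 1.96\, s_d
                          = 154 \pm 598~\mathrm{s}
                          \approx (-444,\; 752)~\mathrm{s},
      \qquad s_d \approx \frac{598}{1.96} \approx 305~\mathrm{s}.
    \]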

Collaboration


Top co-authors of Steve Hodges and their affiliations.


Narinder Kapur

University College London


Alan Thorne

University of Cambridge
