Publication


Featured research published by Roger J. Hubbold.


ACM Transactions on Computer-Human Interaction | 2007

Modeling the effects of delayed haptic and visual feedback in a collaborative virtual environment

Caroline Jay; Mashhuda Glencross; Roger J. Hubbold

Collaborative virtual environments (CVEs) enable two or more people, separated in the real world, to share the same virtual “space.” They can be used for many purposes, from teleconferencing to training people to perform assembly tasks. Unfortunately, the effectiveness of CVEs is compromised by one major problem: the delay that exists in the networks linking users together. Whilst we have a good understanding, especially in the visual modality, of how users are affected by delayed feedback from their own actions, little research has systematically examined how users are affected by delayed feedback from other people, particularly in environments that support haptic (force) feedback. The current study addresses this issue by quantifying how increasing levels of latency affect visual and haptic feedback in a collaborative target acquisition task. Our results demonstrate that haptic feedback in particular is very sensitive to low levels of delay. Whilst latency affects visual feedback from 50 ms, it impacts on haptic task performance 25 ms earlier, and causes the haptic measures of performance deterioration to rise far more steeply than visual. The “impact-perceive-adapt” model of user performance, which considers the interaction between performance measures, perception of latency, and the breakdown of perception of immediate causality, is proposed as an explanation for the observed pattern of performance.


International Symposium on Mixed and Augmented Reality | 2002

Accurate camera calibration for off-line, video-based augmented reality

Simon Gibson; Jonathan Cook; Toby Howard; Roger J. Hubbold; Daniel Oram

Camera tracking is a fundamental requirement for video-based augmented reality applications. The ability to accurately calculate the intrinsic and extrinsic camera parameters for each frame of a video sequence is essential if synthetic objects are to be integrated into the image data in a believable way. In this paper, we present an accurate and reliable approach to camera calibration for off-line video-based augmented reality applications. We first describe an improved feature tracking algorithm, based on the widely used Kanade-Lucas-Tomasi tracker. Estimates of inter-frame camera motion are used to guide tracking, greatly reducing the number of incorrectly tracked features. We then present a robust hierarchical scheme that merges sub-sequences together to form a complete projective reconstruction. Finally, we describe how RANSAC-based random sampling can be applied to the problem of self-calibration, allowing for more reliable upgrades to metric geometry. Results of applying our calibration algorithms are given for both synthetic and real data.
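The motion-guided tracking idea can be sketched as follows: an estimate of inter-frame camera motion predicts where each feature should land, and tracks that disagree with the prediction are rejected. This is a minimal illustration under assumptions, not the paper's implementation; the affine motion model, the `max_residual` threshold, and the function name are all hypothetical.

```python
import numpy as np

def predict_and_gate(prev_pts, tracked_pts, affine, max_residual=3.0):
    """Gate KLT-style feature tracks using a predicted inter-frame motion.

    prev_pts, tracked_pts: (N, 2) arrays of feature positions.
    affine: 2x3 matrix approximating the inter-frame camera motion.
    Returns a boolean mask of tracks consistent with the prediction.
    """
    ones = np.ones((prev_pts.shape[0], 1))
    predicted = np.hstack([prev_pts, ones]) @ affine.T  # (N, 2)
    residual = np.linalg.norm(tracked_pts - predicted, axis=1)
    return residual <= max_residual
```

Gating tracks against a global motion estimate in this way discards gross mismatches before they can corrupt the later reconstruction stages.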


Human Factors in Computing Systems | 1998

Navigation guided by artificial force fields

Dongbo Xiao; Roger J. Hubbold

This paper presents a new technique for controlling a user’s navigation in a virtual environment. The approach introduces artificial force fields which act upon the user’s virtual body such that the user is guided around obstacles, rather than penetrating or colliding with them. The technique is extended to incorporate gravity into the environment. The problem of negotiating stairs during a walk-through has also been investigated with the new approach. Human subjects were tested in experiments in which they experienced three different kinds of navigation: unconstrained, simple constrained, and assisted by force fields. The results demonstrate that the force-field technique is an effective approach to comfortable, efficient navigation. KEYWORDS: 3D interfaces, virtual environments, collision avoidance, navigation, force fields.
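A repulsive field of the kind described could be sketched as below: the force is zero beyond an influence radius and grows as the user approaches the obstacle. The field shape and the `influence` and `gain` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def repulsive_force(user_pos, obstacle_pos, influence=2.0, gain=1.0):
    """Artificial repulsive force pushing the user's virtual body away
    from a point obstacle; zero outside the influence radius.
    (Hypothetical parameterisation, not the paper's exact field.)"""
    d_vec = np.asarray(user_pos, float) - np.asarray(obstacle_pos, float)
    d = np.linalg.norm(d_vec)
    if d >= influence or d == 0.0:
        return np.zeros_like(d_vec)
    # Magnitude grows as the user nears the obstacle, vanishing smoothly
    # at the influence boundary.
    magnitude = gain * (1.0 / d - 1.0 / influence)
    return magnitude * d_vec / d
```

Summing such forces over nearby obstacles and adding the result to the user's motion each frame steers the viewpoint around geometry instead of through it.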


Eurographics | 2003

Rapid shadow generation in real-world lighting environments

Simon Gibson; Jonathan Cook; Toby Howard; Roger J. Hubbold

We propose a new algorithm that uses consumer-level graphics hardware to render shadows cast by synthetic objects and a real lighting environment. This has immediate benefit for interactive Augmented Reality applications, where synthetic objects must be accurately merged with real images. We show how soft shadows cast by direct and indirect illumination sources may be generated and composited into a background image at interactive rates. We describe how the sources of light (and hence shadow) affecting each point in an image can be efficiently encoded using a hierarchical shaft-based subdivision of line-space. This subdivision is then used to determine the sources of light that are occluded by synthetic objects, and we show how the contributions from these sources may be removed from a background image using facilities available on modern graphics hardware. A trade-off may be made at run-time between shadow accuracy and rendering cost, converging towards a result that is subjectively similar to that obtained using ray-tracing based differential rendering algorithms. Examples of the proposed technique are given for a variety of different lighting environments, and the visual fidelity of images generated by our algorithm is compared to both real photographs and synthetic images generated using non-real-time techniques.


Computer Graphics Forum | 1996

Efficient hierarchical refinement and clustering for radiosity in complex environments

Simon Gibson; Roger J. Hubbold

Generating accurate radiosity solutions of very complex environments is a time‐consuming problem. We present a rapid hierarchical algorithm that enables such solutions to be computed quickly and efficiently. Firstly, a new technique for bounding the error in the transfer of radiosity between surfaces is discussed, incorporating bounds on form factors, visibility, irradiance, and reflectance over textured surfaces. This technique is then applied to the problem of bounding radiosity transfer between clusters of surfaces, leading to a fast, practical clustering algorithm that builds on the previous work of Sillion. Volumes are used to represent clusters of small surfaces, but unlike previous algorithms, the orientations of surfaces inside each cluster are accounted for in both the error bound and radiosity transfer. This enables an accurate solution to be generated very efficiently, and results are presented demonstrating the performance of the algorithm on a variety of complex models, one containing almost a quarter of a million initial surfaces.
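The role of such bounds in hierarchical radiosity can be illustrated with a textbook-style refinement oracle: a link between two elements is subdivided while a conservative bound on the energy it carries exceeds a tolerance. This is a simplified sketch, not the paper's tighter bound; the interface and the tolerance value are assumptions.

```python
def transfer_bound(ff_upper, vis_upper, b_src, rho_rcv, area_rcv):
    """Conservative upper bound on the radiosity carried by a link:
    receiver reflectance * form-factor bound * visibility bound *
    source radiosity * receiver area. (Simplified sketch of the kind
    of bound the paper tightens with per-cluster orientation terms.)"""
    return rho_rcv * ff_upper * vis_upper * b_src * area_rcv

def should_refine(link, eps=1e-3):
    """Hierarchical refinement oracle (hypothetical interface):
    subdivide a link while its bounded transfer exceeds eps."""
    return transfer_bound(*link) > eps
```

Tightening the bound directly reduces unnecessary subdivision, which is where the reported efficiency gains come from.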


IEEE Transactions on Visualization and Computer Graphics | 2006

A network architecture supporting consistent rich behavior in collaborative interactive applications

James Marsh; Mashhuda Glencross; Steve Pettifer; Roger J. Hubbold

Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and that a hybrid architecture is therefore required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.


Computer Graphics Forum | 1997

Perceptually‐Driven Radiosity

Simon Gibson; Roger J. Hubbold

We present a new approach to radiosity simulation that uses perceptually‐based measures to control the generation of view‐independent radiosity solutions. This enables computational effort to be moved away from areas that are deemed to have a visually insignificant effect on the solution’s appearance, into those that are more noticeable. We achieve this with an a‐priori estimate of the real‐world adaptation luminance, and use a tone‐reproduction operator to transform luminance values to display colours during the solution process. The distance between two colours in a perceptually‐uniform colour space is then used as a numerical measure of their perceived difference. We describe an oracle that stops patch refinement once the difference between successive levels of elements becomes perceptually unnoticeable. We also show how the perceived importance of any potential shadow falling across a receiving element can be determined. This is then used to control the number of rays that are cast during visibility computations, giving reductions of almost 93% in the total number of rays required for a solution without any significant loss in image quality. Finally, we discuss how perceptual knowledge can be used to optimise the element mesh for faster interactive display and to save memory during computation.
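The stopping criterion can be illustrated with the CIE76 colour-difference formula, i.e. Euclidean distance in the approximately perceptually uniform CIELAB space: refinement continues only while child elements differ noticeably from their parent. The oracle interface and the just-noticeable-difference threshold of 2.3 are illustrative assumptions, not the paper's exact values.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    (L*, a*, b*) colours in the CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def refinement_oracle(parent_lab, child_labs, jnd=2.3):
    """Sketch of a perceptual refinement oracle: keep subdividing only
    while some child's display colour differs from the parent's by more
    than a just-noticeable difference (2.3 is a commonly quoted JND)."""
    return any(delta_e_cie76(parent_lab, c) > jnd for c in child_labs)
```

Because the comparison happens in display colours after tone reproduction, effort is spent only where a viewer could actually see the difference.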


Computers & Graphics | 2003

Interactive reconstruction of virtual environments from video sequences

Simon Gibson; Roger J. Hubbold; Jonathan Cook; Toby Howard

There are many real-world applications of Virtual Reality requiring the construction of complex and accurate three-dimensional models that represent real environments. In this paper, we describe a rapid and robust semi-automatic system that allows such environments to be quickly and easily built from video sequences captured with standard consumer-level digital cameras. The system combines an automatic camera calibration algorithm with an interactive model-building phase, followed by automatic extraction and synthesis of surface textures from frames of the video sequence. The capabilities of the system are illustrated using a variety of example reconstructions.


Virtual Reality Systems | 1993

AVIARY – A Generic Virtual Reality Interface for Real Applications

Adrian J. West; Toby Howard; Roger J. Hubbold; Alan Murta; D.N. Snowdon; D.A. Butler

This paper introduces the work of the Advanced Interfaces Group at the University of Manchester, which is applying recent innovations in the field of human–computer interaction to important real-world applications, whose present human–computer interfaces are difficult and unnatural. We begin with an analysis of the problems of existing interfaces, and present an overview of our proposed solution – AVIARY, the generic, hierarchical, extensible virtual world model. We describe a users’ conceptual model for AVIARY, implementation strategies for software and hardware, and the application of the model to specific real-world problems.


Computer Graphics Forum | 2001

Flexible Image-Based Photometric Reconstruction using Virtual Light Sources

Simon Gibson; Toby Howard; Roger J. Hubbold

Photometric reconstruction is the process of estimating the illumination and surface reflectance properties of an environment, given a geometric model of the scene and a set of photographs of its surfaces. For mixed‐reality applications, such data is required if synthetic objects are to be correctly illuminated or if synthetic light sources are to be used to re‐light the scene. Current methods of estimating such data are limited in the practical situations in which they can be applied, due to the fact that the geometric and radiometric models of the scene which are provided by the user must be complete, and that the position (and in some cases, intensity) of the light sources must also be specified a‐priori. In this paper, a novel algorithm is presented which overcomes these constraints, and allows photometric data to be reconstructed in less restricted situations. This is achieved through the use of virtual light sources which mimic the effect of direct illumination from unknown luminaires, and indirect illumination reflected off unknown geometry. The intensity of these virtual light sources and the surface material properties are estimated using an iterative algorithm which attempts to match calculated radiance values to those observed in photographs. Results are presented for both synthetic and real scenes that show the quality of the reconstructed data and its use in off‐line mixed‐reality applications.
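The iterative matching step can be sketched as a projected-gradient least-squares solve for non-negative virtual-light intensities, where a linear contribution matrix stands in for the paper's render-based radiance evaluation. The matrix form, step size, and function name are assumptions for illustration.

```python
import numpy as np

def fit_intensities(contrib, observed, iters=500, step=None):
    """Iteratively estimate non-negative virtual-light intensities x so
    that calculated radiance (contrib @ x) matches observed radiance.

    contrib[i, j]: radiance at sample point i per unit intensity of
    virtual light j (a stand-in for evaluating the rendering equation).
    observed[i]: radiance measured from the photographs at point i.
    """
    if step is None:
        # Safe gradient step for the quadratic objective.
        step = 1.0 / np.linalg.norm(contrib, ord=2) ** 2
    x = np.zeros(contrib.shape[1])
    for _ in range(iters):
        grad = contrib.T @ (contrib @ x - observed)
        x = np.maximum(x - step * grad, 0.0)  # project onto x >= 0
    return x
```

The non-negativity projection reflects that light intensities cannot be negative; in the full problem the same loop would alternate with surface-reflectance updates.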

Collaboration

Top co-authors of Roger J. Hubbold, all at the University of Manchester:

Simon Gibson
Toby Howard
Jonathan Cook
Caroline Jay
Adrian J. West
Alan Murta
David Hancock
Steve Pettifer