
Publication


Featured research published by Simon Gibson.


International Symposium on Mixed and Augmented Reality | 2002

Accurate camera calibration for off-line, video-based augmented reality

Simon Gibson; Jonathan Cook; Toby Howard; Roger J. Hubbold; Daniel Oram

Camera tracking is a fundamental requirement for video-based augmented reality applications. The ability to accurately calculate the intrinsic and extrinsic camera parameters for each frame of a video sequence is essential if synthetic objects are to be integrated into the image data in a believable way. In this paper, we present an accurate and reliable approach to camera calibration for off-line video-based augmented reality applications. We first describe an improved feature tracking algorithm, based on the widely used Kanade-Lucas-Tomasi tracker. Estimates of inter-frame camera motion are used to guide tracking, greatly reducing the number of incorrectly tracked features. We then present a robust hierarchical scheme that merges sub-sequences together to form a complete projective reconstruction. Finally, we describe how RANSAC-based random sampling can be applied to the problem of self-calibration, allowing for more reliable upgrades to metric geometry. Results of applying our calibration algorithms are given for both synthetic and real data.
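The RANSAC-based random sampling mentioned in the abstract can be illustrated with a generic sketch (a minimal illustration of the idea, not the paper's self-calibration code): repeatedly fit a model to a random minimal subset of the data and keep the hypothesis supported by the most inliers. Here the "model" is a simple 2D line rather than a calibration upgrade.

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b robustly by random sampling (generic RANSAC sketch)."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)              # model from a minimal sample
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:             # keep the best-supported hypothesis
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 80% of the points lie on y = 2x + 1; the rest are gross outliers.
pts = np.array([[x, 2 * x + 1] for x in np.linspace(0, 1, 40)]
               + [[x, 10 * x - 3] for x in np.linspace(0, 1, 10)])
model, support = ransac_line(pts)
```

The same sample-score-keep loop applies whenever a minimal subset determines a model hypothesis, which is why it transfers to problems such as self-calibration.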


Eurographics Symposium on Rendering Techniques | 2000

Interactive Rendering with Real-World Illumination

Simon Gibson; Alan Murta

We propose solutions for seamlessly integrating synthetic objects into background photographs at interactive rates. Recently developed image-based methods are used to capture real-world illumination, and sphere-mapping is used to illuminate and render the synthetic objects. We present a new procedure for approximating shadows cast by the real-world illumination using standard hardware-based shadow mapping, and a novel image composition algorithm that uses frame-buffer hardware to correctly overlay the synthetic objects and their shadows onto the background image. We show results of an OpenGL implementation of the algorithm that is capable of rendering complex synthetic objects and their shadows at rates of up to 10 frames per second on an SGI Onyx2.
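The final compositing step can be sketched in a few lines (a simplified per-pixel version with made-up masks; the paper's version runs in OpenGL frame-buffer hardware): the background is attenuated by a shadow factor, then synthetic pixels are overlaid using a coverage mask.

```python
import numpy as np

def composite(background, synthetic, alpha, shadow):
    """Overlay synthetic pixels onto a background after attenuating the
    background by a shadow factor (1 = unshadowed, 0 = fully occluded).
    Simplified sketch of shadowed compositing, not the paper's actual code."""
    shadowed = background * shadow[..., None]       # darken real pixels in shadow
    return (synthetic * alpha[..., None]
            + shadowed * (1.0 - alpha[..., None]))  # synthetic-over-real blend

bg = np.ones((2, 2, 3)) * 0.8                       # bright background image
syn = np.zeros((2, 2, 3)); syn[0, 0] = [1, 0, 0]    # one red synthetic pixel
alpha = np.zeros((2, 2)); alpha[0, 0] = 1.0         # synthetic coverage mask
shadow = np.ones((2, 2)); shadow[1, 1] = 0.5        # one half-shadowed pixel
out = composite(bg, syn, alpha, shadow)
```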


Eurographics | 2003

Rapid shadow generation in real-world lighting environments

Simon Gibson; Jonathan Cook; Toby Howard; Roger J. Hubbold

We propose a new algorithm that uses consumer-level graphics hardware to render shadows cast by synthetic objects and a real lighting environment. This has immediate benefit for interactive Augmented Reality applications, where synthetic objects must be accurately merged with real images. We show how soft shadows cast by direct and indirect illumination sources may be generated and composited into a background image at interactive rates. We describe how the sources of light (and hence shadow) affecting each point in an image can be efficiently encoded using a hierarchical shaft-based subdivision of line-space. This subdivision is then used to determine the sources of light that are occluded by synthetic objects, and we show how the contributions from these sources may be removed from a background image using facilities available on modern graphics hardware. A trade-off may be made at run-time between shadow accuracy and rendering cost, converging towards a result that is subjectively similar to that obtained using ray-tracing based differential rendering algorithms. Examples of the proposed technique are given for a variety of different lighting environments, and the visual fidelity of images generated by our algorithm is compared to both real photographs and synthetic images generated using non-real-time techniques.


Computers & Graphics | 2003

Interactive reconstruction of virtual environments from video sequences

Simon Gibson; Roger J. Hubbold; Jonathan Cook; Toby Howard

There are many real-world applications of Virtual Reality requiring the construction of complex and accurate three-dimensional models that represent real environments. In this paper, we describe a rapid and robust semi-automatic system that allows such environments to be quickly and easily built from video sequences captured with standard consumer-level digital cameras. The system combines an automatic camera calibration algorithm with an interactive model-building phase, followed by automatic extraction and synthesis of surface textures from frames of the video sequence. The capabilities of the system are illustrated using a variety of example reconstructions.


Electronic Imaging | 2000

Virtual environments for scene of crime reconstruction and analysis

Toby Howard; Alan Murta; Simon Gibson

This paper describes research conducted in collaboration with Greater Manchester Police (UK), to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.


Virtual Reality Software and Technology | 2000

Interactive reconstruction of virtual environments from photographs, with application to scene-of-crime analysis

Simon Gibson; Toby Howard

There are many real-world applications of Virtual Reality that require the construction of complex and accurate three-dimensional models, suitably structured for interactive manipulation. In this paper, we present semi-automatic methods that allow such environments to be quickly and easily built from photographs taken with uncalibrated cameras, and illustrate the techniques by application to the real-world problem of scene-of-crime reconstruction.


Computer Graphics Forum | 2001

Flexible Image-Based Photometric Reconstruction using Virtual Light Sources

Simon Gibson; Toby Howard; Roger J. Hubbold

Photometric reconstruction is the process of estimating the illumination and surface reflectance properties of an environment, given a geometric model of the scene and a set of photographs of its surfaces. For mixed-reality applications, such data is required if synthetic objects are to be correctly illuminated or if synthetic light sources are to be used to re-light the scene. Current methods of estimating such data are limited in the practical situations in which they can be applied, because the geometric and radiometric models of the scene provided by the user must be complete, and the positions (and in some cases, intensities) of the light sources must be specified a priori. In this paper, a novel algorithm is presented which overcomes these constraints and allows photometric data to be reconstructed in less restricted situations. This is achieved through the use of virtual light sources which mimic the effect of direct illumination from unknown luminaires, and indirect illumination reflected off unknown geometry. The intensities of these virtual light sources and the surface material properties are estimated using an iterative algorithm which attempts to match calculated radiance values to those observed in photographs. Results are presented for both synthetic and real scenes that show the quality of the reconstructed data and its use in off-line mixed-reality applications.
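Under strong simplifying assumptions (fixed diffuse reflectance and a known, linear light-transport matrix), the iterative matching of calculated radiance to observed radiance can be sketched as a non-negative least-squares problem in the unknown virtual-light intensities. This is only an illustration of the fitting idea; the transport matrix below is hypothetical, and the paper's algorithm also estimates material properties.

```python
import numpy as np

def estimate_intensities(T, b, steps=1000, lr=0.1):
    """Solve min ||T x - b||^2 subject to x >= 0 by projected gradient
    descent: iteratively adjust light intensities x so that calculated
    radiance T @ x matches observed radiance b."""
    x = np.zeros(T.shape[1])
    for _ in range(steps):
        grad = T.T @ (T @ x - b)            # gradient of 0.5 * ||T x - b||^2
        x = np.maximum(x - lr * grad, 0.0)  # project onto x >= 0 (intensities)
    return x

# Hypothetical transport matrix: rows = observed surface samples,
# columns = virtual light sources.
T = np.array([[0.9, 0.1],
              [0.2, 0.7],
              [0.5, 0.5]])
x_true = np.array([2.0, 3.0])
b = T @ x_true                              # radiance "observed" in photographs
x_est = estimate_intensities(T, b)
```

The non-negativity constraint reflects the physical requirement that a light source cannot have negative intensity.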


Presence: Teleoperators & Virtual Environments | 2001

Gnu/Maverik: A Microkernel for Large-Scale Virtual Environments

Roger J. Hubbold; Jonathan Cook; Martin J. Keates; Simon Gibson; Toby Howard; Alan Murta; Adrian J. West; Steve Pettifer

This paper describes a publicly available virtual reality (VR) system, GNU/MAVERIK, which forms one component of a complete VR operating system. We give an overview of the architecture of MAVERIK, and show how it is designed to use application data in an intelligent way, via a simple, yet powerful, callback mechanism that supports an object-oriented framework of classes, objects, and methods. Examples are given to illustrate different uses of the system and typical performance levels.
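The callback mechanism described above can be sketched in miniature (an illustrative pattern only, not the actual GNU/MAVERIK API, which is a C library): the kernel stores no geometry of its own; instead, each object class registers callbacks that the kernel invokes, so applications keep their data in whatever form suits them.

```python
# Sketch of a callback-driven microkernel: the kernel dispatches per-class
# callbacks rather than imposing its own scene representation.
class Kernel:
    def __init__(self):
        self._callbacks = {}                 # class -> {event name: function}

    def register(self, cls, event, fn):
        self._callbacks.setdefault(cls, {})[event] = fn

    def dispatch(self, obj, event):
        # Look up the callback registered for this object's class.
        return self._callbacks[type(obj)][event](obj)

class Sphere:
    def __init__(self, radius):
        self.radius = radius

kernel = Kernel()
kernel.register(Sphere, "bounds", lambda s: (-s.radius, s.radius))
bounds = kernel.dispatch(Sphere(2.0), "bounds")
```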


Presence: Teleoperators & Virtual Environments | 2000

Navigation, Wayfinding, and Place Experience within a Virtual City

Craig Murray; John Bowers; Adrian J. West; Steve Pettifer; Simon Gibson

We report a qualitative study of navigation, wayfinding, and place experience within a virtual city. Cityscape is a virtual environment (VE), partially algorithmically generated and intended to be redolent of the aggregate forms of real cities. In the present study, we observed and interviewed participants during and following exploration of a desktop implementation of Cityscape. A number of emergent themes were identified and are presented and discussed. Observing the interaction with the virtual city suggested a continuous relationship between real and virtual worlds. Participants were seen to attribute real-world properties and expectations to the contents of the virtual world. The implications of these themes for the construction of virtual environments modeled on real-world forms are considered.


IEEE Transactions on Visualization and Computer Graphics | 2000

A perceptually-driven parallel algorithm for efficient radiosity simulation

Simon Gibson; Roger J. Hubbold

The authors describe a novel algorithm for computing view-independent finite element radiosity solutions on distributed shared-memory parallel architectures. Our approach is based on the notion of a subiteration being the transfer of energy from a single source to a subset of the scene's receiver patches. By using an efficient queue-based scheduling system to process these subiterations, we show how radiosity solutions can be generated without the need for processor synchronization between iterations of the progressive refinement algorithm. The only significant source of interprocessor communication required by our method is for visibility calculations. We also describe a perceptually driven approach to visibility estimation, which employs an efficient volumetric grid structure and attempts to reduce the amount of interprocessor communication by approximating visibility queries between distant patches. Our algorithm also eliminates the need for dynamic load balancing until the end of the solution process and is shown to achieve a superlinear speedup in many situations.
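The progressive refinement algorithm that the paper parallelises can be sketched serially (a textbook scalar version with given form factors; the paper's contribution is splitting each shot into per-receiver subiterations scheduled across processors, which this sketch does not show): repeatedly pick the patch with the most unshot power and distribute its energy to the other patches.

```python
import numpy as np

def progressive_radiosity(emission, rho, F, area, iters=100):
    """Progressive-refinement radiosity sketch: emission and reflectance rho
    per patch, form factors F[i, j] from patch i to patch j, patch areas."""
    B = emission.copy()                       # current radiosity estimate
    unshot = emission.copy()                  # energy not yet distributed
    for _ in range(iters):
        i = int(np.argmax(unshot * area))     # patch with most unshot power
        for j in range(len(B)):
            if j == i:
                continue
            # Energy received by patch j from patch i's unshot radiosity.
            dB = rho[j] * unshot[i] * F[i, j] * area[i] / area[j]
            B[j] += dB
            unshot[j] += dB
        unshot[i] = 0.0                       # patch i's energy is fully shot
    return B

# Two unit-area patches facing each other, F = 0.5 each way; patch 0 emits.
E = np.array([1.0, 0.0])
rho = np.array([0.5, 0.5])
F = np.array([[0.0, 0.5],
              [0.5, 0.0]])
B = progressive_radiosity(E, rho, F, np.ones(2))
```

For this two-patch scene the fixed point of B_i = E_i + rho_i * sum_j F_ij B_j is B = (16/15, 4/15), which the shooting loop converges to geometrically.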

Collaboration


Dive into Simon Gibson's collaboration.

Top Co-Authors

Toby Howard, University of Manchester
Jonathan Cook, University of Manchester
Adrian J. West, University of Manchester
Alan Murta, University of Manchester
Steve Pettifer, University of Manchester
Paul E. Debevec, University of Southern California
Xiao Dongbo, University of Manchester