
Publications


Featured research published by Carlo Harvey.


Eurographics | 2012

Acoustic Rendering and Auditory–Visual Cross-Modal Perception and Interaction

Vedad Hulusic; Carlo Harvey; Kurt Debattista; Nicolas Tsingos; Steve Walker; David M. Howard; Alan Chalmers

In recent years, research in the three-dimensional sound generation field has focused primarily on new applications of spatialized sound. In the computer graphics community, such techniques are most commonly applied to virtual, immersive environments. However, the field is more varied and diverse than this, and other research tackles the problem in a more complete, and computationally expensive, manner. Furthermore, the simulation of light and sound wave propagation is still unachievable at a physically accurate spatio-temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also have certain perceptual and attentional limitations. Researchers in fields such as psychology have been investigating these limitations for several years and have produced findings that may be exploited in other fields. This paper provides a comprehensive overview of the major techniques for generating spatialized sound and, in addition, discusses perceptual and cross-modal influences to consider. We also describe current limitations and provide an in-depth look at emerging topics in the field.


Computer Graphics Forum | 2017

Multi-Modal Perception for Selective Rendering

Carlo Harvey; Kurt Debattista; Thomas Bashford-Rogers; Alan Chalmers

A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance by a series of novel exploitations; to render parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs.
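The multi-modal maps described above combine image saliency with the on-screen direction of a spatialized sound source. A minimal sketch of one plausible combination is below; the Gaussian falloff around the sound position and the multiplicative blend are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def multimodal_map(saliency, sound_px, sound_py, sigma=0.2):
    """Combine an image saliency map with a directional-audio importance
    map into a single multi-modal map. The Gaussian weighting and the
    multiplicative blend are hypothetical choices for illustration."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian falloff around the normalised on-screen sound position.
    d2 = (xs / w - sound_px) ** 2 + (ys / h - sound_py) ** 2
    audio = np.exp(-d2 / (2 * sigma ** 2))
    combined = saliency * audio  # a weighted sum is another option
    return combined / combined.max()
```

A selective renderer could then spend its fixed cost budget preferentially on pixels where this combined map is high, rather than on image saliency alone.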


International Conference on Games and Virtual Worlds for Serious Applications | 2011

Approximate Visibility Grids for Interactive Indirect Illumination

Thomas Bashford-Rogers; Kurt Debattista; Carlo Harvey; Alan Chalmers

The computation of indirect illumination is fundamental to simulate lighting within a virtual scene correctly and is critical when creating interactive applications, such as games for serious applications. The computation of such illumination is typically prohibitive for interactive or real-time performance if the visibility aspect of the indirect illumination is to be maintained. This paper presents a global illumination system which uses a structure termed the Approximate Visibility Grid (AVG) which enables interactive frame rates for multiple bounce indirect illumination for fully dynamic scenes on the GPU. The AVG is constructed each frame by making efficient use of the rasterisation pipeline. The AVG is then used to compute the visibility aspects of the light transport. We show how the AVG is used to traverse virtual point light sources in the context of instant radiosity, and demonstrate how our novel method enables interactive rendering of virtual scenes that require indirect illumination.
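The idea of an approximate visibility grid can be sketched as a boolean voxel occupancy grid that is marched between a shading point and each virtual point light. This is a simplified CPU stand-in for illustration only; the paper's AVG is rebuilt every frame on the GPU via the rasterisation pipeline.

```python
import numpy as np

def visible(grid, p, q, steps=64):
    """Approximate visibility between points p and q by marching the
    segment through a boolean occupancy grid (a simplified stand-in
    for the paper's GPU-rasterised AVG)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    for t in np.linspace(0.0, 1.0, steps)[1:-1]:  # skip endpoints to avoid self-occlusion
        x, y, z = ((1 - t) * p + t * q).astype(int)
        if grid[x, y, z]:
            return False  # an occupied voxel blocks the path
    return True

def shade(point, vpls, grid):
    """Accumulate contributions of visible virtual point lights, instant
    radiosity style; inverse-square falloff is used for illustration."""
    total = 0.0
    for vpl_pos, vpl_power in vpls:
        if visible(grid, point, vpl_pos):
            d2 = float(np.sum((np.asarray(vpl_pos) - np.asarray(point)) ** 2))
            total += vpl_power / max(d2, 1e-6)
    return total
```

Because the grid is coarse, visibility is approximate, which is the trade-off that makes multiple-bounce indirect illumination feasible at interactive rates for fully dynamic scenes.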


TPCG | 2010

The Effect of Discretised and Fully Converged Spatialised Sound on Directional Attention and Distraction

Carlo Harvey; Steve Walker; Thomas Bashford-Rogers; Kurt Debattista; Alan Chalmers

A major challenge in Virtual Reality (VR) is to provide realism at interactive rates. However, this is very computationally demanding, and only recently has high-fidelity rendering come close to interactive rates through a series of novel exploitations of visual perception: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality, without the difference being perceived. This paper investigates the effect that spatialised directional sounds, both discrete and converged, have on the visual attention of the user, with and without an auditory cue present in the scene. We verify the worth of investigating subliminal saccade shifts from directional audio impulses via a pilot study that eye-tracked participants free-viewing a scene with an audio impulse and an acoustic identifier, with an audio impulse and no acoustic identifier, and in a control condition. By selecting look zones, we can identify how long users spend attending to a particular area of a scene in these scenarios. This work also investigates whether the effect prevailed, and if so to what extent, with discretised spatialised sound as opposed to a fully converged audio sample. We also present a novel technique for generating interactive discrete acoustic samples from arbitrary geometry. We show that, even without an acoustic identifier in the scene, directional sound provides enough of an impulse to guide subliminal saccade shifts and affect perception in such a way that this can be used to guide selective rendering of the scenes.


IEEE Transactions on Human-Machine Systems | 2018

Subjective Evaluation of High-Fidelity Virtual Environments for Driving Simulations

Kurt Debattista; Thomas Bashford-Rogers; Carlo Harvey; Brian Waterfield; Alan Chalmers

Virtual environments (VEs) grant the ability to experience real-world scenarios, such as driving, in a virtual, safe, and reproducible context. However, in order to achieve their full potential, the fidelity of the VE must provide confidence that it replicates the perception of the real-world experience. The computational cost of simulating real-world visuals accurately means that compromises to the fidelity of the visuals must be made. In this paper, a subjective evaluation of driving in a VE at different quality settings is presented. Participants (n = 44) were driven around in the real world and in a purposely built representative VE and the fidelity of the graphics and overall experience at low-, medium-, and high-visual settings were analyzed. Low quality corresponds to the illumination in many current traditional simulators, medium to a higher quality using accurate shadows and reflections, and high to the quality experienced in modern movies and simulations that require hours of computation. Results demonstrate that graphics quality affects the perceived fidelity of the visuals and the overall experience. When judging the overall experience, participants could tell the difference between the lower quality graphics and the rest but did not significantly discriminate between the medium and higher graphical settings. This indicates that future driving simulators should improve the quality, but once the equivalent of the presented medium quality is reached, they may not need to do so significantly.


Entertainment Computing | 2018

First Time User Experiences in mobile games: An evaluation of usability

Lawrence Barnett; Carlo Harvey

Unlike most other mobile applications, games are driven by their user experience rather than their functionality. No one wishes to play games that are either frustrating or difficult for the wrong reasons. Usability is an integral part of software development and is about maximizing the effectiveness, efficiency and satisfaction of the user. The delicacy of the user experience and heavy competition, it can be argued, render usability more important in games than it is in other software. Immersion and engagement are fundamental and core parts of the enjoyment of computer games, and are both dependent on usability. The focus of this article is a framework for evaluating the usability of First Time User Experiences (FTUEs). Investigating two specific, off-the-shelf games, we demonstrate that the FTUE can affect an element of usability, namely ‘information quality’, when controlling for the guidance and information presented. Despite this, overall usability is unaffected by the presence of the FTUE.


Computer Graphics Forum | 2018

Audiovisual resource allocation for bimodal virtual environments

Efstratios Doukakis; Kurt Debattista; Carlo Harvey; Thomas Bashford-Rogers; Alan Chalmers

Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute these carefully in order to simulate the most ideal perceptual experience. This paper investigates this balance of resources across multiple scenarios where combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken where participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one of the stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, an approximately balanced distribution of resources is preferred between graphics and acoustics. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.
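The qualitative finding above, graphics favoured at small budgets and a roughly even split as the budget grows, can be sketched as a simple allocation rule. The constants and the linear interpolation here are illustrative assumptions, not the quality prediction model fitted in the paper.

```python
def allocate(budget, low=1.0, high=16.0, bias=0.75):
    """Split a computational budget between graphics and acoustics.
    Hypothetical model inspired by the paper's finding: graphics gets
    a larger share at low budgets (bias), approaching an even split as
    the budget grows. All constants are illustrative, not fitted."""
    t = min(max((budget - low) / (high - low), 0.0), 1.0)
    graphics_frac = bias + t * (0.5 - bias)  # bias -> 0.5 as budget grows
    return graphics_frac * budget, (1 - graphics_frac) * budget
```

For example, at the smallest budget this rule gives graphics three quarters of the resources, while at the largest it splits them evenly.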


Computer Graphics Forum | 2018

Olfaction and Selective Rendering

Carlo Harvey; Thomas Bashford-Rogers; Kurt Debattista; Efstratios Doukakis; Alan Chalmers

Accurate simulation of all the senses in virtual environments is a computationally expensive task. Visual saliency models have been used to improve computational performance for rendered content, but this is insufficient for multi‐modal environments. This paper considers cross‐modal perception and, in particular, if and how olfaction affects visual attention. Two experiments are presented in this paper. Firstly, eye tracking is gathered from a number of participants to gain an impression about where and how they view virtual objects when smell is introduced compared to an odourless condition. Based on the results of this experiment a new type of saliency map in a selective‐rendering pipeline is presented. A second experiment validates this approach, and demonstrates that participants rank images as better quality, when compared to a reference, for the same rendering budget.


International Conference on E-Learning and Games | 2017

An Investigation into Usability and First Time User Experiences Within a Mobile Gaming Context

Lawrence Barnett; Carlo Harvey

With scientific research regarding usability, guidance, and First-Time User Experiences (FTUEs) in video games currently sparse, it is imperative to assist existing and future developers in the field in building usable games and effective guidance systems. For the work presented in this publication, research was conducted to investigate the effects of guidance on mobile game usability using two independent groups, featuring two commercial games with and without the presence of a First-Time User Experience. The results show, with significance, that guidance via a FTUE increases one element of usability, ‘information quality’. However, overall usability is not increased by the presence of a FTUE.


CGVC '16: Proceedings of the Conference on Computer Graphics & Visual Computing | 2016

A calibrated olfactory display for high fidelity virtual environments

Amar Dhokia; Efstratios Doukakis; Ali Asadipour; Carlo Harvey; Thomas Bashford-Rogers; Kurt Debattista; Brian Waterfield; Alan Chalmers

Olfactory displays provide a means to reproduce olfactory stimuli for use in virtual environments. Many of the designs produced by researchers strive to provide stimuli quickly to users and focus on improving usability and portability, yet concentrate less on providing high levels of accuracy to improve the fidelity of odour delivery. This paper provides guidance on building a reproducible and low-cost olfactory display that can deliver odours to users in a virtual environment at the accurate concentration levels typical of everyday interactions, including concentrations below parts per million and into parts per billion. This paper investigates build concerns of the olfactometer and its proper calibration in order to ensure the concentration accuracy of the device. An analysis is provided of the recovery rates of a specific compound after excitation, and of how this result can be generalised to the recovery rates of any volatile organic compound, given knowledge of the specific vapour pressure of the compound.
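The link between vapour pressure and delivered concentration can be sketched with standard ideal-gas reasoning: a saturated headspace has a partial pressure equal to the compound's vapour pressure, and dilution with carrier air scales the concentration by the flow ratio. This is a hedged sketch of that calculation, not the paper's calibration procedure; the parameter names are illustrative.

```python
def delivered_ppm(vapour_pressure_pa, flow_odour, flow_carrier,
                  total_pressure_pa=101_325.0):
    """Estimate the odour concentration (ppm) delivered by a dilution
    olfactometer, assuming the headspace above the pure compound is
    saturated (partial pressure = vapour pressure) and mixing is ideal.
    Illustrative sketch, not the paper's calibration model."""
    # Saturated headspace concentration as a mole fraction, in ppm.
    headspace_ppm = vapour_pressure_pa / total_pressure_pa * 1e6
    # Dilution by the carrier airflow scales concentration linearly.
    dilution = flow_odour / (flow_odour + flow_carrier)
    return headspace_ppm * dilution
```

For instance, a compound with a vapour pressure of about 12 Pa diluted 1:99 with carrier air lands in the low parts-per-million range, which shows why reaching parts-per-billion levels demands well-calibrated, high-ratio dilution stages.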
