Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Rob Aspin is active.

Publication


Featured research published by Rob Aspin.


IEEE Virtual Reality Conference | 2009

Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together

David J. Roberts; Robin Wolff; John Rae; Anthony Steed; Rob Aspin; Moira McIntyre; Adriana Pena; Oyewole Oyekoya; William Steptoe

Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, there are two main technologies that can be used today: video-conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour but practically the targets of eye-gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern being looked at and what else is looked at, when someone gazes into their space from another location, ICVE alone can continue to do this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE and co-location are compared. The impact of factors of alignment, lighting, resolution, and perspective distortion are minimised through a set of pilot experiments, before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the mis-judgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free viewpoint video based methods.


Education and Information Technologies | 1998

Collaboration in a Virtual World: Support for Conceptual Learning?

Paul Brna; Rob Aspin

Immersive and semi-immersive Virtual Reality (VR) systems have been used for training in the execution of procedures, in exploring (often static) 3D structures such as architectural designs or geographical features, and in designing buildings or constructing molecules. In a separate line of technological development, the availability of distributed computing capabilities has led to VR systems that provide facilities for geographically separated groups of students to learn together in a collaborative manner. However, relatively little work has been done to investigate the advantages of such Collaborative Virtual Environments (CVEs) for learning the underlying conceptual content. A pilot study is described which features several worlds designed as part of the Distributed Extensible Virtual Reality Laboratory (DEVRL). The basic results are presented along with a discussion of how the research could be moved forward to provide improved support for conceptual learning. The discussion also raises the issues of how interface design affects conceptual learning; of navigation and conceptual learning; of the role of collaboration in learning; and of the difficulties associated with constructing dynamic VR worlds.


IEEE International Symposium on Distributed Simulation and Real Time Applications | 2007

Augmenting the CAVE: An Initial Study into Close Focused, Inward Looking, Exploration in IPT Systems

Rob Aspin; Kien Hoang Le

CAVE-like Immersive Projection Technology (IPT) systems have long been used to explore complex 3D geometric data sets. While this approach works well for many activity types, there is a class of activities that remains challenging within these environments. When virtual objects are brought into close proximity to the user (denoted by a virtual distance of less than that between the user and the projection surface), the user's inability to perceive fine detail and effectively explore around an object from an inward-looking perspective becomes apparent. This research presents and evaluates the introduction of an augmented viewing device into a traditional CAVE-like IPT system. This has been accomplished by integrating a tracked tablet PC device that offers a multi-modal interface for both user-position-referenced micro/macroscopic viewing and alternate interaction inputs as part of a distributed augmented CAVE-like IPT system.


IEEE Transactions on Visualization and Computer Graphics | 2013

Estimating the Gaze of a Virtuality Human

David J. Roberts; John Rae; Tobias Duckworth; Carl M. Moore; Rob Aspin

The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction, this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.


Simulation | 2008

Bounding Inconsistency Using a Novel Threshold Metric for Dead Reckoning Update Packet Generation

Dave Roberts; Rob Aspin; Damien Marshall; Seamus McLoone; Declan Delaney; Tomas E. Ward

Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. The level of inconsistency arising from the network is proportional to the network delay, and thus a function of bandwidth consumption. Distributed simulation has often used a bandwidth reduction technique known as dead reckoning that combines approximation and estimation in the communication of entity movement to reduce network traffic, and thus improve consistency. However, unless carefully tuned to application and network characteristics, such an approach can introduce more inconsistency than it avoids. The key tuning metric is the distance threshold. This paper questions the suitability of the standard distance threshold as a metric for use in the dead reckoning scheme. Using a model relating entity path curvature and inconsistency, a major performance-related limitation of the distance threshold technique is highlighted. We then propose an alternative time-space threshold criterion. The time-space threshold is demonstrated, through simulation, to perform better for low curvature movement. However, it too has a limitation. Based on this, we further propose a novel hybrid scheme. Through simulation and live trials, this scheme is shown to perform well across a range of curvature values, and places bounds on both the spatial and absolute inconsistency arising from dead reckoning.
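The two threshold criteria contrasted in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: all names and thresholds below are hypothetical, and the time-space criterion is rendered here simply as accumulated spatial error multiplied by the time it persists.

```python
import math

def extrapolate(last_pos, last_vel, dt):
    """First-order dead reckoning: the remote side extrapolates the
    last reported position along the last reported velocity."""
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

def should_send_update(actual, predicted, dist_threshold):
    """Standard distance-threshold criterion: emit an update packet
    when the remote prediction drifts too far from the true state."""
    return math.dist(actual, predicted) >= dist_threshold

class TimeSpaceThreshold:
    """Hypothetical time-space criterion: trigger an update when the
    accumulated (spatial error x time) exceeds a budget, so a small
    error may persist briefly but cannot linger indefinitely."""
    def __init__(self, budget):
        self.budget = budget
        self.accumulated = 0.0

    def update(self, actual, predicted, dt):
        self.accumulated += math.dist(actual, predicted) * dt
        if self.accumulated >= self.budget:
            self.accumulated = 0.0  # reset after sending an update
            return True
        return False
```

The distance criterion reacts only to instantaneous error, whereas the time-space variant also bounds how long any error is allowed to stand, which is the distinction the paper exploits for low-curvature movement.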


IEEE International Conference on Cloud Computing Technology and Science | 2015

Cloud Storage Forensic: hubiC as a Case-Study

Ben Blakeley; Chris Cooney; Ali Dehghantanha; Rob Aspin

In today's society, where we live in a world of constant connectivity, many people are now looking to cloud services to store their files so they can have access to them wherever they are. By using cloud services, users can access files anywhere with an internet connection. However, while cloud storage is convenient, it also presents security risks. From a forensics perspective, the increasing popularity of cloud storage platforms makes investigation into such exploits much more difficult, especially since many platforms, mobile devices as well as computers, are able to use these services. This paper presents an investigation of hubiC, one of the popular cloud platforms, running on Microsoft Windows 8.1. Artefacts remaining from different uses of hubiC, namely upload, download, installation and uninstallation, on Microsoft Windows 8.1 are presented.


Distributed Simulation and Real-Time Applications | 2010

Synchronization of Images from Multiple Cameras to Reconstruct a Moving Human

Carl M. Moore; Toby Duckworth; Rob Aspin; David J. Roberts

What level of synchronization is necessary between images from multiple cameras in order to realistically reconstruct a moving human in 3D? Live reconstruction of the human form, from cameras surrounding the subject, could bridge the gap between video conferencing and Immersive Collaborative Virtual Environments (ICVEs). Video conferencing faithfully reproduces what someone looks like whereas ICVE faithfully reproduces what they look at. While 3D video has been demonstrated in tele-immersion prototypes, the visual/temporal quality has been way below what has become acceptable in video conferencing. Managed synchronization of the acquisition stage is universally used today to ensure multiple images feeding the reconstruction algorithm were taken at the same time. However, this inevitably increases latency and jitter. We measure the temporal characteristics of the capture stage and the impact of inconsistency on the reconstruction algorithm this feeds. This gives us both input and output characteristics for synchronization. From this we determine whether frame synchronization of multiple camera video streams actually needs to be delivered for 3D reconstruction, and if not what level of temporal divergence is acceptable across the captured image frames.
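The question this abstract poses, how much temporal divergence across captured frames is acceptable, can be stated concretely with a small sketch. The function names and the millisecond timestamps are illustrative assumptions, not taken from the paper:

```python
def frame_skew(timestamps_ms):
    """Temporal divergence across one set of camera frames feeding the
    reconstruction: the spread between earliest and latest capture."""
    return max(timestamps_ms) - min(timestamps_ms)

def within_tolerance(timestamps_ms, max_divergence_ms):
    """The experimental question in code form: is unsynchronized
    capture still acceptable, i.e. is the skew below some tolerance?"""
    return frame_skew(timestamps_ms) <= max_divergence_ms
```

Managed shutter synchronization forces the skew toward zero at the cost of latency; the paper instead measures what skew the reconstruction algorithm actually tolerates.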


IEEE International Symposium on Distributed Simulation and Real-Time Applications | 2005

A Model for Distributed, Co-Located Interaction in Urban Design/Review Visualisation

Rob Aspin; Dave Roberts

Interactive 3D visualization is increasingly used for design/review activities in urban planning and construction. However, as this becomes more widespread, user expectations of the levels of interaction and functionality expand. Many organizations maintain rich sources of information describing the urban environment and society, supported by a distributed set of services that provide functionality. These resources present an opportunity to create new design/review environments in which groups of stakeholders may collaborate to explore and evaluate new solutions. However, using this distributed functionality and information presents new challenges in defining and managing the content of interactive design/review visual environments. This paper presents a novel system for multi-user co-located interaction by which a group of users, viewing a common 3D visualization, are provided with collaborative interaction through a set of distributed personal interfaces. A fundamental part of this activity is the formation of a unified, distributable model that references distributed content and the functionality that the available related services provide.


IEEE Journal of Selected Topics in Signal Processing | 2015

withyou—An Experimental End-to-End Telepresence System Using Video-Based Reconstruction

David J. Roberts; Allen J. Fairchild; Simon P. Campion; John O'Hare; Carl M. Moore; Rob Aspin; Tobias Duckworth; Paolo Simone Gasparello; Franco Tecchia

Supporting a wide set of linked non-verbal resources remains an evergreen challenge for communication technology, limiting effectiveness in many applications. Interpersonal distance, gaze, posture and facial expression are interpreted together to manage and add meaning to most conversations. Yet today's technologies favor some above others. This induces confusion in conversations, and is believed to limit both feelings of togetherness and trust, and growth of empathy and rapport. Solving this problem will allow technologies to support most rather than a few interactional scenarios. It is likely to benefit teamwork and team cohesion, distributed decision-making, and health and wellbeing applications such as tele-therapy, tele-consultation, and isolation. We introduce withyou, our telepresence research platform. This paper describes the end-to-end system, including the psychology of human interaction and how this drives requirements throughout the design and implementation. Our technology approach is to combine the winning characteristics of video conferencing and immersive collaborative virtual environments. This is to allow, for example, people walking past each other to exchange a glance and smile. A systematic explanation of the theory brings together the linked nature of non-verbal communication and how it is influenced by technology. This leads to functional requirements for telepresence, in terms of the balance of visual, spatial and temporal qualities. The first end-to-end description of withyou describes all major processes and the display and capture environment. An unprecedented characterization of our approach is given in terms of the above qualities and what influences them. This leads to non-functional requirements in terms of the number and placement of cameras and the avoidance of resultant bottlenecks. Proposals are given for improved distribution of processes across networks, computers, and multi-core CPU and GPU. Simple conservative estimation shows that both approaches should meet our requirements. One is implemented and shown to meet minimum and come close to desirable requirements.


conference on computer supported cooperative work | 2011

A GPU based, projective multi-texturing approach to reconstructing the 3D human form for application in tele-presence

Rob Aspin; David J. Roberts

This paper reports a GPU-based, projective texturing approach to reconstructing the human form, from multiple images, at a quality and frame rate close to high-end video conferencing. The ultimate aim is to support spatially grounded, non-verbal communication through a video-based medium. This will, we hope, enable us to balance image quality and update rate to deliver highly realistic and dynamic 3D human representations that offer the visual quality of high-end video conferencing with the spatial and temporal characteristics of immersive virtual environments. The output of this will enhance communication by enabling a remote actor to be realistically projected into another person's local space, projected into an extension of the local space, or projected into a shared virtual space. This extends previous work by incorporating texture into the reconstructed form and evaluating the optimized process within our established simulation system.

Collaboration


Dive into Rob Aspin's collaboration.
