Publications

Featured research published by Robin Wolff.


Journal of Computer Applications in Technology | 2007

A review of telecollaboration technologies with respect to closely coupled collaboration

Robin Wolff; David J. Roberts; Anthony Steed; Oliver Otto

Many existing Computer-Supported Cooperative Work (CSCW) systems have two major deficiencies. Firstly, they support the situation of face-to-face meetings poorly. Secondly, many systems deal badly with data sets and workspaces. This paper outlines the requirements for a class of CSCW tools that would focus on supporting closely coupled collaborative activity around shared objects. These requirements include the ability to refer to a common model of the shared space through speech and gesture, and for each person to be able to manipulate objects within that space. On that basis, this paper describes the current state of the art in collaborative technologies, with a critique of how well they support the required collaborative activities. Essentially, this paper suggests that, at present, only through immersive Collaborative Virtual Environments (CVEs) are we close to achieving the seamless collaboration that exists in a face-to-face meeting.


IEEE Virtual Reality Conference | 2009

Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze Enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together

David J. Roberts; Robin Wolff; John Rae; Anthony Steed; Rob Aspin; Moira McIntyre; Adriana Pena; Oyewole Oyekoya; William Steptoe

Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, there are two main technologies that can be used today: video conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour, but in practice the targets of eye gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern being looked at, and what else is looked at, when someone gazes into their space from another location, ICVE alone can continue to do this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE and co-location are compared. The impact of the factors of alignment, lighting, resolution and perspective distortion is minimised through a set of pilot experiments, before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the misjudgements that are made and discuss how our findings might inform research into supporting eye gaze through interpolated free-viewpoint video-based methods.


International Conference on Virtual Reality | 2006

A review on effective closely-coupled collaboration using immersive CVEs

Oliver Otto; Dave Roberts; Robin Wolff

Many teamwork tasks require a close coupling between the interactions of members of a team. For example, intention and opinion must be communicated while synchronously manipulating shared artefacts. In face-to-face interaction this communication and manipulation is seamless. Transferring the straightforwardness of such collaboration onto remotely located teams is technologically challenging. This survey paper explains why immersive collaborative virtual environments (CVEs) suit such tasks. The effectiveness of this technology depends on a complex set of factors that determine the efficiency of collaboration. We examine these factors and their interrelationships within the framework of a taxonomy focussed on supporting closely-coupled collaboration using immersive CVEs. In particular, we compare the impact of display configurations from distinct aspects within the interaction metaphors: look-in, reach-in and step-in.


IEEE International Symposium on Distributed Simulation and Real-Time Applications | 2004

Controlling Consistency within Collaborative Virtual Environments

David J. Roberts; Robin Wolff

Collaborative Virtual Environments (CVEs) are a form of telecommunication technology that bring together co-located or remote participants within a spatial, social and information context. Collaboration occurs between people and often around shared objects. Fruitful cooperation is helped by natural and intuitive ways of communicating and sharing, for which responsiveness and consistency are leading factors. Many CVEs maximise local responsiveness through a process of localisation and database replication, increasing responsiveness at the cost of lowering consistency. This is acceptable provided the application does not require the shared manipulation of objects. Those that do require consistency control that provides sufficient synchronisation, ordering and update control, whilst maximising concurrency and thus the responsiveness of the system. This paper describes the major issues and principles of consistency control and demonstrates how we have applied many of these principles in three CVEs.
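The replication trade-off described in this abstract is often resolved with ownership-based concurrency control: only one replica at a time may write an object locally, and other sites replay updates in version order. The following is a minimal, hypothetical sketch of that general idea (the class and method names are illustrative, not the paper's implementation):

```python
# Hypothetical sketch of ownership-based consistency control for a
# replicated CVE object. The current owner updates its local copy
# immediately (full responsiveness); other replicas replay updates
# strictly in version order, which keeps all copies consistent.

class ReplicatedObject:
    def __init__(self, object_id, owner):
        self.object_id = object_id
        self.owner = owner      # the one site allowed to write locally
        self.version = 0        # monotonically increasing update counter
        self.state = {}

    def local_update(self, site, **changes):
        """Apply an update immediately if `site` owns the object."""
        if site != self.owner:
            raise PermissionError(f"{site} must acquire ownership first")
        self.state.update(changes)
        self.version += 1
        return self.version     # version number orders remote replays

    def transfer_ownership(self, requester):
        """Serialise writers: hand the write token to one site at a time."""
        self.owner = requester

    def remote_apply(self, version, changes):
        """Replay a remote update only if it is the next in sequence."""
        if version != self.version + 1:
            return False        # out of order: buffer or re-request
        self.state.update(changes)
        self.version = version
        return True
```

Ownership transfer trades a round-trip for the guarantee that concurrent writes cannot diverge; the version check on replay provides the ordering and update control the abstract refers to.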


Collaborative Virtual Environments | 2004

A study of event traffic during the shared manipulation of objects within a collaborative virtual environment

Robin Wolff; David J. Roberts; Oliver Otto

Event management must balance consistency and responsiveness against the requirements of shared object interaction within a Collaborative Virtual Environment (CVE) system. An understanding of the event traffic during collaborative tasks helps in the design of all aspects of a CVE system. The application, user activity, the display interface and the network resources all play a part in determining the characteristics of event management. Linked cubic displays lend themselves well to supporting natural social human communication between remote users. To allow users to communicate naturally and subconsciously, continuous and detailed tracking is necessary. This, however, is hard to balance with the real-time consistency constraints of general shared object interaction. This paper aims to explain these issues through a detailed examination of the event traffic produced by a typical CVE, using both immersive and desktop displays, while supporting a variety of collaborative activities. We analyse event traffic during a highly collaborative task requiring various forms of shared object manipulation, including the concurrent manipulation of a shared object. Event sources are categorised, and the influence of the form of object sharing as well as the display device interface is detailed. The presented findings aim to aid the design of future systems.


Virtual Reality | 2006

Factors influencing flow of object focussed collaboration in collaborative virtual environments

David J. Roberts; Ilona Heldal; Oliver Otto; Robin Wolff

Creativity is believed to be helped by an uncluttered state of mind known as flow, and as the trend grows towards less immersive displays to produce an uncluttered workplace, we ask the question: does immersion matter to the flow of distributed group work? The aim of this work is to study the impact of the level of immersion on workflow and presence during object-focussed distributed group work, and to discuss the relevance of these and other factors to supporting flow and creativity. This is approached through a comprehensive literature survey and significant new results. The study attempts to introduce a breadth of factors and relationships, as opposed to proving a hypothesis, and thus takes a wide qualitative rather than deep quantitative approach to testing and analysis.


International Symposium on Visual Computing | 2012

An Evaluation of Open Source Physics Engines for Use in Virtual Reality Assembly Simulations

Johannes Hummel; Robin Wolff; Tobias Stein; Andreas Gerndt; Torsten W. Kuhlen

We present a comparison of five freely available physics engines with specific focus on robotic assembly simulation in virtual reality (VR) environments. The aim was to evaluate the engines with generic settings and minimum parameter tweaking. Our benchmarks consider the minimum collision detection time for a large number of objects, restitution characteristics, as well as constraint reliability and body inter-penetration. A further benchmark tests the simulation of a screw and nut mechanism made of rigid-bodies only, without any analytic approximation. Our results show large deviations across the tested engines and reveal benefits and disadvantages that help in selecting the appropriate physics engine for assembly simulations in VR.


Distributed Simulation and Real-Time Applications | 2008

Communicating Eye Gaze across a Distance without Rooting Participants to the Spot

Robin Wolff; David J. Roberts; Alessio Murgia; Norman Murray; John Rae; William Steptoe; Anthony Steed; Paul M. Sharkey

Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations to not only see what each other are doing but follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough to interpret their focus during a multi-way interaction, along with communicating other verbal and non-verbal language. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical, and especially temporal, characteristics of the system.


Virtual Reality Software and Technology | 2004

Supporting social human communication between distributed walk-in displays

David J. Roberts; Robin Wolff; Oliver Otto; Dieter Kranzlmueller; Christoph Anthes; Anthony Steed

Future teleconferencing may enhance communication between remote people by supporting non-verbal communication within an unconstrained space where people can move around and share the manipulation of artefacts. By linking walk-in displays with a Collaborative Virtual Environment (CVE) platform, we are able to physically situate a distributed team in a spatially organised social and information context. We have found this to demonstrate unprecedented naturalness in the use of space and body during non-verbal communication and interaction with objects. However, relatively little is known about how people interact through this technology, especially while sharing the manipulation of objects. We observed people engaged in such a task while geographically separated across national boundaries. Our analysis is organised into collaborative scenarios, each of which requires a distinct balance of social human communication with consistent shared manipulation of objects. Observational results suggest that walk-in displays do not suffer from some of the important drawbacks of other displays. Previous trials have shown that supporting natural non-verbal communication, along with responsive and consistent shared object manipulation, is hard to achieve. To better understand this problem, we take a close look at how the scenario impacts the characteristics of event traffic. We conclude by suggesting how various strategies might reduce the consistency problem for particular scenarios.


Distributed Simulation and Real-Time Applications | 2009

Comparing the End to End Latency of an Immersive Collaborative Environment and a Video Conference

David J. Roberts; Toby Duckworth; Carl M. Moore; Robin Wolff; John O'Hare

Latency in a communication system can confuse a conversation through loss of causality as people exchange verbal and non-verbal nuances. This paper compares true end-to-end latencies across an immersive virtual environment and a video conference link, using the same approach to measure both. Our approach is to measure end-to-end latency by filming the movements of a participant and their remote representation through synchronised cameras. We also compare contemporary and traditional immersive display and capture devices, whilst also measuring event latency taken from log files. We compare an immersive collaborative virtual environment to a video conference, as both attempt to reproduce different aspects of the face-to-face meeting, the former favouring appearance and the latter attention. The results inform not only the designers of both approaches but also set the requirements for future developments in 3D video, which has the potential to faithfully reproduce both appearance and attention.

Collaboration


Dive into Robin Wolff's collaborations.

Top Co-Authors
Janki Dodiya

German Aerospace Center


Anthony Steed

University College London


John Rae

University of Roehampton
