Publication


Featured research published by William Steptoe.


IEEE Virtual Reality Conference | 2009

Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together

David J. Roberts; Robin Wolff; John Rae; Anthony Steed; Rob Aspin; Moira McIntyre; Adriana Pena; Oyewole Oyekoya; William Steptoe

Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, there are two main technologies that can be used today: video-conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour but practically the targets of eye-gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern being looked at and what else is looked at, when someone gazes into their space from another location, ICVE alone can continue to do this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE and co-location are compared. The impact of factors of alignment, lighting, resolution, and perspective distortion are minimised through a set of pilot experiments, before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the mis-judgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free viewpoint video based methods.


International Symposium on Mixed and Augmented Reality | 2014

Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality

William Steptoe; Simon J. Julier; Anthony Steed

Non-photorealistic rendering (NPR) has been shown to be a powerful way to enhance both visual coherence and immersion in augmented reality (AR). However, it has only been evaluated in idealized pre-rendered scenarios with handheld AR devices. In this paper we investigate the use of NPR in an immersive, stereoscopic, wide field-of-view head-mounted video see-through AR display. This is a demanding scenario, which introduces many real-world effects including latency, tracking failures, optical artifacts and mismatches in lighting. We present the AR-Rift, a low-cost video see-through AR system using an Oculus Rift and consumer webcams. We investigate the themes of consistency and immersion as measures of psychophysical non-mediation. An experiment measures discernability and presence in three visual modes: conventional (unprocessed video and graphics), stylized (edge-enhancement) and virtualized (edge-enhancement and color extraction). The stylized mode results in chance-level discernability judgments, indicating successful integration of virtual content to form a visually coherent scene. Conventional and virtualized rendering bias judgments towards correct or incorrect respectively. Presence, as it may apply to immersive AR, was measured both behaviorally and subjectively, and is seen to be similarly high over all three conditions.
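The stylized and virtualized modes above are post-processing passes applied to the camera feed. The paper does not reproduce its rendering code; the following is a minimal sketch of the general idea only, assuming OpenCV: edge enhancement for the stylized mode, plus coarse colour quantisation standing in for colour extraction in the virtualized mode.

```python
# Minimal sketch (not the authors' implementation) of the two NPR modes
# described above: "stylized" = edge enhancement, "virtualized" = edge
# enhancement plus coarse colour quantisation. Assumes OpenCV and NumPy.
import cv2
import numpy as np

def stylize(frame, quantize=False, levels=8):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    base = frame
    if quantize:
        # Coarse colour extraction: reduce each channel to a few levels.
        step = 256 // levels
        base = (frame // step) * step + step // 2
    # Darken pixels on detected edges to emphasise outlines.
    out = base.copy()
    out[edges > 0] = (0, 0, 0)
    return out

# Example: process one webcam frame per rendered AR frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    stylized = stylize(frame)                    # "stylized" mode
    virtualized = stylize(frame, quantize=True)  # "virtualized" mode
cap.release()
```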


Human Factors in Computing Systems | 2012

SphereAvatar: a situated display to represent a remote collaborator

Oyewole Oyekoya; William Steptoe; Anthony Steed

An emerging form of telecollaboration utilizes situated or mobile displays at a physical destination to virtually represent remote visitors. An example is a personal telepresence robot, which acts as a physical proxy for a remote visitor, and uses cameras and microphones to capture its surroundings, which are transmitted back to the visitor. We propose the use of spherical displays to represent telepresent visitors at a destination. We suggest that the use of such 360 degree displays in a telepresence system has two key advantages: it is possible to understand the identity of the visitor from any viewpoint; and with suitable graphical representation, it is possible to tell where the visitor is looking from any viewpoint. In this paper, we investigate how to optimally represent a visitor as an avatar on a spherical display by evaluating how varying representations are able to accurately convey head gaze.


Distributed Simulation and Real-Time Applications | 2008

Communicating Eye Gaze across a Distance without Rooting Participants to the Spot

Robin Wolff; David J. Roberts; Alessio Murgia; Norman Murray; John Rae; William Steptoe; Anthony Steed; Paul M. Sharkey

Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other are doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze as well as activity and gesture to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough for their focus to be interpreted during a multi-way interaction, alongside other verbal and non-verbal communication. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical and especially temporal characteristics of the system.


Frontiers in Neurology | 2012

A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

Daniel Perez-Marcos; Massimiliano Solazzi; William Steptoe; Oyewole Oyekoya; Antonio Frisoli; Tim Weyrich; Anthony Steed; Franco Tecchia; Mel Slater; Maria V. Sanchez-Vives

Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems.


IEEE Virtual Reality Conference | 2008

High-Fidelity Avatar Eye-Representation

William Steptoe; Anthony Steed

In collaborative virtual environments, the visual representation of avatars has been shown to be an important determinant of participant behaviour and response. We explored the influence of varying conditions of eye-representation in our high-fidelity avatar by measuring how accurately people can identify the avatar's point-of-regard (direction of gaze), together with subjective authenticity assessments of the avatar's behaviour and visual representation. The first of two variables investigated was socket-deformation, which is to say that our avatar's eyelids, eyebrows and surrounding areas morphed realistically depending on eye-rotation. The second was vergence of our avatar's eyes to the exact point-of-regard. Our results suggest that the two variables significantly influence the accuracy of point-of-regard identification. This accuracy is highly dependent on the combination of viewing-angle and the point-of-regard itself. We found that socket-deformation in particular has a highly positive impact on the perceived authenticity of our avatar's overall appearance, and when judging just the eyes. However, despite favourable subjective ratings, overall performance during the point-of-regard identification task was actually worse with the highest-quality avatar. This provides more evidence that as we move towards using higher-fidelity avatars, there will be a trade-off between supporting realism of representation and supporting the actual communicative task.
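Vergence here means rotating each eyeball independently so that both fixate the same 3D point, rather than sharing one parallel gaze direction. The paper does not give an implementation; a minimal sketch of that geometry, with hypothetical names and coordinate conventions, might look like this.

```python
# Hypothetical sketch of eye vergence: aim each eyeball at the same
# point of regard instead of giving both eyes a parallel gaze direction.
# Names and coordinate conventions are illustrative, not from the paper.
import numpy as np

def eye_rotation(eye_pos, point_of_regard):
    """Return (yaw, pitch) in radians that rotates an eye at eye_pos,
    initially looking down +Z, to fixate point_of_regard."""
    d = np.asarray(point_of_regard, float) - np.asarray(eye_pos, float)
    d /= np.linalg.norm(d)
    yaw = np.arctan2(d[0], d[2])    # rotation about the vertical (Y) axis
    pitch = np.arcsin(-d[1])        # rotation about the horizontal (X) axis
    return yaw, pitch

# Eyes roughly 6.4 cm apart: a near target yields clearly different yaws
# (vergence), while a distant target yields nearly parallel gaze.
left, right = (-0.032, 0.0, 0.0), (0.032, 0.0, 0.0)
near_target = (0.0, 0.0, 0.3)
print(eye_rotation(left, near_target), eye_rotation(right, near_target))
```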


Human Factors in Computing Systems | 2014

Comparing flat and spherical displays in a trust scenario in avatar-mediated interaction

Ye Pan; William Steptoe; Anthony Steed

We report on two experiments that investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. By monitoring advice seeking behavior, our first experiment demonstrates that if participants observe an avatar at an oblique viewing angle on a flat display, they are less able to discriminate between expert and non-expert advice than if they observe the avatar face-on. We then introduce a novel spherical display and a ray-traced rendering technique that can display an avatar that can be seen correctly from any viewing direction. We expect that a spherical display has advantages over a flat display because it better supports non-verbal cues, particularly gaze direction, since it presents a clear and undistorted viewing aspect at all angles. Our second experiment compares the spherical display to a flat display. Whilst participants can discriminate expert advice regardless of display, a negative bias towards the flat screen emerges at oblique viewing angles. This result emphasizes the ability of the spherical display to be viewed qualitatively similarly from all angles. Together the experiments demonstrate how trust can be altered depending on how one views the avatar.


Computer Animation and Virtual Worlds | 2010

Eyelid kinematics for virtual characters

William Steptoe; Oyewole Oyekoya; Anthony Steed

Realistic character animation requires elaborate rigging built on top of high-quality 3D models. Sophisticated anatomically based rigs are often the choice of visual effects studios where life-like animation of CG characters is the primary objective. However, rigging a character with a muscular-skeletal system is a very involved and time-consuming process, even for professionals. Although there have been recent research efforts to automate all or some parts of the rigging process, the complexity of anatomically based rigging nonetheless opens up new research challenges. We propose a new method to automate anatomically based rigging that transfers an existing rig from one character to another. The method is based on data interpolation in the surface and volume domains, where various rigging elements can be transferred between different models. As it only requires a small number of corresponding input feature points, users can produce highly detailed rigs for a variety of desired characters with ease.


Distributed Simulation and Real-Time Applications | 2008

A Tool for Replay and Analysis of Gaze-Enhanced Multiparty Sessions Captured in Immersive Collaborative Environments

Alessio Murgia; Robin Wolff; William Steptoe; Paul M. Sharkey; David J. Roberts; Estefania Guimaraes; Anthony Steed; John Rae

A desktop tool for replay and analysis of gaze-enhanced multiparty virtual collaborative sessions is described. We linked three CAVE-like environments, creating a multiparty collaborative virtual space where avatars are animated with 3D gaze as well as head and hand motions in real time. Log files are recorded for subsequent playback and analysis using the proposed software tool. During replay, the user can rotate the viewpoint and navigate in the simulated 3D scene. The playback mechanism relies on multiple distributed log files captured at every site. This structure enables an observer to experience latencies of movement and information transfer for every site, which is important for conversation analysis. Playback uses an event-replay algorithm, modified to allow fast traversal of the scene by selective rendering of nodes, and to simulate fast random access. The tool's analysis module can show each participant's 3D gaze points and areas where gaze has been concentrated.
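The playback mechanism described above merges timestamped events from several per-site log files and presents them in global time order, which is what exposes per-site latencies to an observer. As a rough illustration only (the tool's actual log format and API are not published here), a k-way merge over per-site logs could be sketched as follows.

```python
# Rough illustration (not the tool's actual format or API) of replaying
# events merged from multiple per-site log files in timestamp order.
# Assumes each site's log is already time-ordered.
import heapq

def read_log(path):
    """Yield (timestamp, site, payload) tuples from one site's log file.
    Assumes a simple 'timestamp<TAB>payload' line format for illustration."""
    with open(path) as f:
        for line in f:
            ts, payload = line.rstrip("\n").split("\t", 1)
            yield float(ts), path, payload

def replay(paths):
    """Merge all site logs and yield events globally ordered by timestamp."""
    streams = [read_log(p) for p in paths]
    for ts, site, payload in heapq.merge(*streams):
        yield ts, site, payload

for ts, site, event in replay(["site_a.log", "site_b.log", "site_c.log"]):
    print(f"{ts:10.3f}  {site:12s}  {event}")
```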


Human Factors in Computing Systems | 2013

PanoInserts: mobile spatial teleconferencing

Fabrizio Pece; William Steptoe; Fabian Wanner; Simon J. Julier; Tim Weyrich; Jan Kautz; Anthony Steed

We present PanoInserts: a novel teleconferencing system that uses smartphone cameras to create a surround representation of meeting places. We take a static panoramic image of a location into which we insert live videos from smartphones. We use a combination of marker- and image-based tracking to position the video inserts within the panorama, and transmit this representation to a remote viewer. We conduct a user study comparing our system with fully-panoramic video and conventional webcam video conferencing for two spatial reasoning tasks. Results indicate that our system performs comparably with fully-panoramic video, and better than webcam video conferencing in tasks that require an accurate surrounding representation of the remote space. We discuss the representational properties and usability of varying video presentations, exploring how they are perceived and how they influence users when performing spatial reasoning tasks.
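Positioning a live video insert within the static panorama amounts to estimating a homography from tracked correspondences (marker corners or matched image features) and warping each frame into panorama coordinates. The sketch below illustrates that single step, assuming OpenCV and hypothetical correspondence arrays; it is not the PanoInserts pipeline itself.

```python
# Minimal sketch (assuming OpenCV; not the PanoInserts pipeline itself):
# warp a live smartphone frame into a static panorama using a homography
# estimated from tracked point correspondences (e.g. marker corners).
import cv2
import numpy as np

def insert_frame(panorama, frame, pts_frame, pts_pano):
    """Warp `frame` into `panorama` coordinates and composite it in.
    pts_frame / pts_pano: Nx2 arrays of corresponding points (N >= 4)."""
    H, _ = cv2.findHomography(np.float32(pts_frame), np.float32(pts_pano),
                              cv2.RANSAC)
    h, w = panorama.shape[:2]
    warped = cv2.warpPerspective(frame, H, (w, h))
    # Composite: copy warped pixels wherever the frame actually landed.
    mask = cv2.warpPerspective(np.ones(frame.shape[:2], np.uint8), H, (w, h))
    out = panorama.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```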

Collaboration


Dive into William Steptoe's collaborations.

Top Co-Authors

Anthony Steed, University College London
John Rae, University of Roehampton
Oyewole Oyekoya, University College London
Tim Weyrich, University College London
Mel Slater, University of Barcelona
Fabrizio Pece, University College London