
Publication


Featured research published by Anthony Steed.


Presence: Teleoperators & Virtual Environments | 1994

Depth of presence in virtual environments

Mel Slater; Martin Usoh; Anthony Steed

This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth: a participant can simulate the process of entering the virtual environment while already in such an environment, and this can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a pre-experiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which they experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with the visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising, since the virtual reality system used was primarily visual. The analysis also showed a significant positive association with stacking depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match the movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.
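The logistic regression used in the analysis can be illustrated with a toy, dependency-free fit. The single-predictor setup, the data, and the learning-rate schedule below are all invented for illustration; they are not the paper's model or data.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Minimal one-predictor logistic regression fitted by batch
    gradient descent. An illustrative stand-in for a logistic
    regression analysis, not the paper's actual model."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient w.r.t. weight
            gb += (p - y)                              # gradient w.r.t. bias
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b
```

On toy data where larger predictor values go with positive responses, the fitted weight comes out positive, mirroring the kind of "significantly positively associated" finding reported above.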


Virtual Reality Software and Technology | 1995

Taking steps: the influence of a walking technique on presence in virtual reality

Mel Slater; Martin Usoh; Anthony Steed

This article presents an interactive technique for moving through an immersive virtual environment (or “virtual reality”). The technique is suitable for applications where locomotion is restricted to ground level. The technique is derived from the idea that presence in virtual environments may be enhanced the stronger the match between proprioceptive information from human body movements and sensory feedback from the computer-generated displays. The technique is an attempt to simulate body movements associated with walking. The participant “walks in place” to move through the virtual environment across distances greater than the physical limitations imposed by the electromagnetic tracking devices. A neural network is used to analyze the stream of coordinates from the head-mounted display, to determine whether or not the participant is walking on the spot. Whenever it determines the walking behavior, the participant is moved through virtual space in the direction of his or her gaze. We discuss two experimental studies to assess the impact on presence of this method in comparison to the usual hand-pointing method of navigation in virtual reality. The studies suggest that subjective rating of presence is enhanced by the walking method provided that participants associate subjectively with the virtual body provided in the environment. An application of the technique to climbing steps and ladders is also presented.
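The paper's classifier is a neural network over the stream of tracker coordinates. As a rough illustration of the underlying idea, detecting step-like vertical head oscillation, here is a deliberately simple threshold-based stand-in; the frequency band, amplitude threshold, and frame rate are invented for illustration and are not the paper's parameters.

```python
def is_walking_in_place(head_heights, frame_rate=60.0,
                        min_freq=1.0, max_freq=3.0, min_amp=0.01):
    """Crude stand-in for the paper's neural-network classifier:
    flag walking-in-place when vertical head bobbing falls within a
    step-like frequency band with sufficient amplitude.
    All thresholds here are illustrative."""
    n = len(head_heights)
    mean = sum(head_heights) / n
    amp = max(abs(h - mean) for h in head_heights)
    if amp < min_amp:           # head essentially still: not walking
        return False
    # Estimate oscillation frequency from zero crossings of the
    # detrended height signal (two crossings per full cycle).
    crossings = sum(
        1 for a, b in zip(head_heights, head_heights[1:])
        if (a - mean) * (b - mean) < 0
    )
    freq = crossings * frame_rate / (2.0 * n)  # Hz
    return min_freq <= freq <= max_freq
```

A real system would, as in the paper, learn this decision from data rather than hand-tune thresholds, but the input (a window of head positions) and the output (a walking/not-walking decision) are the same.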


Presence: Teleoperators & Virtual Environments | 2000

A Virtual Presence Counter

Mel Slater; Anthony Steed

This paper describes a new measure for presence in immersive virtual environments (VEs) that is based on data that can be unobtrusively obtained during the course of a VE experience. At different times during an experience, a participant will occasionally switch between interpreting the totality of sensory inputs as forming the VE or the real world. The number of transitions from virtual to real is counted, and, using some simplifying assumptions, a probabilistic Markov chain model can be constructed to model these transitions. This model can be used to estimate the equilibrium probability of being present in the VE. This technique was applied in the context of an experiment to assess the relationship between presence and body movement in an immersive VE. The movement was that required by subjects to reach out and touch successive pieces on a three-dimensional chess board. The experiment included twenty subjects, ten of whom had to reach out to touch the chess pieces (the active group) and ten of whom only had to click a handheld mouse button (the control group). The results revealed a significant positive association in the active group between body movement and presence. The results lend support to interaction paradigms that are based on maximizing the match between sensory data and proprioception.
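The two-state Markov model can be sketched directly: estimate the transition probabilities from counted switches and compute the stationary probability of the "present" state. The estimator below is a generic textbook one that assumes counts of all four transition types are observed; the paper's own estimator and simplifying assumptions may differ.

```python
def stationary_presence(v_to_r, v_to_v, r_to_v, r_to_r):
    """Equilibrium probability of being 'present' in a two-state
    (virtual/real) Markov chain, estimated from transition counts.
    Illustrative only; not necessarily the paper's exact estimator."""
    p = v_to_r / (v_to_r + v_to_v)  # P(switch virtual -> real)
    q = r_to_v / (r_to_v + r_to_r)  # P(switch real -> virtual)
    # Stationary distribution of a two-state chain: pi_virtual = q / (p + q)
    return q / (p + q)
```

For example, a participant who rarely drops out of the VE but returns quickly after each break gets an equilibrium presence probability close to 1.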


Human Factors | 1998

The Influence of Body Movement on Subjective Presence in Virtual Environments

Mel Slater; Anthony Steed; John D. McCarthy; Francesco Maringelli

We describe an experiment to assess the influence of body movements on presence in a virtual environment. In the experiment, 20 participants were asked to walk through a virtual field of trees and count the trees with diseased leaves. A 2 × 2 between-subjects design was used to assess the influence of two factors on presence: tree height variation and task complexity. The field with greater variation in tree height required participants to bend down and look up more than the field with lower variation. In the higher-complexity task, participants were told to remember the distribution of diseased trees in the field as well as to count them. The results showed a significant positive association between reported presence and the amount of body movement, in particular head yaw, and the extent to which participants bent down and stood up. There was also a strong interaction effect between task complexity and gender: women in the more complex task reported a much lower sense of presence than in the simpler task. For applications in which presence is an important requirement, this research suggests that presence will be increased when interaction techniques are employed that permit the user to engage in whole-body movement.


IEEE Computer Graphics and Applications | 1999

Public speaking in virtual reality: facing an audience of avatars

Mel Slater; David-Paul Pertaub; Anthony Steed

What happens when someone talks in public to an audience they know to be entirely computer generated: an audience of avatars? If the virtual audience seems attentive, well-behaved, and interested, if they show positive facial expressions with complimentary actions such as clapping and nodding, does the speaker infer correspondingly positive evaluations of performance and show fewer signs of anxiety? On the other hand, if the audience seems hostile, disinterested, and visibly bored, if they have negative facial expressions and exhibit reactions such as head-shaking, loud yawning, turning away, falling asleep, and walking out, does the speaker infer correspondingly negative evaluations of performance and show more signs of anxiety? We set out to study this question during the summer of 1998. We designed a virtual public speaking scenario, followed by an experimental study. We wanted mainly to explore the effectiveness of virtual environments (VEs) in psychotherapy for social phobias. Rather than plunge straight in and design a virtual reality therapy tool, we first tackled the question of whether real people's emotional responses are appropriate to the behavior of the virtual people with whom they may interact. The project used DIVE (Distributed Interactive Virtual Environment) as the basis for constructing a working prototype of a virtual public speaking simulation. We constructed, as a Virtual Reality Modeling Language (VRML) model, a virtual seminar room that matched the actual seminar room in which subjects completed their various questionnaires and met with the experimenters.


ACM Transactions on Computer-Human Interaction | 2005

Expected, sensed, and desired: A framework for designing sensing-based interaction

Steve Benford; Holger Schnädelbach; Boriana Koleva; Rob Anastasi; Chris Greenhalgh; Tom Rodden; Jonathan Green; Ahmed Ghali; Tony P. Pridmore; Bill Gaver; Andy Boucher; Brendan Walker; Sarah Pennington; Albrecht Schmidt; Hans Gellersen; Anthony Steed

Movements of interfaces can be analyzed in terms of whether they are expected, sensed, and desired. Expected movements are those that users naturally perform; sensed are those that can be measured by a computer; and desired movements are those that are required by a given application. We show how a systematic comparison of expected, sensed, and desired movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: pointing flashlights at walls and posters in order to play sounds; the Augurscope II, a mobile augmented reality interface for outdoors; and the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs. We propose that this approach can help to build a bridge between the analytic and inspirational approaches to design and can help designers meet the challenges raised by a diversification of sensing technologies and interface forms, increased mobility, and an emerging focus on technologies for everyday life.
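The framework's systematic comparison can be made concrete with set operations. The movement names below are invented examples for a flashlight-style interface, and the category labels are illustrative shorthand, while the partitioning itself follows the expected/sensed/desired analysis described above.

```python
def classify(expected, sensed, desired):
    """Partition interface movements by the expected/sensed/desired
    analysis. Category labels are illustrative, not the paper's terms."""
    return {
        "usable": expected & sensed & desired,           # works as intended
        "undetectable": (expected & desired) - sensed,   # wanted but unmeasurable
        "surprises": sensed - expected,                  # measurable but unnatural
        "unused_capability": (expected & sensed) - desired,  # detectable, unneeded
    }
```

The interesting design insights come from the non-overlapping regions: movements the application wants but cannot sense point to problems, while movements that are sensed but unexpected can inspire new features.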


International Conference on Computer Graphics and Interactive Techniques | 2012

3D-printing of non-assembly, articulated models

Jacques Calì; Dan Andrei Calian; Cristina Amati; Rebecca Kleinberger; Anthony Steed; Jan Kautz; Tim Weyrich

Additive manufacturing (3D printing) is commonly used to produce physical models for a wide variety of applications, from archaeology to design. While static models are directly supported, it is desirable to also be able to print models with functional articulations, such as a hand with joints and knuckles, without the need for manual assembly of joint components. Apart from having to address limitations inherent to the printing process, this poses a particular challenge for articulated models that should be posable: to allow the model to hold a pose, joints need to exhibit internal friction to withstand gravity, without their parts fusing during 3D printing. This has not been possible with previous printable joint designs. In this paper, we propose a method for converting 3D models into printable, functional, non-assembly models with internal friction. To this end, we have designed an intuitive work-flow that takes an appropriately rigged 3D model, automatically fits novel 3D-printable and posable joints, and provides an interface for specifying rotational constraints. We show a number of results for different articulated models, demonstrating the effectiveness of our method.


Computers & Graphics | 2001

Collaborating in networked immersive spaces: as good as being there together?

Ralph Schroeder; Anthony Steed; Ann-Sofie Axelsson; Ilona Heldal; Åsa Abelin; Josef Wideström; Alexander Nilsson; Mel Slater

In this paper we present the results of a trial in which two participants collaborated on a puzzle-solving task in networked virtual environments. The task was a Rubik's cube-type puzzle, which meant that the two participants had to interact with the space and with each other very intensively, and they did this successfully despite the limitations of the networked situation. We compare collaboration in networked immersive projection technology (IPT) systems with previous results concerning collaboration in an IPT system linked with a desktop computer, and also with collaboration on the same task in the real world. Our findings show that task performance in networked IPTs and in the real scenario are very similar to each other, whereas IPT-to-desktop performance is much poorer. Results about participants' experience of 'presence', 'co-presence' and collaboration shed further light on these findings.


Intelligent Virtual Agents | 2007

Spatial Social Behavior in Second Life

Doron Friedman; Anthony Steed; Mel Slater

We have developed software bots that inhabit the popular online social environment Second Life (SL). Our bots can wander around, collect data, engage in simple interactions, and carry out simple automated experiments. In this paper we use our bots to study spatial social behavior. We found an indication that SL users display distinct spatial behavior when interacting with other users. In addition, in an automated experiment carried out by our bot, we found that users, when their avatars were approached by our bot, tended to respond by moving their avatar, further indicating the significance of proxemics in SL.


Eurographics | 2006

Building Expression into Virtual Characters

Vinoba Vinayagamoorthy; Marco Gillies; Anthony Steed; Emmanuel Tanguy; Xueni Pan; Celine Loscos; Mel Slater

Virtual characters are an important part of many 3D graphical simulations. In entertainment or training applications, virtual characters might be one of the main mechanisms for creating and developing content and scenarios. In such applications the user may need to interact with a number of different characters that need to invoke specific responses in the user, so that the user interprets the scenario in the way that the designer intended. Whilst representations of virtual characters have come a long way in recent years, interactive virtual characters tend to be a bit “wooden” with respect to their perceived behaviour. In this STAR we give an overview of work on expressive virtual characters. In particular, we assume that a virtual character representation is already available, and we describe a variety of models and methods that are used to give the characters more “depth” so that they are less wooden and more plausible. We cover models of individual characters’ emotion and personality, models of interpersonal behaviour and methods for generating expression.

Collaboration


Dive into Anthony Steed's collaboration.

Top Co-Authors

Mel Slater (University of Barcelona)
William Steptoe (University College London)
Martin Usoh (Queen Mary University of London)
Oyewole Oyekoya (University College London)
Ilona Heldal (Chalmers University of Technology)
Ye Pan (University College London)
Emmanuel Frécon (Swedish Institute of Computer Science)