Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephen W. Gilroy is active.

Publication


Featured research published by Stephen W. Gilroy.


Affective Computing and Intelligent Interaction | 2009

PAD-based multimodal affective fusion

Stephen W. Gilroy; Marc Cavazza; Markus Niiranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; M. Benayoun; Hartmut Seichter; Mark Billinghurst

The study of multimodality is comparatively less developed for affective interfaces than for their traditional counterparts. However, one condition for the successful development of affective interface technologies is the development of frameworks for real-time multimodal fusion. In this paper, we describe an approach to multimodal affective fusion which relies on a dimensional model, Pleasure-Arousal-Dominance (PAD), to support the fusion of affective modalities, each input modality being represented as a PAD vector. We describe how this model supports both affective content fusion and temporal fusion within a unified approach. We report results from early user studies which confirm the existence of a correlation between measured affective input and user temperament scores.
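
As a rough illustration of the kind of fusion the abstract describes, the sketch below combines per-modality PAD vectors using a confidence weight and an exponential temporal decay. The function and parameter names are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def fuse_pad(modalities, decay=0.5):
    """Fuse per-modality PAD vectors into one affective estimate.

    modalities: list of (pad_vector, weight, age_seconds) tuples, where
    pad_vector is (pleasure, arousal, dominance) in [-1, 1].
    Older observations are discounted exponentially (temporal fusion).
    """
    fused = np.zeros(3)
    total = 0.0
    for pad, weight, age in modalities:
        w = weight * np.exp(-decay * age)   # confidence x temporal decay
        fused += w * np.asarray(pad, dtype=float)
        total += w
    return fused / total if total > 0 else fused

# Example: speech sounds aroused, video looks pleased, both recent.
print(fuse_pad([((0.1, 0.8, 0.0), 0.7, 0.2),
                ((0.6, 0.3, 0.1), 0.5, 1.0)]))
```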


Advances in Computer Entertainment Technology | 2009

Using affective trajectories to describe states of flow in interactive art

Stephen W. Gilroy; Marc Cavazza; M. Benayoun

Interactive Art installations often integrate sophisticated interaction techniques with visual presentations, contributing to a rich user experience. They also provide a privileged environment in which to study user experience, using the same sensing data that support interaction. In this paper, using the affective interface of an Augmented Reality Art installation, we introduce a framework relating real-time emotional data to phenomenological models of user experience, in particular the concept of Flow. We propose to analyse trajectories of affect in a continuous emotional space (Pleasure-Arousal-Dominance) to characterise user experience. Early experiments with several subjects interacting in pairs with the installation support this mapping on the basis of Flow questionnaires. This approach has potential implications for the analysis of user experience across Art and Entertainment applications.
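
To make the idea of affective trajectories concrete, here is a minimal sketch that measures how much of a PAD trajectory lies near a hypothetical Flow region. The paper characterises Flow from questionnaire data, so the fixed centre and radius below are purely illustrative:

```python
import numpy as np

# Hypothetical centre of a "Flow" region in PAD space; the paper derives
# its characterisation from questionnaires, not from a fixed point.
FLOW_CENTRE = np.array([0.6, 0.5, 0.6])

def flow_proximity(trajectory, radius=0.4):
    """Fraction of a PAD trajectory spent within `radius` of the Flow centre.

    trajectory: (T, 3) array of per-frame (P, A, D) samples in [-1, 1].
    """
    traj = np.asarray(trajectory, dtype=float)
    dists = np.linalg.norm(traj - FLOW_CENTRE, axis=1)
    return float(np.mean(dists < radius))

# Example: a short trajectory drifting from neutral towards the Flow region.
t = np.linspace(0, 1, 50)[:, None]
trajectory = t * FLOW_CENTRE
print(f"time near Flow: {flow_proximity(trajectory):.0%}")
```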


Advances in Computer Entertainment Technology | 2008

An affective model of user experience for interactive art

Stephen W. Gilroy; Marc Cavazza; Rémi Chaignon; Satu-Marja Mäkelä; Markus Niiranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; Hartmut Seichter; Mark Billinghurst; M. Benayoun

The development of Affective Interface technologies makes it possible to envision a new generation of Digital Arts and Entertainment applications, in which interaction will be based directly on the analysis of user experience. In this paper, we describe an approach to the development of Multimodal Affective Interfaces that supports real-time analysis of user experience as part of an Augmented Reality Art installation. The system relies on a PAD dimensional model of emotion to support the fusion of affective modalities, each input modality being represented as a PAD vector. A further advantage of the PAD model is that it can support a representation of affective responses that relate to aesthetic impressions.
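
Since the abstract mentions representing affective responses in PAD space, the following sketch maps a PAD vector to one of Mehrabian's eight temperament octants, a common way to attach labels to regions of the space. Thresholding each axis at zero is a simplification and is not taken from the paper:

```python
# Mehrabian's eight PAD temperament octants, keyed by the signs of
# (pleasure, arousal, dominance). Thresholding at zero is a simplification.
OCTANTS = {
    (+1, +1, +1): "exuberant", (-1, -1, -1): "bored",
    (+1, +1, -1): "dependent", (-1, -1, +1): "disdainful",
    (+1, -1, +1): "relaxed",   (-1, +1, -1): "anxious",
    (+1, -1, -1): "docile",    (-1, +1, +1): "hostile",
}

def temperament(pad):
    """Label a (P, A, D) vector with its temperament octant."""
    signs = tuple(1 if v >= 0 else -1 for v in pad)
    return OCTANTS[signs]

print(temperament((0.6, 0.3, 0.1)))  # -> "exuberant"
```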


Content-Based Multimedia Indexing | 2010

Users' response to affective film content: A narrative perspective

Luca Canini; Stephen W. Gilroy; Marc Cavazza; Riccardo Leonardi; Sergio Benini

In this paper, we take a human-centred view of the affective content of films. We investigate the relationship between users' physiological responses and multimedia features extracted from the movies, from the perspective of narrative evolution rather than by measuring average values. We found a dynamic correlation between arousal, derived from measures of Galvanic Skin Response during film viewing, and specific multimedia features in both the sound and video domains. Dynamic physiological measurements were also consistent with post-experiment self-assessment by the subjects. These findings suggest that narrative aspects (including staging) are central to the understanding of video affective content, and that a direct mapping of video features to emotional models taken from psychology may not capture these phenomena in a straightforward manner.
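
A simple way to look for the dynamic (rather than average) correlation described above is a sliding-window Pearson correlation between the arousal signal and a multimedia feature. The sketch below, with synthetic data, is an illustrative assumption, not the authors' analysis pipeline:

```python
import numpy as np

def windowed_correlation(arousal, feature, win=64, hop=16):
    """Sliding-window Pearson correlation between an arousal signal
    (e.g. derived from GSR) and one multimedia feature (e.g. sound
    energy), both resampled to a common frame rate.
    """
    out = []
    for start in range(0, len(arousal) - win + 1, hop):
        a = arousal[start:start + win]
        f = feature[start:start + win]
        out.append(np.corrcoef(a, f)[0, 1])
    return np.array(out)

# Example with synthetic signals: the feature partly drives arousal.
rng = np.random.default_rng(0)
feature = rng.standard_normal(1000)
arousal = 0.6 * feature + 0.4 * rng.standard_normal(1000)
print(windowed_correlation(arousal, feature)[:5])
```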


EHCI-DSVIS'04: Engineering Human Computer Interaction and Interactive Systems | 2004

Using interaction style to match the ubiquitous user interface to the device-to-hand

Stephen W. Gilroy; Michael D. Harrison

Ubiquitous computing requires a multitude of devices to have access to the same services. Abstract specifications of user interfaces are designed to separate the definition of a user interface from that of the underlying service. This paper proposes the incorporation of interaction style into this type of specification. By selecting an appropriate interaction style, an interface can be better matched to the device being used. Specifications that are based upon three different styles have been developed, together with a prototype Style-Based Interaction System (SIS) that utilises these specifications to provide concrete user interfaces for a device. An example weather query service is described, including specifications of user interfaces for this service that use the three different styles as well as example concrete user interfaces that SIS can produce.
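
As a toy illustration of separating an abstract service specification from style-specific presentation, the sketch below renders the same weather-query spec in two interaction styles. SIS's actual specification language and renderers are not reproduced here; all names are invented:

```python
# A toy abstract specification for a weather query service, plus two
# style-specific renderers. The point is that the spec says *what* the
# service needs, while each style decides *how* to present it.
SPEC = {"service": "weather", "input": ["location"], "output": ["forecast"]}

def render_form(spec):
    """Form-filling style: all inputs on one screen."""
    fields = ", ".join(f"[{name}: ______]" for name in spec["input"])
    return f"{spec['service']} form: {fields} (Submit)"

def render_menu(spec):
    """Menu-driven style: one choice at a time."""
    items = "\n".join(f"  {i + 1}. enter {name}"
                      for i, name in enumerate(spec["input"]))
    return f"{spec['service']} menu:\n{items}\n  q. run query"

# The same abstract spec yields a different concrete interface per style.
print(render_form(SPEC))
print(render_menu(SPEC))
```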


Augmented Human International Conference | 2014

Towards emotional regulation through neurofeedback

Marc Cavazza; Fred Charles; Gabor Aranyi; Julie Porteous; Stephen W. Gilroy; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler

This paper discusses the potential of Brain-Computer Interfaces based on neurofeedback methods to support emotional control, and pursues emotional control as a mechanism for human augmentation in specific contexts. We illustrate this discussion through two proof-of-concept, fully-implemented experiments: one controlling disposition towards virtual characters using prefrontal alpha asymmetry, and the other aimed at controlling arousal through activity of the amygdala. In the first instance, these systems are intended to explore augmentation technologies that would be incorporated into various media-based systems rather than permanently affect user behaviour.
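
For readers unfamiliar with the first proof-of-concept's input signal, a common way to compute a prefrontal alpha asymmetry index is the log-ratio of alpha band power between right and left prefrontal channels (e.g. F4/F3). The sketch below, using Welch's method, reflects that general practice rather than the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import welch

def alpha_asymmetry(left, right, fs=256.0, band=(8.0, 12.0)):
    """Prefrontal alpha asymmetry: ln(right alpha power) - ln(left).

    Electrode pairing (e.g. F3/F4) and band edges vary across studies.
    """
    def band_power(x):
        freqs, psd = welch(x, fs=fs, nperseg=int(fs))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()  # summed alpha-band power (up to a constant)

    return np.log(band_power(right)) - np.log(band_power(left))

# Synthetic white noise stands in for streamed EEG epochs here.
rng = np.random.default_rng(1)
print(alpha_asymmetry(rng.standard_normal(2048), rng.standard_normal(2048)))
```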


Intelligent User Interfaces | 2012

PINTER: interactive storytelling with physiological input

Stephen W. Gilroy; Julie Porteous; Fred Charles; Marc Cavazza

The dominant interaction paradigm in Interactive Storytelling (IS) systems so far has been active intervention by the user through a variety of modalities. PINTER is an IS system that uses physiological inputs - surface electromyography (EMG) and galvanic skin response (GSR) [1] - as a form of passive interaction, opening up the possibility of using traditional filmic techniques [2, 3] to implement IS without requiring immersion-breaking interactive responses. The goal of this demonstration is to illustrate the ways in which passive interaction, combined with filmic visualisation, dialogue and music, and a plan-based narrative generation approach, can form a new basis for an adaptive interactive narrative.
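
One hypothetical flavour of such a passive-interaction step is sketched below, where GSR is read as arousal, facial EMG as a valence proxy, and the result nudges the plan-based narrative generator. All names and thresholds are invented for illustration and are not PINTER's API:

```python
def adapt_narrative(gsr_arousal, emg_valence, target_arousal=0.5):
    """Return a pacing directive for the narrative generator.

    gsr_arousal: normalised arousal estimate in [0, 1] from GSR.
    emg_valence: signed valence proxy from facial EMG.
    """
    if gsr_arousal < target_arousal - 0.2:
        return "raise-tension"        # viewer under-aroused: escalate
    if gsr_arousal > target_arousal + 0.2 and emg_valence < 0:
        return "offer-relief"         # aroused and displeased: de-escalate
    return "continue"                 # within the intended envelope

print(adapt_narrative(gsr_arousal=0.2, emg_valence=0.1))  # raise-tension
```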


Affective Computing and Intelligent Interaction | 2009

Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS

Thurid Vogt; Elisabeth André; Johannes Wagner; Stephen W. Gilroy; Fred Charles; Marc Cavazza

Most emotion recognition systems still rely exclusively on prototypical emotional vocal expressions that may be uniquely assigned to a particular class. In realistic applications, there is, however, no guarantee that emotions are expressed in a prototypical manner. In this paper, we report on challenges that arise when coping with non-prototypical emotions in the context of the CALLAS project and the IRIS network. CALLAS aims to develop interactive art installations that respond to the multimodal emotional input of performers and spectators in real-time. IRIS is concerned with the development of novel technologies for interactive storytelling. Both research initiatives represent an extreme case of non-prototypicality since neither the stimuli nor the emotional responses to stimuli may be considered as prototypical.


Virtual Reality International Conference | 2014

Integrating virtual agents in BCI neurofeedback systems

Marc Cavazza; Fred Charles; Stephen W. Gilroy; Julie Porteous; Gabor Aranyi; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler

The recent extension of Brain-Computer Interfaces (BCI) to Virtual Worlds has resulted in a growing interest in realistic visual feedback. In this paper, we investigate the potential role of Virtual Agents in neurofeedback systems, which constitute an important paradigm for BCI. We discuss the potential impact of virtual agents on some important determinants of neurofeedback in the context of affective BCI. Throughout the paper, we illustrate the discussion with two fully implemented neurofeedback prototypes featuring virtual agents: the first is an interactive narrative in which the user empathises with the character through neurofeedback; the second recreates a natural environment in which crowd behaviour becomes a metaphor for arousal and the user engages in emotional regulation.
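
The second prototype's "crowd behaviour as a metaphor for arousal" could, for example, be realised by mapping a normalised arousal estimate onto crowd-simulation parameters. The parameter names below are invented for illustration, as the abstract does not specify them:

```python
def crowd_params(arousal):
    """Map an arousal estimate in [0, 1] to crowd-simulation parameters."""
    return {
        "walk_speed": 0.5 + 1.5 * arousal,    # m/s: calm stroll to rush
        "density":    10 + int(90 * arousal), # number of agents in view
        "noise":      0.1 + 0.9 * arousal,    # motion jitter
    }

print(crowd_params(0.2))   # calm scene for a regulated state
print(crowd_params(0.9))   # agitated crowd for high arousal
```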


Conference on Creating, Connecting and Collaborating through Computing | 2011

Supporting Multi-user Participation with Affective Multimodal Fusion

Céline Coutrix; Giulio Jacucci; Ivan Advouevski; Valentin Vervondel; Marc Cavazza; Stephen W. Gilroy; Lorenza Parisi

In this paper, we present an application of affective computing as an art installation designed for group interaction. The Common Touch utilises a large multi-touch display, presenting interactive visualisations of emotive slogans. The artistic brief is to engage participants in the exploration, touching and manipulation of slogans. Participants reveal the missing words of slogans by touching them. The Common Touch utilises several input modalities to build an affective representation of the group interactions: emotional speech recognition, video feature extraction, multi-keyword spotting and touch events. The output of affective fusion is used to refine the selection of slogans presented. We include results from a series of experiments using The Common Touch with 24 subjects in groups of three, with video analysis, logs and questionnaires used for data collection. Through interaction analysis, we describe how users utilised the different modalities, suggesting implications for implementing multimodal aesthetic applications that support multi-user participation: tracking multi-user engagement, coverage of possible affective cues in each modality, multimodality in temporal and event analysis, and dramaturgy and performative interaction.
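
One of the listed implications, tracking multi-user engagement, can be illustrated with a toy tracker in which each participant's engagement level decays over time and is bumped by their touch events. The Common Touch fuses several additional modalities, so this is only a sketch under assumed names:

```python
import time
from collections import defaultdict

class EngagementTracker:
    """Per-user engagement from touch events, with exponential decay."""

    def __init__(self, half_life=10.0):
        self.half_life = half_life          # seconds for engagement to halve
        self.level = defaultdict(float)     # user -> engagement level
        self.last = {}                      # user -> time of last touch

    def touch(self, user, now=None):
        now = time.monotonic() if now is None else now
        dt = now - self.last.get(user, now)
        decayed = self.level[user] * 0.5 ** (dt / self.half_life)
        self.level[user] = decayed + 1.0    # decay, then credit this touch
        self.last[user] = now

tracker = EngagementTracker()
tracker.touch("user-1", now=0.0)
tracker.touch("user-1", now=5.0)
print(tracker.level["user-1"])  # ~1.71: recent, repeated touches
```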

Collaboration


Dive into Stephen W. Gilroy's collaboration.

Top Co-Authors

Eyal Soreq, Tel Aviv Sourasky Medical Center
Gal Raz, Tel Aviv Sourasky Medical Center
Ilana Klovatch, Tel Aviv Sourasky Medical Center
Talma Hendler, Tel Aviv Sourasky Medical Center
Thurid Vogt, University of Augsburg