Cathy Ennis
Trinity College, Dublin
Publications
Featured research published by Cathy Ennis.
Advances in Computer Entertainment Technology | 2013
Keith Anderson; Elisabeth André; Tobias Baur; Sara Bernardini; Mathieu Chollet; Evi Chryssafidou; Ionut Damian; Cathy Ennis; Arjan Egges; Patrick Gebhard; Hazaël Jones; Magalie Ochs; Catherine Pelachaud; Kaska Porayska-Pomsta; Paola Rizzo; Nicolas Sabouret
The TARDIS project aims to build a scenario-based serious-game simulation platform for NEETs and job-inclusion associations that supports social training and coaching in the context of job interviews. This paper presents the general architecture of the TARDIS job interview simulator, and the serious game paradigm that we are developing.
IEEE Computer Graphics and Applications | 2009
Christopher Peters; Cathy Ennis
In a proposed methodology for modeling dynamic crowd scenarios, a video corpus informs the modeling process, after which the resultant animations undergo perception-based evaluation. The aim is to improve the crowds' visual plausibility rather than the simulation's correctness. A real-life crowd animation system demonstrates the methodology's practical application.
Tests and Proofs | 2009
Rachel McDonnell; Cathy Ennis; Simon Dobbyn; Carol O'Sullivan
In this article, we investigate human sensitivity to the coordination and timing of conversational body language for virtual characters. First, we captured the full body motions (excluding faces and hands) of three actors conversing about a range of topics, in either a polite (i.e., one person talking at a time) or debate/argument style. Stimuli were then created by applying the motion-captured conversations from the actors to virtual characters. In a 2AFC experiment, participants viewed paired sequences of synchronized and desynchronized conversations and were asked to guess which was the real one. Detection performance was above chance for both conversation styles but more so for the polite conversations, where desynchronization was more noticeable.
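As background: "above chance" in a 2AFC task like this is typically established with a binomial test against the 50% guessing rate. A minimal sketch using hypothetical counts (illustrative only, not the paper's data):

```python
from math import comb

def binom_p_greater(k: int, n: int, p: float = 0.5) -> float:
    """Exact one-sided binomial p-value: P(X >= k) when each of n
    independent trials succeeds with probability p (chance rate)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts for illustration: a participant picks the real
# (synchronized) conversation on 68 of 100 2AFC trials; chance is 0.5.
correct, trials = 68, 100
pval = binom_p_greater(correct, trials)
print(f"{correct}/{trials} correct, one-sided p = {pval:.5f}")
```

A small p-value here would indicate detection reliably above the guessing rate, matching the kind of above-chance result the abstract reports.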
Computer Animation and Virtual Worlds | 2012
Cathy Ennis; Carol O'Sullivan
Recent progress in real-time simulations has led to a higher demand for believability from virtual characters. Background characters are becoming a more integral part of games, with emphasis being placed in particular on interactions between them. Conversing groups can play a significant role in adding plausibility, or a sense of presence, to a real-time simulation. However, it is not obvious how best to generate and vary these kinds of groups. In this paper, using anthropological standards for interacting distances and formations, we conduct a series of experiments to examine how these parameters inherent in human conversation are perceived for virtual characters. Our results show that, although participants were sensitive to both distance and orientation changes between talkers and listeners in a virtual conversation, they were not as sensitive to anomalous gesturing behaviours across different distances.
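The "anthropological standards for interacting distances" mentioned here are commonly operationalised as Hall's proxemic zones. A sketch of how a conversational-group generator might classify spacing (the thresholds are Hall's commonly cited values in metres, not parameters taken from this paper):

```python
def proxemic_zone(distance_m: float) -> str:
    """Classify an interpersonal distance into Hall's proxemic zones.

    Thresholds (0.45 m, 1.2 m, 3.6 m) are Hall's widely cited values
    and are illustrative; the paper's exact parameters may differ.
    """
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

# Typical talker-listener spacing in a conversational group falls in
# the personal/social range:
print(proxemic_zone(0.9))
```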
Eurographics | 2008
Christopher E. Peters; Cathy Ennis; Rachel McDonnell; Carol O'Sullivan
We describe a work-in-progress evaluating the plausibility of pedestrian orientations. While many studies have focused on creating accurate or fast crowd simulation models for populating virtual cities or other environments, little is known about how humans perceive the characteristics of generated scenes. Our initial study, reported here, consists of an evaluation based on static imagery reconstructed from annotated photographs, where the orientations of individuals have been modified. An important focus in our research is the consideration of the effects of the context of the scene on the evaluation, in terms of nearby individuals, objects and the constraints of the walking zone. This work could prove significant for improving and informing the creation of computer graphics pedestrian models. Our particular aim is to inform level-of-detail models.
Motion in Games | 2013
Cathy Ennis; Ludovic Hoyet; Arjan Egges; Rachel McDonnell
It has been shown that humans are sensitive to the portrayal of emotions for virtual characters. However, previous work in this area has often examined this sensitivity using extreme examples of facial or body animation. Less is known about how attuned people are at recognizing emotions as they are expressed during conversational communication. In order to determine whether body or facial motion is a better indicator for emotional expression for game characters, we conduct a perceptual experiment using synchronized full-body and facial motion-capture data. We find that people can recognize emotions from either modality alone, but combining facial and body motion is preferable in order to create more expressive characters.
Applied Perception in Graphics and Visualization | 2008
Cathy Ennis; Christopher Peters; Carol O'Sullivan
In this paper, we evaluate the effects of position and orientation on the plausibility of pedestrian formations. In a perceptual study we investigated how humans perceive characteristics of virtual crowds in static scenes reconstructed from annotated still images where the orientations and positions of the individuals have been modified. We found that by applying rules based on the contextual information of the scene, such as the type of scene being portrayed, the presence of nearby individuals and objects and the constraints of the walking areas in the scene, we improved the perceived realism of the crowd formations. Results from this study can help in the creation of virtual crowds, such as computer graphics pedestrian models or architectural scenes.
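A contextual orientation rule of the kind described, where an individual's facing depends on nearby individuals, can be sketched as follows. The rule, the function names, and the radius are illustrative assumptions, not the paper's actual model:

```python
import math

def face_target(agent_xy, target_xy):
    """Heading (radians) that orients an agent toward a target point."""
    dx = target_xy[0] - agent_xy[0]
    dy = target_xy[1] - agent_xy[1]
    return math.atan2(dy, dx)

def contextual_orientation(agent_xy, walk_heading, others, join_radius=2.0):
    """Illustrative rule: if other pedestrians are within join_radius,
    face their centroid (a conversational formation); otherwise keep
    the walking heading. join_radius is an assumed parameter."""
    near = [p for p in others if math.dist(agent_xy, p) <= join_radius]
    if not near:
        return walk_heading
    cx = sum(p[0] for p in near) / len(near)
    cy = sum(p[1] for p in near) / len(near)
    return face_target(agent_xy, (cx, cy))
```

Rules like this tie an individual's orientation to scene context (nearby people, objects, walkable areas), which is the kind of constraint the study found improved perceived realism.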
International Conference on Games and Virtual Worlds for Serious Applications | 2011
Carol O'Sullivan; Cathy Ennis
Creating realistic populated virtual environments is a challenge that many graphics and VR researchers are currently tackling. There are many interesting problems to solve, such as rendering and animating large and varied crowds efficiently and realistically in believable surroundings, and creating plausible behaviours and sounds for the individual inhabitants and their environment. This is the challenge that we are addressing in the Metropolis project, where our aim is to create the sights and sounds of a convincing crowd of humans and traffic in a complex cityscape. Exploring the perception of virtual humans and crowds is also integral to our approach, through psychophysical experiments with human participants.
Motion in Games | 2012
Cathy Ennis; Arjan Egges
Virtual characters are a common phenomenon in serious game applications, and can enrich training environments for a range of different purposes. These characters can be used in games that have been developed to help people with learning difficulties. They can also be used to help users develop social skills, such as communication. For social interactions, much communicative information is contained in the body language between the parties involved. We know that humans are sensitive to emotions when they are conveyed on a virtual character and are capable of correctly identifying certain emotions. However, research on emotions and virtual characters tends to focus on a small number of emotions. We wish to create characters for a serious game who will convey a wide range of complex and subtle emotions. This paper presents a first investigation into the use of complex emotional body language for a virtual character. In two experiments, we examine participants' perception of a range of motion-captured subtle emotions. Results from a pilot show that participants are better able to recognise complex emotions with negative connotations than those with positive connotations from a virtual character's body motion. A second experiment aims to identify perceptual overlaps in these emotions, and its results motivate further investigation.
ACM Symposium on Applied Perception | 2013
Jurgis Pamerneckas; Cathy Ennis; Arjan Egges
This abstract discusses a perceptual study investigating the perception of emotion in conversational body language. Specifically, we wished to determine the parts of the body most important for the identification of emotional expressions. We conducted a perceptual study using motion-captured clips of an actor conducting a number of emotional conversations, exhibiting a set of emotions with negative connotations. These motions were represented on a virtual character, showing either the full body, or with the motion of the arms, legs or head removed. Participants were then asked to indicate the level of presence of both the correct emotion and a corresponding positive pair (e.g., relaxed/stressed). We found that participants were able to identify the emotions correctly, but depended strongly on the motions of the arms when doing so.