Carol O’Sullivan
Trinity College, Dublin
Publications
Featured research published by Carol O’Sullivan.
Experimental Brain Research | 2010
Joanna E. McHugh; Rachel McDonnell; Carol O’Sullivan; Fiona N. Newell
Although the perception of emotion in individuals is an important social skill, very little is known about how emotion is determined from a crowd of individuals. We investigated the perception of emotion in scenes of crowds populated by dynamic characters each expressing an emotion. Facial expressions were masked in these characters and emotion was conveyed using body motion and posture only. We systematically varied the proportion of characters in each scene depicting one of two emotions and participants were required to categorise the overall emotion of the crowd. In Experiment 1, we found that the perception of emotions in a crowd is efficient even with relatively brief exposures of the crowd stimuli. Furthermore, the emotion of a crowd was generally determined by the relative proportions of characters conveying it, although we also found that some emotions dominated perception. In Experiment 2, we found that an increase in crowd size was not associated with a relative decrease in the efficiency with which the emotion was categorised. Our findings suggest that body motion is an important social cue in perceiving the emotion of crowds and have implications for our understanding of how we perceive social information from groups.
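As a worked illustration of the proportion-based categorisation analysis described above, the sketch below fits a psychometric (cumulative logistic) function to invented response data; a point of subjective equality below 0.5 would correspond to one emotion dominating perception, as the paper reports. All numbers are hypothetical, not the paper’s data.

```python
# Hypothetical sketch: fitting a psychometric function to crowd-emotion
# categorisation data of the kind described above. All values invented.
import numpy as np
from scipy.optimize import curve_fit

# Proportion of characters in the crowd displaying emotion A
mix = np.array([0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0])
# Invented proportion of trials on which observers categorised the
# crowd's overall emotion as A
p_resp = np.array([0.05, 0.18, 0.42, 0.61, 0.78, 0.93, 0.98])

def logistic(x, mu, sigma):
    """Cumulative logistic: mu is the point of subjective equality,
    sigma the slope parameter."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

(mu, sigma), _ = curve_fit(logistic, mix, p_resp, p0=[0.5, 0.1])
# mu < 0.5 would indicate that emotion A dominates perception: fewer
# than half the characters need to express it for the crowd as a whole
# to be categorised as A.
print(f"PSE = {mu:.2f}, slope = {sigma:.2f}")
```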
Psychological Research-psychologische Forschung | 2018
Niamh A. Merriman; Jan Ondřej; Alicia Rybicki; Eugenie Roudaia; Carol O’Sullivan; Fiona N. Newell
Previous studies have reported an age-related decline in spatial abilities. However, little is known about whether the presence of other, task-irrelevant stimuli during learning further affects spatial cognition in older adults. Here we populated virtual environments with moving crowds of virtual human pedestrians (Experiment 1) or objects (Experiment 2) whilst participants learned a route and the landmarks embedded along that route. In subsequent test trials we presented clips from the learned route and measured spatial memory using three different tasks: a route direction task (i.e. whether the video clip shown was a repetition or retracing of the learned route); an intersection direction task; and a landmark sequence task (identifying the next landmark encountered). In both experiments, spatial memory was tested in two separate sessions: first following learning of an empty maze environment and second using a different, populated maze. Older adults performed worse than younger adults in all tasks. Moreover, in older adults but not in younger adults, the presence of crowds during learning resulted in a cost to performance on the spatial tasks relative to the ‘no crowds’ condition. In contrast, crowd distractors did not affect performance on the landmark sequence task. There was no age-related cost on performance with object distractors. These results suggest that crowds of human pedestrians selectively capture older adults’ attention during learning. These findings offer further insights into how spatial memory is affected by the ageing process, particularly in scenarios representative of real-world situations.
European Conference on Computer Vision | 2016
He Wang; Carol O’Sullivan
Automatically recognizing activities in video is a classic problem in vision that helps us understand behaviors, describe scenes and detect anomalies. We propose an unsupervised method for these purposes. Given video data, we discover recurring activity patterns that appear, peak, wane and disappear over time. Using non-parametric Bayesian methods, we learn coupled spatial and temporal patterns with minimal prior knowledge. To model the temporal changes of patterns, previous works compute Markovian progressions or locally continuous motifs, whereas we model time in a globally continuous and non-Markovian way. Visually, the patterns depict flows of major activities. Temporally, each pattern has its own unique appearance-disappearance cycles. To compute compact pattern representations, we also propose a hybrid sampling method. By combining these patterns with detailed environment information, we interpret the semantics of activities and report anomalies. Our method also fits the data better and detects anomalies that were previously difficult to detect.
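The sketch below illustrates only the general idea of discovering recurring activity patterns from quantised motion “words”, using a parametric topic model (LDA) as a simplified stand-in; the paper itself uses non-parametric Bayesian inference with globally continuous, non-Markovian time, which this sketch does not reproduce. All data and parameters are invented.

```python
# Simplified stand-in, not the authors' method: discover recurring
# activity patterns from quantised motion "words" with a topic model.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Invented data: each row is a short clip, each column a quantised
# (grid cell, flow direction) motion word; entries are word counts.
n_clips, n_words = 200, 64
doc_word = rng.poisson(1.0, size=(n_clips, n_words))

# The number of patterns (5) is fixed here, whereas a non-parametric
# model would infer it from the data.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
clip_mix = lda.fit_transform(doc_word)            # per-clip pattern weights
clip_mix = clip_mix / clip_mix.sum(axis=1, keepdims=True)

# Approximate per-clip log-likelihood for anomaly flagging: clips that
# the learned patterns explain poorly receive low scores.
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
expected = clip_mix @ topic_word                  # per-clip word distribution
loglik = (doc_word * np.log(expected + 1e-12)).sum(axis=1)
anomalies = np.argsort(loglik)[:5]                # five least-explained clips
print("candidate anomalies:", anomalies)
```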
ACM Transactions on Applied Perception | 2015
Hanni Kiiski; Ludovic Hoyet; Andy T. Woods; Carol O’Sullivan; Fiona N. Newell
A better understanding of how intentions and traits are perceived from body movements is required for the design of more effective virtual characters that behave in a socially realistic manner. For this purpose, realistic body motion, captured from human movements, is being used more frequently for creating characters with natural animations in games and entertainment. However, it is not always clear to programmers and designers which motion parameters best convey specific information such as certain emotions, intentions, or traits. We conducted two experiments to investigate whether the perceived traits of actors could be determined from their body motion, and whether these traits were associated with their perceived intentions. We first recorded body motions from 26 professional actors, who were instructed to move in a “hero”-like or a “villain”-like manner. In the first experiment, 190 participants viewed individual video recordings of these actors and were required to rate the body motion stimuli along a series of cognitive dimensions (intentions, attractiveness, dominance, trustworthiness, and distinctiveness). Ratings were highly consistent across observers, suggesting that social traits are readily determined from body motion. Moreover, correlational analyses between these ratings revealed consistent associations across traits, for example, that perceived “good” intentions were associated with higher ratings of attractiveness and dominance. Experiment 2 was designed to elucidate the qualitative body motion cues that were critical for determining specific intentions and traits from the hero- and villain-like body movements. The results revealed distinct body motions that were readily associated with the perception of either “good” or “bad” intentions. Moreover, regression analyses revealed that these ratings accurately predicted the perception of the portrayed character type. These findings indicate that intentions and social traits are communicated effectively via specific sets of body motion features. Furthermore, these results have important implications for the design of the motion of virtual characters to convey desired social information.
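As an illustration of the kind of regression analysis mentioned above, the sketch below fits a logistic regression predicting portrayed character type from trait ratings. The data and effect structure are invented; only the dimension names come from the abstract.

```python
# Illustrative sketch only: a regression of the kind that could relate
# trait ratings to portrayed character type ("hero" vs "villain").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200  # invented number of rating vectors
# Columns: intentions, attractiveness, dominance, trustworthiness,
# distinctiveness (the dimensions named in the abstract)
ratings = rng.normal(size=(n, 5))
# Invented ground truth: heroes receive higher intention and
# attractiveness ratings, plus noise
is_hero = (ratings[:, 0] + 0.5 * ratings[:, 1]
           + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(ratings, is_hero)
print("coefficients:", model.coef_)  # which ratings predict character type
print("accuracy:", model.score(ratings, is_hero))
```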
Journal on Multimodal User Interfaces | 2017
Marine Taffou; Jan Ondřej; Carol O’Sullivan; Olivier Warusfel; Isabelle Viaud-Delmon
Judging the size of a group of people is an everyday task on which many decisions are based. In the present study, we investigated whether judgments of the size of different groups of people depended on whether they were presented through the auditory channel, through the visual channel, or through both auditory and visual channels. Groups of humanoids of different sizes (from 8 to 128) were presented within a virtual environment to healthy participants. They had to judge whether there were a lot of people in each group and to rate their discomfort in relation to the stimuli using Subjective Units of Distress. Our groups of 96 and 128 virtual humans were judged as crowds regardless of their sensory presentation. The sensory presentation influenced participants’ judgment of virtual human group sizes ranging from 8 to 48. Moreover, while the quantity judgments in the auditory condition increased linearly with the group size, participants judged the quantity of people in a logarithmic manner in the two other sensory conditions. These results suggest that quantity judgment based on auditory information in a realistic context may often involve implicit arithmetic. Even though our participants were not phobic of crowds, our findings are of interest for the field of virtual reality-based therapy for diverse disorders because they indicate that quantity judgment can potentially be altered in a sensory-specific manner in patients with fear of crowds.
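The linear-versus-logarithmic comparison reported above can be made concrete with a small model-fitting sketch. The endpoints of the size range (8 to 128) match the stimuli, but the intermediate sizes and judgment values below are invented for illustration.

```python
# Worked sketch of a linear-vs-logarithmic model comparison on
# invented quantity-judgment data.
import numpy as np

sizes = np.array([8, 16, 24, 48, 96, 128], dtype=float)
# Invented mean quantity judgments for one sensory condition
judgments = np.array([1.9, 2.6, 3.0, 3.7, 4.4, 4.7])

# Least-squares fits of a linear and a logarithmic model
lin = np.polyfit(sizes, judgments, 1)
log = np.polyfit(np.log(sizes), judgments, 1)

sse_lin = np.sum((np.polyval(lin, sizes) - judgments) ** 2)
sse_log = np.sum((np.polyval(log, np.log(sizes)) - judgments) ** 2)
# A smaller residual for the log model would mirror the visual and
# audio-visual conditions; the auditory condition favoured the linear fit.
print(f"linear SSE = {sse_lin:.3f}, log SSE = {sse_log:.3f}")
```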
ACM Transactions on Applied Perception | 2017
Simon Alexanderson; Carol O’Sullivan; Michael Neff; Jonas Beskow
Unlike their human counterparts, artificial agents such as robots and game characters may be deployed with a large variety of face and body configurations. Some have articulated bodies but lack facial features, and others may be talking heads ending at the neck. Generally, they have many fewer degrees of freedom than humans through which they must express themselves, and there will inevitably be a filtering effect when mapping human motion onto the agent. In this article, we investigate filtering effects on three types of embodiments: (a) an agent with a body but no facial features, (b) an agent with a head only, and (c) an agent with a body and a face. We performed a full performance capture of a mime actor enacting short interactions, varying the non-verbal expression along five dimensions (e.g., level of frustration and level of certainty), for each of the three embodiments. We then performed a crowd-sourced evaluation experiment comparing the video of the actor to the video of an animated robot for the different embodiments and dimensions. Our findings suggest that the face is especially important for pinpointing emotional reactions but is also the most susceptible to filtering effects. The body motion, on the other hand, gave rise to more diverse interpretations but tended to preserve its interpretation after mapping, and thus proved to be more resilient to filtering.
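A minimal sketch of the filtering effect discussed above: mapping a captured performance onto an embodiment that exposes only a subset of the recorded degrees of freedom, so the remaining channels are necessarily discarded. The channel names and the head-only channel set are invented, not taken from the study.

```python
# Hypothetical sketch: "filtering" a full performance capture down to
# the degrees of freedom an embodiment can actually express.
import numpy as np

# Captured performance: T frames per named channel of joint/face data
channels = ["head_pitch", "head_yaw", "jaw_open", "brow_raise",
            "spine_bend", "arm_l_swing", "arm_r_swing"]
T = 120
rng = np.random.default_rng(2)
performance = {c: rng.normal(size=T) for c in channels}

# Embodiment (b) above: a head-only agent keeps head and face channels
# and necessarily discards the body channels (invented channel set).
HEAD_ONLY = {"head_pitch", "head_yaw", "jaw_open", "brow_raise"}

def retarget(perf, available):
    """Keep only the channels the embodiment can express; everything
    else is filtered out, which is where expressive content is lost."""
    return {c: v for c, v in perf.items() if c in available}

filtered = retarget(performance, HEAD_ONLY)
dropped = set(performance) - set(filtered)
print("channels lost in mapping:", sorted(dropped))
```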
Psychological Research-psychologische Forschung | 2017
Marine Taffou; Jan Ondřej; Carol O’Sullivan; Olivier Warusfel; Stéphanie Dubal; Isabelle Viaud-Delmon
Affect, space, and multisensory integration are processes that are closely linked. However, it is unclear whether the spatial location of emotional stimuli interacts with multisensory presentation to influence the emotional experience they induce in the perceiver. In this study, we used the unique advantages of virtual reality techniques to present potentially aversive crowd stimuli embedded in a natural context and to control their display in terms of sensory and spatial presentation. Individuals high in crowdphobic fear navigated in an auditory–visual virtual environment, in which they encountered virtual crowds presented through the visual channel, the auditory channel, or both. They reported the intensity of their negative emotional experience at a far distance and at a close distance from the crowd stimuli. Auditory–visual presentation amplified negative feelings for close feared stimuli but not for distant ones. This suggests that spatial closeness allows multisensory processes to modulate the intensity of the emotional experience induced by aversive stimuli. Nevertheless, the specific role of auditory stimulation must be investigated to better understand this interaction between multisensory, affective, and spatial representation processes. This phenomenon may serve the implementation of defensive behaviors in response to aversive stimuli that are in a position to threaten an individual’s feeling of security.
ACM Transactions on Applied Perception | 2016
Jan Ondřej; Cathy Ennis; Niamh A. Merriman; Carol O’Sullivan
It is common practice in movies and games to use different actors for the voice and body/face motion of a virtual character. What effect does the combination of these different modalities have on the perception of the viewer? In this article, we conduct a series of experiments to evaluate the distinctiveness and attractiveness of human motions (face and body) and voices. We also create combination characters called FrankenFolks, where we mix and match the voice, body motion, face motion, and avatar of different actors and ask which modality is most dominant when determining distinctiveness and attractiveness or whether the effects are cumulative.
Archive | 1999
David Meaney; Carol O’Sullivan
Computer-generated graphical scenes benefit greatly from the inclusion of accurately rendered shadows. Shadows contribute to the realism of a scene, and also provide important information relating to the relative position of objects within a scene. However, shadow generation imposes a significant penalty in terms of the time required to render a scene, especially as the complexity of the scene and the number of polygons needed increase. For this reason, real-time scene generation would benefit from the use of a heuristic approach to the determination of shadow areas. In this paper, we introduce a number of heuristics that may be employed to facilitate real-time animation of objects with shadows at acceptable frame rates. We also present an application designed to investigate the feasibility of rendering shadows at varying levels of detail.
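The sketch below shows one plausible heuristic from this family, not the paper’s implementation: selecting a shadow level of detail per object from its distance and projected screen coverage. The thresholds and technique names are invented for illustration.

```python
# Illustrative sketch: a level-of-detail heuristic for shadows, with
# invented thresholds. Cheaper techniques go to objects that
# contribute little to the image.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    distance: float         # distance from the camera, in scene units
    screen_coverage: float  # fraction of the screen the object covers

def shadow_lod(obj: SceneObject) -> str:
    """Pick a shadow technique: accurate geometry only where it pays off."""
    if obj.screen_coverage < 0.01 or obj.distance > 100.0:
        return "none"             # shadow omitted entirely
    if obj.screen_coverage < 0.05:
        return "blob"             # simple projected dark spot
    if obj.distance > 20.0:
        return "low_poly_volume"  # shadow cast from a simplified mesh
    return "full_volume"          # full-detail shadow geometry

for o in [SceneObject("tree", 150.0, 0.004),
          SceneObject("npc", 30.0, 0.03),
          SceneObject("hero", 5.0, 0.20)]:
    print(o.name, "->", shadow_lod(o))
```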
Multisensory Research | 2013
Hanni Kiiski; Ludovic Hoyet; Katja Zibrek; Carol O’Sullivan; Fiona N. Newell
Although humans can infer other people’s intentions from their visual actions (Blakemore and Decety, 2001), it is not well understood how auditory information can influence this process. We investigated whether auditory emotional information can influence the perceived intention of another from their visual body motion. Participants viewed a set of videos that presented point-light displays (PLDs) of 10 actors (5 male) who were asked to portray a ‘hero’ or a ‘villain’ character. Based on a 2-AFC design, participants categorised each visual character as having ‘good’ or ‘bad’ intentions. Response accuracy and speed were recorded. Performance on visual-only trials exceeded chance, suggesting that participants were efficient at judging intentions from PLDs. We then paired auditory vocal stimuli associated with either positive (happy) or negative (angry) emotions with each of the PLDs. The auditory stimuli were taken from Belin et al. (2008) and consisted of nonverbal bursts (‘ah’) recorded from 10 actors (5 male). Each vocalisation was randomly paired with a sex-matched PLD (60 PLD-voice combinations). We found that both the categorisation responses and the speed of those responses were affected by the inclusion of the auditory stimuli. Specifically, reaction times were facilitated when the auditory emotion (positive or negative) matched the perceived intentions (good or bad, respectively) relative to unisensory conditions. Our findings suggest important interactions between audition and visual actions in perceiving intentions in others and are consistent with previous findings of audio-visual interactions in action-specific visual regions of the brain (e.g., Barraclough et al., 2005).
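A short sketch of the congruency analysis described above, on invented reaction times: matched audio-visual trials should be faster than visual-only trials, which a two-sample t-test can check.

```python
# Sketch of a congruency analysis on invented reaction-time data:
# emotion-matched audio-visual trials vs visual-only trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt_congruent = rng.normal(620, 80, 60)    # ms per trial, invented
rt_visual_only = rng.normal(680, 80, 60)  # ms per trial, invented

t, p = stats.ttest_ind(rt_congruent, rt_visual_only)
# A significant negative t would indicate the facilitation effect:
# congruent voices speed up intention judgments.
print(f"congruent vs visual-only: t = {t:.2f}, p = {p:.4f}")
```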