Carol O'Sullivan
Trinity College, Dublin
Publications
Featured research published by Carol O'Sullivan.
ACM Transactions on Graphics | 2008
Ladislav Kavan; Steven Collins; Jiří Žára; Carol O'Sullivan
Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.
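The core operation described here, blending rigid bone transforms as dual quaternions rather than as matrices, can be sketched compactly. The following is a minimal, illustrative Python version of dual quaternion linear blending; the quaternion layout, helper names and toy example are assumptions made for illustration and are not the authors' code.

```python
# A minimal sketch of dual quaternion linear blending (DLB).
# Layout assumption: quaternions stored as (w, x, y, z).
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def bone_to_dual_quat(rot_q, t):
    """Encode a rigid transform (unit rotation quaternion, translation) as a dual quaternion (q0, q_eps)."""
    q0 = rot_q / np.linalg.norm(rot_q)
    t_quat = np.array([0.0, *t])
    q_eps = 0.5 * quat_mul(t_quat, q0)
    return q0, q_eps

def dqs_vertex(v, bones, weights):
    """Skin one vertex: blend the bones' dual quaternions linearly, normalise, then apply the result."""
    b0, be = np.zeros(4), np.zeros(4)
    pivot = bones[0][0]
    for (q0, qe), w in zip(bones, weights):
        if np.dot(q0, pivot) < 0.0:      # antipodality: q and -q encode the same rotation
            q0, qe = -q0, -qe
        b0 += w * q0
        be += w * qe
    norm = np.linalg.norm(b0)
    b0, be = b0 / norm, be / norm
    w, x, y, z = b0                      # rotation part of the blended transform
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    conj = b0 * np.array([1.0, -1.0, -1.0, -1.0])
    t = 2.0 * quat_mul(be, conj)[1:]     # translation part: vector of 2 * q_eps * conj(q0)
    return R @ v + t

# Toy example: a vertex weighted equally between an identity bone and a bone
# rotated 90 degrees about Z and translated along X.
bone_a = bone_to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), [0.0, 0.0, 0.0])
bone_b = bone_to_dual_quat(np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]), [1.0, 0.0, 0.0])
print(dqs_vertex(np.array([1.0, 0.0, 0.0]), [bone_a, bone_b], [0.5, 0.5]))
```

Because the blended dual quaternion is renormalised, the resulting transform stays rigid, which is why the volume-loss artifacts of matrix blending do not appear.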
ACM Transactions on Graphics | 2004
Gareth Bradshaw; Carol O'Sullivan
Hierarchical object representations play an important role in performing efficient collision handling. Many different geometric primitives have been used to construct these representations, which allow areas of interaction to be localized quickly. For time-critical algorithms, there are distinct advantages to using hierarchies of spheres, known as sphere-trees, for object representation. This article presents a novel algorithm for the construction of sphere-trees. The algorithm presented approximates objects, both convex and non-convex, with a higher degree of fit than existing algorithms. In the lower levels of the representations, there is almost an order of magnitude decrease in the number of spheres required to represent the objects to a given accuracy.
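As a rough illustration of the data structure involved, here is a minimal top-down sphere-tree build over a point sampling of an object. The node layout and splitting rule are assumptions made for illustration; the paper's construction, which achieves the tighter fits described above, is considerably more sophisticated.

```python
# A simple top-down sphere-tree sketch: split points along the longest axis,
# bound each subset with a (loose) sphere, and recurse.
import numpy as np

class SphereNode:
    def __init__(self, center, radius, children=()):
        self.center, self.radius, self.children = center, radius, list(children)

def bounding_sphere(points):
    """Loose bounding sphere: centroid plus maximum distance (not minimal)."""
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

def build_sphere_tree(points, depth=0, max_depth=4, min_points=8):
    center, radius = bounding_sphere(points)
    node = SphereNode(center, radius)
    if depth < max_depth and len(points) > min_points:
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))
        median = np.median(points[:, axis])
        left, right = points[points[:, axis] <= median], points[points[:, axis] > median]
        if len(left) and len(right):
            node.children = [build_sphere_tree(left, depth + 1, max_depth, min_points),
                             build_sphere_tree(right, depth + 1, max_depth, min_points)]
    return node

# Toy usage: build a tree over random samples of a unit cube.
rng = np.random.default_rng(0)
tree = build_sphere_tree(rng.uniform(-0.5, 0.5, size=(1000, 3)))
print(tree.radius, len(tree.children))
```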
Interactive 3D Graphics and Games | 2007
Ladislav Kavan; Steven Collins; Jiří Žára; Carol O'Sullivan
Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this paper, we present a novel GPU-friendly skinning algorithm based on dual quaternions. We show that this approach solves the artifacts of linear blend skinning at minimal additional cost. Upgrading an existing animation system (e.g., in a videogame) from linear to dual quaternion skinning is very easy and has negligible impact on run-time performance.
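For contrast with the dual quaternion sketch above, here is a minimal version of the linear blend skinning baseline the paper improves upon: bone matrices are averaged per vertex, and because the blended matrix is generally not a rigid transform, joints can collapse (the familiar "candy-wrapper" artifact). The matrix layout and toy example are illustrative assumptions, not the paper's code.

```python
# Minimal linear blend skinning (LBS): per-vertex weighted average of 4x4 bone matrices.
import numpy as np

def lbs_vertex(v, bone_matrices, weights):
    """Skin one vertex by linearly blending homogeneous bone transforms."""
    blended = sum(w * M for w, M in zip(weights, bone_matrices))
    return (blended @ np.append(v, 1.0))[:3]

def rot_z(angle, tx=0.0):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0, tx],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

# Blending identity with a 180-degree twist: the averaged matrix degenerates,
# so the vertex collapses toward the joint axis instead of rotating halfway.
print(lbs_vertex(np.array([0.0, 1.0, 0.0]), [np.eye(4), rot_z(np.pi)], [0.5, 0.5]))
```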
International Conference on Computer Graphics and Interactive Techniques | 2008
Rachel McDonnell; Michéal Larkin; Simon Dobbyn; Steven Collins; Carol O'Sullivan
When simulating large crowds, it is inevitable that the models and motions of many virtual characters will be cloned. However, the perceptual impact of this trade-off has never been studied. In this paper, we consider the ways in which an impression of variety can be created and the perceptual consequences of certain design choices. In a series of experiments designed to test people's perception of variety in crowds, we found that clones of appearance are far easier to detect than motion clones. Furthermore, we established that cloned models can be masked by color variation, random orientation, and motion. Conversely, the perception of cloned motions remains unaffected by the model on which they are displayed. Other factors that influence the ability to detect clones were examined, such as proximity, model type and characteristic motion. Our results provide novel insights and useful thresholds that will assist in creating more realistic, heterogeneous crowds.
ACM Transactions on Graphics | 2001
Carol O'Sullivan; John Dingliana
Level of Detail (LOD) techniques for real-time rendering and related perceptual issues have received a lot of attention in recent years. Researchers have also begun to look at the issue of perceptually adaptive techniques for plausible physical simulations. In this article, we are particularly interested in the problem of realistic collision simulation in scenes where large numbers of objects are colliding and processing must occur in real-time. An interruptible and therefore degradable collision-handling mechanism is used and the perceptual impact of this degradation is explored. We look for ways in which we can optimize the realism of such simulations and describe a series of psychophysical experiments that investigate different factors affecting collision perception, including eccentricity, separation, distractors, causality, and accuracy of physical response. Finally, strategies for incorporating these factors into a perceptually adaptive real-time simulation of large numbers of visually similar objects are presented.
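The "interruptible and therefore degradable" collision handling mentioned here can be sketched as a time-budgeted traversal of bounding-volume (e.g., sphere-tree) pairs: refinement simply stops when the frame's budget is spent, and contacts are reported at whatever level has been reached. The node layout, overlap test and scheduling below are assumptions for illustration; the actual system couples this idea with the perceptual factors studied in the paper.

```python
# Sketch of an interruptible, time-budgeted sphere-tree collision query.
import time
from collections import deque
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Sphere:
    center: np.ndarray
    radius: float
    children: List["Sphere"] = field(default_factory=list)

def spheres_overlap(a, b):
    return np.linalg.norm(a.center - b.center) <= a.radius + b.radius

def interruptible_contacts(root_a, root_b, budget_seconds):
    deadline = time.perf_counter() + budget_seconds
    queue = deque([(root_a, root_b)])
    contacts = []
    while queue:
        a, b = queue.popleft()
        if not spheres_overlap(a, b):
            continue
        # Out of time, or both nodes are leaves: report at the current (possibly coarse) level.
        if time.perf_counter() >= deadline or (not a.children and not b.children):
            contacts.append((a, b))
            continue
        for ca in (a.children or [a]):
            for cb in (b.children or [b]):
                queue.append((ca, cb))
    return contacts

# Toy usage: two overlapping root spheres with two children each.
a = Sphere(np.zeros(3), 1.0, [Sphere(np.array([-0.5, 0.0, 0.0]), 0.5),
                              Sphere(np.array([0.5, 0.0, 0.0]), 0.5)])
b = Sphere(np.array([1.5, 0.0, 0.0]), 1.0, [Sphere(np.array([1.0, 0.0, 0.0]), 0.5),
                                            Sphere(np.array([2.0, 0.0, 0.0]), 0.5)])
print(len(interruptible_contacts(a, b, budget_seconds=0.001)))
```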
Workshop on Program Comprehension | 2003
Christopher Peters; Carol O'Sullivan
We present a system for the automatic generation of bottom-up visual attention behaviours in virtual humans. Bottom-up attention refers to the way in which the environment solicits one's attention without regard to task-level goals. Our framework is based on the interactions of multiple components: a synthetic vision system for perceiving the virtual world, a model of bottom-up attention for early visual processing of perceived stimuli, a memory system for the storage of previously sensed data, and a gaze controller for the generation of resultant behaviours. Our aim is to provide a feeling of presence in inhabited virtual environments by endowing agents with the ability to pay attention to their surroundings.
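A minimal sketch of the loop such a framework implies: score perceived stimuli by saliency, apply inhibition of return via a simple memory trace, and hand the winner to the gaze controller. The saliency values, decay constant and names below are assumptions for illustration, not the paper's model.

```python
# Toy bottom-up attention loop: saliency minus inhibition picks the gaze target.
class AttentionAgent:
    def __init__(self, inhibition_decay=0.9):
        self.memory = {}                       # object id -> inhibition level
        self.inhibition_decay = inhibition_decay

    def update(self, percepts):
        """percepts: {object_id: raw_saliency}. Returns the chosen gaze target."""
        for obj in self.memory:                # habituation wears off over time
            self.memory[obj] *= self.inhibition_decay
        scores = {obj: s - self.memory.get(obj, 0.0) for obj, s in percepts.items()}
        target = max(scores, key=scores.get)
        self.memory[target] = self.memory.get(target, 0.0) + 1.0   # inhibition of return
        return target

agent = AttentionAgent()
for frame in range(3):
    print(agent.update({"door": 0.4, "moving_car": 0.9, "poster": 0.2}))
```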
Computer Graphics Forum | 2010
Ladislav Kavan; Peter-Pike J. Sloan; Carol O'Sullivan
Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre‐processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre‐processing time than previous methods.
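The alternating ("coordinate descent") structure referred to here can be sketched as two interleaved least-squares solves: with weights fixed, each frame's bone transforms are linear unknowns; with transforms fixed, each vertex's weights are. The toy version below fits unconstrained affine bones and crudely clamps weights, and it omits the paper's dimensionality reduction, sparsity handling and rigidity considerations; all names are assumptions made for illustration.

```python
# Toy alternating least-squares fit of linear skinning to a vertex animation.
import numpy as np

def solve_transforms(rest, targets, weights):
    """For one frame: solve a 4x3 affine matrix per bone by least squares."""
    n, b = weights.shape
    rest_h = np.hstack([rest, np.ones((n, 1))])                   # homogeneous rest positions
    A = np.hstack([weights[:, [j]] * rest_h for j in range(b)])   # n x (4b) design matrix
    T, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return T.reshape(b, 4, 3)

def solve_weights(rest, frames, transforms, n_bones):
    """For each vertex: least-squares weights over all bones, clamped and renormalised
    (a crude stand-in for the constrained, sparse solve used in practice)."""
    n = rest.shape[0]
    rest_h = np.hstack([rest, np.ones((n, 1))])
    weights = np.zeros((n, n_bones))
    for i in range(n):
        A = np.vstack([np.stack([rest_h[i] @ T[j] for j in range(n_bones)], axis=1)
                       for T in transforms])                      # (3*frames) x n_bones
        y = np.concatenate([f[i] for f in frames])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        w = np.clip(w, 0.0, None)
        weights[i] = w / (w.sum() + 1e-12)
    return weights

def fit_skinning(rest, frames, n_bones, iterations=5, seed=0):
    """rest: n x 3 rest pose; frames: list of n x 3 target poses."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rest.shape[0], n_bones))
    weights /= weights.sum(axis=1, keepdims=True)
    for _ in range(iterations):
        transforms = [solve_transforms(rest, f, weights) for f in frames]
        weights = solve_weights(rest, frames, transforms, n_bones)
    return weights, transforms
```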
Computer Graphics Forum | 2000
John Dingliana; Carol O'Sullivan
Interactive simulation is made possible in many applications by simplifying or culling the finer details that would make real‐time performance impossible. This paper examines detail simplification in the specific problem of collision handling for rigid body animation. We present an automated method for calculating consistent collision response at different levels of detail. The mechanism works closely with a system which uses a pre‐computed hierarchical volume model for collision detection.
Computer Graphics Forum | 2002
Carol O'Sullivan; Justine Cassell; Hannes Högni Vilhjálmsson; John Dingliana; Simon Dobbyn; B. McNamee; Christopher Peters; Thanh Giang
Work on levels of detail for human simulation has occurred mainly on a geometrical level, either by reducing the number of polygons representing a virtual human, or by replacing them with a two‐dimensional impostor. Approaches that reduce the complexity of the motions generated have also been proposed. In this paper, we describe the ongoing development of a framework for Adaptive Level Of Detail for Human Animation (ALOHA), which incorporates levels of detail not only for geometry and motion, but also a complexity gradient for natural behaviour, both conversational and social.
International Conference on Computer Graphics and Interactive Techniques | 2009
Rachel McDonnell; Michéal Larkin; Benjamín Hernández; Isaac Rudomin; Carol O'Sullivan
Populated virtual environments need to be simulated with as much variety as possible. By identifying the most salient parts of the scene and characters, available resources can be concentrated where they are needed most. In this paper, we investigate which body parts of virtual characters are most looked at in scenes containing duplicate characters or clones. Using an eye-tracking device, we recorded fixations on body parts while participants were asked to indicate whether clones were present or not. We found that the head and upper torso attract the majority of first fixations in a scene and are attended to most. This is true regardless of the orientation, presence or absence of motion, sex, age, size, and clothing style of the character. We developed a selective variation method to exploit this knowledge and perceptually validated our method. We found that selective colour variation is as effective at generating the illusion of variety as full colour variation. We then evaluated the effectiveness of four variation methods that varied only salient parts of the characters. We found that head accessories, top texture and face texture variation are all equally effective at creating variety, whereas facial geometry alterations are less so. Performance implications and guidelines are presented.
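A minimal sketch of the selective variation idea: per-clone variation is applied only to the regions found to attract fixations (head and upper torso), leaving the rest of the model untouched. The region names, palette and character record below are assumptions for illustration; the paper varies real textures, accessories and geometry on full character models.

```python
# Toy selective variation: recolour only the salient body regions of each clone.
import random

SALIENT_REGIONS = ("head", "upper_torso")
PALETTE = ["#b04a3a", "#3a6db0", "#4ab06e", "#b0a23a", "#6e3ab0"]

def selective_variation(clones, rng=random):
    """clones: list of dicts with a 'regions' dict mapping region name -> colour."""
    for clone in clones:
        for region in SALIENT_REGIONS:
            clone["regions"][region] = rng.choice(PALETTE)   # vary only salient parts
    return clones

crowd = [{"model": "template_female_01",
          "regions": {"head": "#888888", "upper_torso": "#888888",
                      "legs": "#888888", "feet": "#888888"}}
         for _ in range(4)]
for c in selective_variation(crowd):
    print(c["regions"]["head"], c["regions"]["legs"])
```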