Steven Collins
Trinity College, Dublin
Publications
Featured research published by Steven Collins.
ACM Transactions on Graphics | 2008
Ladislav Kavan; Steven Collins; Jiří Žára; Carol O'Sullivan
Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on a linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.
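The blending step can be sketched in a few lines of Python. This is a CPU toy illustrating dual quaternion linear blending (weighted sum, hemisphere fix-up, normalization), not the authors' GPU implementation; quaternions are represented as (w, x, y, z) tuples and all helper names are invented for illustration.

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def dq_from_rt(q, t):
    # Dual quaternion (real, dual) encoding rotation q followed by translation t
    return (q, tuple(0.5 * c for c in qmul((0.0, *t), q)))

def dlb(weights, dqs):
    # Dual quaternion linear blending: weighted sum followed by normalization.
    # Real parts are sign-flipped into the hemisphere of the first bone so
    # antipodal quaternions do not cancel out.
    r0 = dqs[0][0]
    real, dual = [0.0]*4, [0.0]*4
    for w, (r, d) in zip(weights, dqs):
        if sum(a*b for a, b in zip(r, r0)) < 0.0:
            w = -w
        for i in range(4):
            real[i] += w*r[i]
            dual[i] += w*d[i]
    n = math.sqrt(sum(c*c for c in real))
    return tuple(c/n for c in real), tuple(c/n for c in dual)

def dq_transform(dq, p):
    # Apply a unit dual quaternion to a 3D point: rotate by the real part,
    # then translate by t = 2 * dual * conj(real).
    r, d = dq
    rotated = qmul(qmul(r, (0.0, *p)), qconj(r))
    t = qmul(d, qconj(r))
    return tuple(rotated[i] + 2.0*t[i] for i in (1, 2, 3))
```

Blending the identity with a pure translation at equal weights yields half the translation, with no scaling or shearing of the kind linear matrix blending introduces.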
Symposium on Interactive 3D Graphics and Games | 2007
Ladislav Kavan; Steven Collins; Jiří Žára; Carol O'Sullivan
Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this paper, we present a novel GPU-friendly skinning algorithm based on dual quaternions. We show that this approach solves the artifacts of linear blend skinning at minimal additional cost. Upgrading an existing animation system (e.g., in a videogame) from linear to dual quaternion skinning is very easy and has negligible impact on run-time performance.
International Conference on Computer Graphics and Interactive Techniques | 2008
Rachel McDonnell; Michéal Larkin; Simon Dobbyn; Steven Collins; Carol O'Sullivan
When simulating large crowds, it is inevitable that the models and motions of many virtual characters will be cloned. However, the perceptual impact of this trade-off has never been studied. In this paper, we consider the ways in which an impression of variety can be created and the perceptual consequences of certain design choices. In a series of experiments designed to test people's perception of variety in crowds, we found that clones of appearance are far easier to detect than motion clones. Furthermore, we established that cloned models can be masked by color variation, random orientation, and motion. Conversely, the perception of cloned motions remains unaffected by the model on which they are displayed. Other factors that influence the ability to detect clones were examined, such as proximity, model type and characteristic motion. Our results provide novel insights and useful thresholds that will assist in creating more realistic, heterogeneous crowds.
Computer Graphics Forum | 2010
Daniel Sýkora; David Sedlacek; S. Jinchao; John Dingliana; Steven Collins
This paper presents a novel interactive approach for adding depth information into hand‐drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables creation of consistent pop‐ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom‐tailored optimization framework that tries to mimic the way that a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a quadratic programming formulation that is time‐consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline including applications such as enhancement, registration, composition, 3D modelling and stereoscopic display.
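The flavor of the sparse depth (in)equalities can be shown with a deliberately simplified stand-in: a longest-path-style relaxation that assigns integer depths satisfying "a is in front of b" pairs. The paper itself solves a quadratic program with additional smoothness terms; this sketch is not that algorithm, and `assign_depths` is an invented name.

```python
def assign_depths(n_regions, in_front_of):
    # in_front_of: (a, b) pairs meaning region a occludes (is closer than) b.
    # Enforce depth[a] >= depth[b] + 1 by repeated relaxation; a larger value
    # means closer to the viewer. A cycle-free, consistent constraint set is
    # assumed, otherwise the relaxation cannot converge.
    depth = [0] * n_regions
    for _ in range(len(in_front_of) + 1):
        changed = False
        for a, b in in_front_of:
            if depth[a] < depth[b] + 1:
                depth[a] = depth[b] + 1
                changed = True
        if not changed:
            break
    return depth
```

A chain of two constraints (region 0 in front of 1, region 1 in front of 2) yields strictly decreasing depths, which is exactly the ordering a pop-up needs.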
Non-Photorealistic Animation and Rendering | 2009
Daniel Sýkora; John Dingliana; Steven Collins
We present a new approach to deformable image registration suitable for articulated images such as hand-drawn cartoon characters and human postures. For this type of data, state-of-the-art techniques typically yield undesirable results. We propose a novel geometrically motivated iterative scheme where point movements are decoupled from shape consistency. By combining locally optimal block matching with as-rigid-as-possible shape regularization, our algorithm allows us to register images undergoing large free-form deformations and appearance variations. We demonstrate its practical usability in various challenging tasks performed in the cartoon animation production pipeline including unsupervised inbetweening, example-based shape deformation, auto-painting, editing, and motion retargeting.
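The as-rigid-as-possible regularization step fits a best rigid (rotation plus translation) transform to each neighborhood's matched points. A minimal 2D Procrustes-style fit can be sketched as follows; this is a generic building block, not the authors' full iterative scheme, and `fit_rigid` is an invented name.

```python
import math

def fit_rigid(src, dst):
    # Least-squares rigid alignment of 2D point sets: center both sets,
    # recover the rotation angle from the cross-covariance, then solve for
    # the translation. Returns a function mapping src-space points to
    # dst-space.
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ux, uy = sx - csx, sy - csy
        vx, vy = dx - cdx, dy - cdy
        a += ux*vx + uy*vy   # cosine accumulator
        b += ux*vy - uy*vx   # sine accumulator
    th = math.atan2(b, a)
    c, s = math.cos(th), math.sin(th)
    tx = cdx - (c*csx - s*csy)
    ty = cdy - (s*csx + c*csy)
    return lambda p: (c*p[0] - s*p[1] + tx, s*p[0] + c*p[1] + ty)
```

Fitting a triangle to a rotated and translated copy of itself recovers the motion exactly; in a registration loop this rigid fit pulls block-matching results back toward a shape-consistent configuration.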
Computer Graphics Forum | 2009
Daniel Sýkora; John Dingliana; Steven Collins
In this paper we present LazyBrush, a novel interactive tool for painting hand‐made cartoon drawings and animations. Its key advantage is simplicity and flexibility. As opposed to previous custom‐tailored approaches [SBv05, QWH06], LazyBrush does not rely on style‐specific features such as homogeneous regions or pattern continuity, yet still requires comparable or even less manual effort for a broad class of drawing styles. In addition to this, it is not sensitive to imprecise placement of color strokes, which makes painting less tedious and brings significant time savings in the context of cartoon animation. LazyBrush originally stems from a requirements analysis carried out with professional ink‐and‐paint illustrators, who established a list of useful features for an ideal painting tool. We incorporate this list into an optimization framework leading to a variant of Potts energy with several interesting theoretical properties. We show how to minimize it efficiently and demonstrate its usefulness in various practical scenarios including the ink‐and‐paint production pipeline.
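A toy stand-in for the painting step: labels flood outward from color scribbles through paintable pixels, stopping at ink lines. The actual LazyBrush tool minimizes a Potts energy, which is what makes it robust to imprecisely placed strokes; this breadth-first fill is only an illustration of scribble-driven labeling, and all names here are invented.

```python
from collections import deque

def paint(mask, scribbles):
    # mask: 2D grid, 1 = paintable region, 0 = ink line (barrier)
    # scribbles: {(x, y): label}; each label floods outward through
    # paintable pixels (4-connectivity) until it meets a barrier or
    # an already-labeled pixel.
    h, w = len(mask), len(mask[0])
    label = [[None] * w for _ in range(h)]
    q = deque()
    for (x, y), l in scribbles.items():
        label[y][x] = l
        q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] \
                    and label[ny][nx] is None:
                label[ny][nx] = label[y][x]
                q.append((nx, ny))
    return label
```

With a vertical ink line separating two regions, one scribble per side suffices to paint both regions, which is the workflow the tool's energy formulation generalizes.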
Symposium on Interactive 3D Graphics and Games | 2008
Ladislav Kavan; Simon Dobbyn; Steven Collins; Jiří Žára; Carol O'Sullivan
Various methods have been proposed to animate and render large crowds of humans in real time for applications such as games and interactive walkthroughs. Recent methods have been developed to render large numbers of pre-computed image-based human representations (Impostors) by exploiting commodity graphics hardware, thus achieving very high frame-rates while maintaining visual fidelity. Unfortunately, these images consume a lot of texture memory, no in-betweening is possible, and the variety of animations that can be shown is severely restricted. This paper proposes an alternative method that significantly improves upon pre-computed impostors: automatically generated 2D polygonal characters (or Polypostors). When compared with image-based crowd rendering systems, Polypostors exhibit a similarly high level of rendering efficiency and visual fidelity, with considerably lower memory requirements (up to a factor of 30 in our test cases). Furthermore, Polypostors enable simple in-betweening and can thus deliver a greater variety of animations at any required level of smoothness with almost no overhead.
Archive | 1995
Steven Collins
We present an extension to existing techniques to provide for more accurate resolution of specular to diffuse transfer within a global illumination framework. In particular, this new model is adaptive with a view to capturing high frequency phenomena such as caustic curves in sharp detail and yet allowing for low frequency detail without compromising noise levels and aliasing artefacts. A 2-pass ray-tracing algorithm is used, with an adaptive light-pass followed by a standard eye-pass. During the light-pass, rays are traced from the light sources (essentially sampling the wavefront radiating from the sources), each carrying a fraction of the total power per wavelength of the source. The interactions of these rays with diffuse surfaces are recorded in illumination-maps, as first proposed by Arvo [Ar86]. The key to reconstructing the intensity gradients due to this light-pass lies in the construction of the illumination maps. We record the power carried by the ray as a splat of energy flux, deposited on the surface using a Gaussian distribution kernel. The kernel of the splat is adaptively scaled according to an estimation of the wavefront divergence or convergence, thus resolving sharp intensity gradients in regions of high wavefront convergence and smooth gradients in areas of divergence. The second-pass eye-trace modulates the surface's radiance according to the power stored in the illumination map in order to include the specular to diffuse light modelled during the first pass.
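The energy-conserving Gaussian splat can be sketched as follows; `splat` is an invented helper, and in the method described above the kernel width sigma would be scaled by the estimated wavefront convergence (small for converging caustic rays, large for diverging ones) rather than passed in directly.

```python
import math

def splat(imap, center, power, sigma):
    # Deposit 'power' onto the illumination map as a Gaussian splat.
    # The kernel is renormalized over the pixels it covers, so the total
    # deposited flux equals 'power' exactly (energy conservation), even
    # near the map boundary.
    h, w = len(imap), len(imap[0])
    cx, cy = center
    r = int(math.ceil(3.0 * sigma))   # truncate the kernel at 3 sigma
    kernel = {}
    total = 0.0
    for y in range(max(0, int(cy) - r), min(h, int(cy) + r + 1)):
        for x in range(max(0, int(cx) - r), min(w, int(cx) + r + 1)):
            g = math.exp(-((x - cx)**2 + (y - cy)**2)
                         / (2.0 * sigma * sigma))
            kernel[(x, y)] = g
            total += g
    for (x, y), g in kernel.items():
        imap[y][x] += power * g / total
```

A small sigma concentrates the ray's power into a few pixels (sharp caustic detail); a large sigma spreads the same power smoothly, trading resolution for lower noise.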
Symposium on Interactive 3D Graphics and Games | 2009
Ladislav Kavan; Steven Collins; Carol O'Sullivan
Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.
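A 2D toy conveys the core idea: sample the true (nonlinear) rotation blend at an intermediate pose, treat that sample as a virtual bone, and re-express the pose as a linear blend of the two nearest samples. The error of plain linear matrix blending drops accordingly. The paper's actual method computes virtual bones and weights automatically by fitting over sampled poses; this sketch, with invented names, is illustrative only.

```python
import math

def rot(theta):
    # 2x2 rotation matrix as nested tuples
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def linear_blend(weights, mats):
    # ordinary linear blending of 2x2 matrices (the 2D analogue of LBS)
    return tuple(tuple(sum(w * m[i][j] for w, m in zip(weights, mats))
                       for j in range(2)) for i in range(2))

def apply(m, p):
    return (m[0][0]*p[0] + m[0][1]*p[1], m[1][0]*p[0] + m[1][1]*p[1])

def err(m, theta, p=(1.0, 0.0)):
    # distance from the linearly blended result to the exact rotation
    a, b = apply(m, p), apply(rot(theta), p)
    return math.hypot(a[0] - b[0], a[1] - b[1])

# target pose: a quarter of the way from identity to a 90-degree rotation
theta = 0.25 * (math.pi / 2)

# two real bones only: plain linear blending, visible shrinkage
m2 = linear_blend((0.75, 0.25), (rot(0.0), rot(math.pi / 2)))

# one virtual bone sampled from the true rotation at 45 degrees; the same
# pose re-expressed as a linear blend of the two nearest samples
m3 = linear_blend((0.5, 0.5), (rot(0.0), rot(math.pi / 4)))
```

Because each linear segment now spans a smaller arc of the nonlinear blending function, the piecewise-linear approximation can be driven to any accuracy by adding samples, which is what lets the runtime stay pure linear skinning.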
Symposium on Computer Animation | 2006
Rachel McDonnell; Simon Dobbyn; Steven Collins; Carol O'Sullivan
Recent developments in crowd simulation have allowed thousands of characters to be rendered in real-time. Usually this is achieved through the use of Level of Detail (LOD) models for the individuals in the crowd. Perceptual studies have shown that image-based representations, i.e., impostors, can be used as imperceptible background substitutes for high-polygon models for skinned human characters, resulting in optimal rendering times and high visual fidelity. However, previous methods only showed humans dressed in clothes that were deformed using standard skinning methods. Highly deformable objects like cloth are not effectively depicted using these methods. Therefore, in this paper, we present the first perceptual evaluation of different LOD representations of humans wearing deformable (i.e., physically simulated) clothing. We show conclusively that impostors are startlingly effective at depicting the deformation properties of clothing and present useful guidelines for the development of crowd systems with thousands of realistically clothed humans.