Publication


Featured research published by Steve DiPaola.


Journal of Visualization and Computer Animation | 1991

Extending the range of facial types

Steve DiPaola

We describe, in case study form, techniques for extending the range of facial types and movement in a parametric facial animation system originally developed to model and control synthetic 3D faces limited to a normal range of human shape and motion. These techniques have allowed us to create a single authoring system that can create and animate a wide range of facial types, from realistic through stylized to cartoon-like, or a combination thereof, all from the same control system. Additionally, we describe image processing and 3D deformation tools that allow for a greater range of facial types and facial animation output.
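
As a rough illustration of the core idea (the same parametric controls, with their ranges relaxed or exaggerated, drive both realistic and stylized faces), here is a minimal sketch; the parameter names and ranges are illustrative assumptions, not the paper's actual system:

```python
# Minimal sketch of a parametric face rig whose parameters can be pushed
# past the "normal human" range to produce stylized or cartoon-like faces.
# All parameter names and ranges are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FaceParams:
    eye_size: float = 1.0      # 1.0 = average human
    jaw_width: float = 1.0
    mouth_curve: float = 0.0   # -1.0 (frown) .. 1.0 (smile)

HUMAN_RANGE = {"eye_size": (0.8, 1.2), "jaw_width": (0.85, 1.15), "mouth_curve": (-1.0, 1.0)}

def clamp_to_human(p: FaceParams) -> FaceParams:
    """Restrict parameters to a realistic human range."""
    vals = {k: min(max(getattr(p, k), lo), hi) for k, (lo, hi) in HUMAN_RANGE.items()}
    return FaceParams(**vals)

def exaggerate(p: FaceParams, factor: float) -> FaceParams:
    """Push parameters away from the average for stylized or cartoon faces."""
    return FaceParams(
        eye_size=1.0 + (p.eye_size - 1.0) * factor,
        jaw_width=1.0 + (p.jaw_width - 1.0) * factor,
        mouth_curve=p.mouth_curve * factor,
    )

realistic = clamp_to_human(FaceParams(eye_size=1.3, jaw_width=0.9))
cartoon = exaggerate(realistic, factor=3.0)   # same controls, wider output range
print(realistic, cartoon, sep="\n")
```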


Genetic Programming and Evolvable Machines | 2009

Incorporating characteristics of human creativity into an evolutionary art algorithm

Steve DiPaola; Liane Gabora

A perceived limitation of evolutionary art and design algorithms is that they rely on human intervention; the artist selects the most aesthetically pleasing variants of one generation to produce the next. This paper discusses how computer-generated art and design can become more creatively human-like with respect to both process and outcome. As an example of a step in this direction, we present an algorithm that overcomes the above limitation by employing an automatic fitness function. The goal is to evolve abstract portraits of Darwin, using our second-generation fitness function, which rewards genomes that not only produce a likeness of Darwin but also exhibit certain strategies characteristic of human artists. We note that in human creativity, change is less a matter of choosing among randomly generated variants and more a matter of capitalizing on the associative structure of a conceptual network to home in on a vision. We discuss how to achieve this fluidity algorithmically.
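
A minimal sketch of such a loop, assuming a toy genome and an automatic fitness that combines a likeness term with a reward for an artist-like strategy; both terms and their weights are invented for illustration and are not the paper's actual function:

```python
# Sketch of an evolutionary art loop with an automatic fitness function
# replacing human selection. Fitness = likeness to a target + a bonus for
# an "artist-like" strategy (here, a toy tonal-contrast term).

import random

TARGET = [0.2, 0.7, 0.5, 0.9]          # stand-in for target image features

def likeness(genome):
    """Higher when the rendered genome resembles the target."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def strategy_bonus(genome):
    """Toy stand-in for rewarding artist-like strategies, e.g. tonal contrast."""
    return max(genome) - min(genome)

def fitness(genome):
    return likeness(genome) + 0.1 * strategy_bonus(genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]           # automatic selection: no human in the loop
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best fitness:", round(fitness(population[0]), 4))
```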


IEEE Transactions on Computational Intelligence and AI in Games | 2011

A Generic Approach to Challenge Modeling for the Procedural Creation of Video Game Levels

Nathan Sorenson; Philippe Pasquier; Steve DiPaola

This paper presents an approach to automatic video game level design consisting of a computational model of player enjoyment and a generative system based on evolutionary computing. The model estimates the entertainment value of game levels according to the presence of “rhythm groups,” which are defined as alternating periods of high and low challenge. The generative system represents a novel combination of genetic algorithms (GAs) and constraint satisfaction (CS) methods and uses the model as a fitness function for the generation of fun levels for two different games. This top-down approach improves upon typical bottom-up techniques in providing semantically meaningful parameters such as difficulty and player skill, in giving human designers considerable control over the output of the generative system, and in offering the ability to create levels for different types of games.
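
A minimal sketch of the rhythm-group idea, assuming a level is summarized as a per-segment challenge curve; the threshold and scoring rule are assumptions, not the paper's model:

```python
# Sketch of a "rhythm group" fitness function: a level, represented as a
# sequence of per-segment challenge values, scores higher when it alternates
# between sustained high-challenge and low-challenge periods.

def rhythm_groups(challenge, threshold=0.5):
    """Split a challenge curve into alternating high/low runs."""
    groups, current = [], [challenge[0]]
    for c in challenge[1:]:
        same_side = (c >= threshold) == (current[-1] >= threshold)
        if same_side:
            current.append(c)
        else:
            groups.append(current)
            current = [c]
    groups.append(current)
    return groups

def fitness(challenge):
    """Reward alternation, but reject jittery single-segment 'groups'."""
    groups = rhythm_groups(challenge)
    if min(len(g) for g in groups) < 2:
        return 0            # noise, not a rhythm
    return len(groups)      # more alternations -> more rhythm groups

flat = [0.9] * 12                              # unbroken difficulty: one group
rhythmic = [0.9, 0.9, 0.9, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.1, 0.1, 0.1]
print(fitness(flat), fitness(rhythmic))        # the alternating level scores higher
```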


Computer Animation and Virtual Worlds | 2006

Facial actions as visual cues for personality

Ali Arya; Lisa N. Jefferies; James T. Enns; Steve DiPaola

What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space based on the orthogonal dimensions of desire for affiliation and displays of social dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived social dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on dominance and affiliation as two parameters that control the facial actions of autonomous animated characters.
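
A minimal sketch of how such findings could parameterize a character, assuming a simple linear mapping from facial actions onto the dominance/affiliation plane; the directions follow the reported results, but the numeric weights are invented:

```python
# Sketch of mapping facial actions to a two-dimensional personality space
# (dominance x affiliation). Directions follow the findings summarized
# above; the weights are illustrative assumptions.

ACTION_EFFECTS = {
    # action: (dominance delta, affiliation delta) per unit of weight
    "head_tilt":     (+0.4, 0.0),
    "gaze_aversion": (-0.4, 0.0),
    "smile":         (0.0, +0.5),
    "contempt":      (0.0, -0.5),
}

def perceived_personality(actions):
    """actions: dict of action name -> combined intensity/frequency in [0, 1]."""
    dominance = sum(ACTION_EFFECTS[a][0] * w for a, w in actions.items())
    affiliation = sum(ACTION_EFFECTS[a][1] * w for a, w in actions.items())
    return dominance, affiliation

# More frequent/intense head actions shift the character toward dominance.
print(perceived_personality({"head_tilt": 0.3, "smile": 0.5}))
print(perceived_personality({"head_tilt": 0.9, "smile": 0.5}))
```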


International Journal of Arts and Technology | 2009

Exploring a parameterised portrait painting space

Steve DiPaola

We overview our interdisciplinary work building parameterised knowledge domains and their authoring tools that allow for expression systems, which move through a space of painterly portraiture. With new computational systems it is possible to conceptually dance, compose and paint in higher-level conceptual spaces. We are interested in building art systems that support exploring these spaces and in particular report on our software-based artistic toolkit and resulting experiments using parameter spaces in face-based new media portraiture. This system allows us to parameterise the open cognitive and vision-based methodology that human artists have intuitively evolved over centuries into a domain toolkit to explore aesthetic realisations and interdisciplinary questions about the act of portrait painting as well as the general creative process. These experiments and questions can be explored by traditional and new media artists, art historians, cognitive scientists and other scholars.
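
A minimal sketch of moving through such a parameterised space, assuming a painterly style is a point in a small parameter vector and traversal is simple interpolation; all names and values are illustrative:

```python
# Sketch of "moving through" a parameterised painting space: each point is
# a set of painterly parameters, and interpolating between two anchor
# points yields intermediate styles. Parameter names are assumptions.

def lerp_style(a, b, t):
    """Linear interpolation between two points in the style parameter space."""
    return {k: a[k] + (b[k] - a[k]) * t for k in a}

rembrandt_like = {"brush_size": 0.8, "palette_warmth": 0.9, "edge_softness": 0.7}
impressionist  = {"brush_size": 0.3, "palette_warmth": 0.5, "edge_softness": 0.2}

# Sample five styles along the path between the two anchor points.
for i in range(5):
    print(lerp_style(rembrandt_like, impressionist, i / 4))
```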


Computer Games | 2009

Perceptually Valid Facial Expressions for Character-Based Applications

Ali Arya; Steve DiPaola; Avi Parush

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native-language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
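
A minimal sketch of the regionalization idea, assuming a valence/arousal/dominance emotion space and a nearest-region lookup; the axes, region centers, and facial actions are assumptions, not the paper's experimentally derived units:

```python
# Sketch of "facial expression units": the 3D emotion space is partitioned
# into regions, each associated with a set of facial actions, and a mixed
# emotion is rendered with the actions of the nearest region.

import math

# (valence, arousal, dominance) region center -> facial actions
EXPRESSION_UNITS = {
    (0.8, 0.5, 0.3):    ["lip_corner_pull", "cheek_raise"],       # joy-like region
    (-0.7, 0.6, 0.6):   ["brow_lower", "lid_tighten"],            # anger-like region
    (-0.6, -0.4, -0.5): ["inner_brow_raise", "lip_corner_drop"],  # sadness-like region
}

def expression_for(emotion):
    """Pick the facial actions of the nearest region in emotion space."""
    nearest = min(EXPRESSION_UNITS, key=lambda center: math.dist(center, emotion))
    return EXPRESSION_UNITS[nearest]

# A bittersweet mix: mildly negative valence, low arousal.
print(expression_for((-0.3, -0.2, -0.1)))
```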


Leonardo | 2010

Rembrandt's Textural Agency: A Shared Perspective in Visual Art and Science

Steve DiPaola; Caitlin J. Riebe; James T. Enns

The authors hypothesize that Rembrandt developed new painterly techniques in order to engage and direct the gaze of the observer. Although these methods were not based on scientific evidence at the time, they are nonetheless consistent with a contemporary understanding of human vision. The authors propose that artists in the late early-modern period developed the technique of textural agency (selective variation in image detail) to guide the observer's eye and thereby influence the viewing experience. They conclude with the presentation of laboratory evidence that Rembrandt's techniques indeed guide the modern viewer's eye as proposed.


International Conference on Computer Graphics and Interactive Techniques | 2006

Emotional remapping of music to facial animation

Steve DiPaola; Ali Arya

We propose a method to extract emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e., the translation of affective content (e.g., emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set.
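
A minimal sketch of such a remapping, assuming only coarse tempo/mode features and an invented rule set; this is not the MusicFace implementation:

```python
# Sketch of affective remapping from music to face: coarse emotional
# features (here, tempo and mode) are estimated from MIDI-like data and
# mapped to facial expression parameters. Features and rules are
# illustrative assumptions.

def music_emotion(tempo_bpm, is_major):
    """Very coarse valence/arousal estimate from tempo and mode."""
    arousal = min(max((tempo_bpm - 60) / 120, 0.0), 1.0)
    valence = 0.7 if is_major else 0.3
    return valence, arousal

def face_params(valence, arousal):
    """Remap the emotion estimate onto facial animation parameters."""
    return {
        "mouth_curve": (valence - 0.5) * 2.0,   # smile for positive valence
        "eye_openness": 0.5 + arousal * 0.5,    # wider eyes at high arousal
        "head_motion_rate": arousal,            # livelier head at high arousal
    }

print(face_params(*music_emotion(tempo_bpm=150, is_major=True)))   # upbeat piece
print(face_params(*music_emotion(tempo_bpm=70, is_major=False)))   # somber piece
```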


EURASIP Journal on Image and Video Processing | 2007

Multispace behavioral model for face-based affective social agents

Ali Arya; Steve DiPaola

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial-feature level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide a flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
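
A minimal sketch of the three-space structure, assuming a simple additive blend into facial-feature parameters; the parameter names and weights are assumptions:

```python
# Sketch of the three-space model: knowledge (what to do), personality
# (stable disposition), and mood (transient state) each contribute to the
# low-level geometry parameters that drive the face.

def geometry_params(knowledge_action, personality, mood):
    """Combine the three higher-level spaces into facial-feature parameters."""
    params = {"brow_raise": 0.0, "mouth_curve": 0.0, "head_tilt": 0.0}
    # Knowledge space: the current task selects a base action.
    if knowledge_action == "greet":
        params["brow_raise"] += 0.4
        params["mouth_curve"] += 0.3
    # Personality space: a stable bias, e.g. an affiliative character smiles more.
    params["mouth_curve"] += 0.3 * personality["affiliation"]
    params["head_tilt"] += 0.3 * personality["dominance"]
    # Mood space: a transient offset that would decay over time.
    params["mouth_curve"] += 0.2 * mood["valence"]
    return params

print(geometry_params("greet",
                      personality={"affiliation": 0.8, "dominance": -0.2},
                      mood={"valence": -0.5}))
```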


IEEE Transactions on Multimedia | 2007

Face Modeling and Animation Language for MPEG-4 XMT Framework

Ali Arya; Steve DiPaola

This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
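
Since the actual FML syntax is not reproduced here, the following is a hypothetical illustration only: a Python sketch that assembles an FML-like document with sequential and parallel actions, where all element and attribute names are assumed from the constructs listed above rather than taken from the FML specification:

```python
# Hypothetical FML-like document combining sequential and parallel actions.
# Element and attribute names are assumptions, not actual FML syntax.

import xml.etree.ElementTree as ET

fml = ET.Element("fml")
story = ET.SubElement(fml, "story")

seq = ET.SubElement(story, "seq")                    # actions run one after another
ET.SubElement(seq, "talk").text = "Hello there."
par = ET.SubElement(seq, "par")                      # these two actions run together
ET.SubElement(par, "expression", {"type": "smile", "intensity": "0.7"})
ET.SubElement(par, "head", {"move": "nod"})

print(ET.tostring(fml, encoding="unicode"))
```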

Collaboration


Top co-author: James T. Enns (University of British Columbia).