Xiaoyuan Tu
Intel
Publications
Featured research published by Xiaoyuan Tu.
international conference on computer graphics and interactive techniques | 1994
Xiaoyuan Tu; Demetri Terzopoulos
This paper proposes a framework for animation that can achieve the intricacy of motion evident in certain natural ecosystems with minimal input from the animator. The realistic appearance, movement, and behavior of individual animals, as well as the patterns of behavior evident in groups of animals, fall within the scope of the framework. Our approach to emulating this level of natural complexity is to model each animal holistically as an autonomous agent situated in its physical world. To demonstrate the approach, we develop a physics-based, virtual marine world. The world is inhabited by artificial fishes that can swim hydrodynamically in simulated water through the motor control of internal muscles that motivate fins. Their repertoire of behaviors relies on their perception of the dynamic environment. As in nature, the detailed motions of artificial fishes in their virtual habitat are not entirely predictable because they are not scripted.
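The abstract's architecture, in which each fish is an autonomous agent whose perception drives behavior selection and, in turn, muscle actuation inside a shared physical simulation, can be summarized in a short sketch. The following Python is purely illustrative; the class and method names are assumptions, not the paper's implementation:

```python
# Illustrative sketch of the per-frame loop the abstract implies: every fish
# senses its world, selects a behavior, and actuates muscles before the
# shared physics step. Names and details are assumptions, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class Fish:
    position: tuple = (0.0, 0.0, 0.0)
    intention: str = "wander"

    def perceive(self, world):
        # Gather nearby fishes, food, and obstacles within a sensing radius.
        return {"neighbors": [f for f in world.fishes if f is not self]}

    def select_behavior(self, percept):
        # Choose an intention from habits, mental state, and the percept.
        self.intention = "school" if percept["neighbors"] else "wander"

    def actuate_muscles(self, dt):
        # Contract/relax internal muscle springs that drive the tail and fins.
        pass

@dataclass
class World:
    fishes: list = field(default_factory=list)

    def step(self, dt):
        for fish in self.fishes:
            percept = fish.perceive(self)
            fish.select_behavior(percept)
            fish.actuate_muscles(dt)
        # ...advance the physics (hydrodynamics, spring-mass bodies) here...

world = World(fishes=[Fish(), Fish()])
world.step(0.016)
```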
international conference on computer graphics and interactive techniques | 1999
John Funge; Xiaoyuan Tu; Demetri Terzopoulos
Recent work in behavioral animation has taken impressive steps toward autonomous, self-animating characters for use in production animation and interactive games. It remains difficult, however, to direct autonomous characters to perform specific tasks. This paper addresses the challenge by introducing cognitive modeling. Cognitive models go beyond behavioral models in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. To help build cognitive models, we develop the cognitive modeling language CML. Using CML, we can imbue a character with domain knowledge, elegantly specified in terms of actions, their preconditions and their effects, and then direct the character’s behavior in terms of goals. Our approach allows behaviors to be specified more naturally and intuitively, more succinctly and at a much higher level of abstraction than would otherwise be possible. With cognitively empowered characters, the animator need only specify a behavior outline or “sketch plan” and, through reasoning, the character will automatically work out a detailed sequence of actions satisfying the specification. We exploit interval methods to integrate sensing into our underlying theoretical framework, thus enabling our autonomous characters to generate action plans even in highly complex, dynamic virtual worlds. We demonstrate cognitive modeling applications in advanced character animation and automated cinematography.
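To give a feel for "actions specified by preconditions and effects, directed by goals", here is a deliberately simplified, STRIPS-style sketch in Python. CML itself is a logic-based language grounded in the situation calculus, so this is only an analogy; the action names and the toy planner are invented for illustration:

```python
from collections import deque

# Each action is described by preconditions and effects, as the abstract describes.
actions = {
    "walk_to_door": {"pre": set(),                    "add": {"at_door"},   "rem": set()},
    "open_door":    {"pre": {"at_door"},              "add": {"door_open"}, "rem": set()},
    "exit_room":    {"pre": {"at_door", "door_open"}, "add": {"outside"},   "rem": {"at_door"}},
}

def plan(state, goal):
    """Breadth-first search for an action sequence that achieves the goal."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, seq = frontier.popleft()
        if goal <= facts:
            return seq
        for name, act in actions.items():
            if act["pre"] <= facts:
                nxt = frozenset((facts - act["rem"]) | act["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, seq + [name]))
    return None

print(plan({"in_room"}, {"outside"}))
# -> ['walk_to_door', 'open_door', 'exit_room']
```

The point of the analogy is that the user states the goal ("outside") rather than the action sequence; the character works out the detailed steps from the declarative action descriptions.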
international conference on computer graphics and interactive techniques | 1997
John Funge; Xiaoyuan Tu
Making Them Behave: Cognitive Models for Computer Animation. John David Funge, Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto, 1998. For applications in computer game development and character animation, recent work in behavioral animation has taken impressive steps toward autonomous, self-animating characters. It remains difficult, however, to direct autonomous characters to perform specific tasks. We propose a new approach to high-level control in which the user gives the character a behavior outline, or “sketch plan”. The behavior outline specification language has syntax deliberately chosen to resemble that of a conventional imperative programming language. In terms of functionality, however, it is a strict superset. In particular, a behavior outline need not be deterministic. This added freedom allows many behaviors to be specified more naturally, more simply, more succinctly and at a much higher level than would otherwise be possible. The character has complete autonomy to decide how to fill in the necessary missing details. The success of our approach rests heavily on our use of a rigorous logical language known as the situation calculus. The situation calculus is well known, simple and intuitive to understand. The basic idea is that a character views its world as a sequence of “snapshots” known as situations. An understanding of how the world can change from one situation to another can then be given to the character by describing what the effect of performing each given action would be. The character can use this knowledge to keep track of its world and to work out which actions to do next in order to attain its goals. The version of the situation calculus we use incorporates a new approach to representing epistemic fluents. The approach is based on interval arithmetic and addresses a number of difficulties in implementing previous approaches.
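The interval idea mentioned in the last sentences can be illustrated with a toy sketch: instead of enumerating possible worlds, a character tracks what it knows about a numeric fluent as an interval that actions widen and sensing narrows. The names below are hypothetical, not the thesis's notation:

```python
# Toy sketch of interval-valued knowledge about a fluent. Names (Interval,
# shift, sense) are illustrative assumptions, not from the thesis.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def shift(self, dlo, dhi):
        # Performing an action with an uncertain effect widens the interval.
        return Interval(self.lo + dlo, self.hi + dhi)

    def sense(self, reading):
        # A sensing action intersects current knowledge with the reading.
        return Interval(max(self.lo, reading.lo), min(self.hi, reading.hi))

depth = Interval(0.0, 50.0)                 # "somewhere between 0 and 50 m"
depth = depth.shift(-1.0, 1.0)              # drifting current: knowledge degrades
depth = depth.sense(Interval(10.0, 20.0))   # a sonar reading narrows it
print(depth)                                # Interval(lo=10.0, hi=20.0)
```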
pacific conference on computer graphics and applications | 1994
Xiaoyuan Tu; Demetri Terzopoulos
The realistic animation of animal behavior by autonomous animate agents requires that the agents be able to perceive their virtual worlds. We have created a virtual marine world inhabited by artificial fishes which can swim hydrodynamically in simulated water through the motor control of internal muscles. Artificial fishes exploit a rudimentary model of fish perception. Complex individual and group behaviors, including target tracking, obstacle avoidance, feeding, preying, schooling, and mating, result from the interplay between the internal cognitive state of the artificial fish and its perception of the external world.
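A rudimentary perception test of the kind the abstract alludes to might combine a sensing radius with a forward-facing view cone. The sketch below is an assumption about the general idea, not the authors' perception model; the radius and angle are made-up parameters:

```python
# Hypothetical "fish vision" test: an object is visible if it lies within a
# sensing radius and inside a forward view cone around the heading direction.
import math

def visible(fish_pos, fish_heading, obj_pos, radius=8.0, half_angle_deg=150.0):
    dx = [o - f for o, f in zip(obj_pos, fish_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0 or dist > radius:
        return dist == 0.0
    # Angle between the (unit) heading and the direction to the object.
    dot = sum(h * d for h, d in zip(fish_heading, dx)) / dist
    return math.degrees(math.acos(max(-1.0, min(1.0, dot)))) <= half_angle_deg

print(visible((0, 0, 0), (1, 0, 0), (3, 1, 0)))   # True: close and ahead
print(visible((0, 0, 0), (1, 0, 0), (-9, 0, 0)))  # False: out of range
```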
Archive | 1999
Xiaoyuan Tu
To enhance the visual realism of the marine environment, we have created physics-based animate models of seaweeds, plankton, and water currents.
Archive | 1999
Xiaoyuan Tu
To achieve realistic computer animation, our artificial fish model must capture the form and appearance of real fishes with adequate fidelity. In this chapter we design texture mapped, 3D geometric display models with which to “envelop” the biomechanical fish model described in Chapter 4, thus constructing different artificial fishes. We begin with color photographs of real fishes and build free-form geometric models of several different species using nonuniform rational B-spline (NURBS) surfaces. We develop a new interactive tool for segmenting portions of the fish images to be used as texture maps that are subsequently rendered onto the geometric display surfaces. Finally, we describe how this geometric display model is coupled to the dynamic model of the fish to appropriately actuate and deform the display model. We also describe the visualization of the pectoral fin motion in the display model.
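One simple way to couple a display surface to a deforming body, in the spirit of this chapter, is to bind each display vertex to nearby dynamic nodes with fixed weights computed at rest and to re-evaluate the vertex every frame. The sketch below is an illustrative simplification; the book's actual coupling of NURBS control points to the biomechanical mesh differs in detail:

```python
# Illustrative skinning sketch (assumed approach): weight each display vertex
# by inverse distance to the body's dynamic nodes, then recompute it per frame.
def bind_vertex(vertex, nodes):
    """Compute fixed inverse-distance weights once, at the rest pose."""
    weights = []
    for node in nodes:
        d = sum((v - n) ** 2 for v, n in zip(vertex, node)) ** 0.5
        weights.append(1.0 / (d + 1e-6))
    total = sum(weights)
    return [w / total for w in weights]

def deform_vertex(weights, nodes):
    """Re-evaluate the vertex from the current node positions each frame."""
    return tuple(sum(w * n[i] for w, n in zip(weights, nodes)) for i in range(3))

rest_nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
weights = bind_vertex((0.5, 0.2, 0.0), rest_nodes)
moved_nodes = [(0.0, 0.0, 0.0), (1.0, 0.3, 0.0)]   # the body has flexed
print(deform_vertex(weights, moved_nodes))
```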
Archive | 1999
Xiaoyuan Tu
This chapter discusses the motor system of the artificial fish (see Fig. 4.1). In particular, we describe the physics-based fish model and how it achieves locomotion. The biomechanical model we develop is simple, but it is nonetheless effective for realistically animating fish locomotion. We start by presenting the structure of the dynamic fish model, then introduce the dynamics of this model and the simulated aquatic environment. The numerical method employed for solving the differential equations of motion of the fish model is discussed next. Subsequently we describe the biomechanics-based modeling and control of the fish’s locomotion, paying special attention to the construction of the motor controllers. This includes the abstraction of the muscle movements for useful locomotion and the functional modeling of the pectoral fins.
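As a minimal illustration of the kind of simulation loop involved, consider a single viscoelastic "muscle" unit whose rest length is driven by an activation signal and integrated with semi-implicit Euler. This is a sketch under assumed parameters, not the monograph's actual formulation or numerical method:

```python
# Minimal sketch: one 1-D spring-damper "muscle" whose rest length is modulated
# sinusoidally (a stand-in for a muscle activation signal), integrated with
# semi-implicit Euler. The real model couples many such units in 3-D.
import math

k, c, m, dt = 40.0, 2.0, 1.0, 0.01   # stiffness, damping, node mass, time step
x, v = 1.0, 0.0                       # current length and rate of the unit

for step in range(200):
    t = step * dt
    rest = 1.0 + 0.2 * math.sin(2.0 * math.pi * t)   # activation signal
    force = -k * (x - rest) - c * v                   # spring + damper force
    v += dt * force / m                               # semi-implicit Euler
    x += dt * v

print(round(x, 3))   # length after 2 simulated seconds
```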
Archive | 1999
Xiaoyuan Tu
In this chapter we describe the graphical user interface that we have developed for our animation system. The purpose of the user interface is to make it easy for the animator to initialize animations and to control the behaviors of the artificial fishes at the low level, i.e., the physics level, and at the high level, i.e., the motivation level. We implemented the interface using the Forms Library [121].
Archive | 1999
Xiaoyuan Tu
As we discussed in the preceding chapter, there are diverse aspects to the realistic modeling of an artificial animal, from superficial appearance to internal functionality. It is helpful to think of the artificial fish model as consisting of three sub-models:
1. A graphical display model that uses geometry and texture mapping to capture the form and appearance of any specific real fish.
2. A biomechanical model that captures the physical and anatomical structure of the fish’s body, including its muscle actuators, and simulates its deformation and physical dynamics.
3. A brain model that is responsible for motor control, perception control and behavior control of the fish.
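The decomposition reads naturally as a composition of three objects. The sketch below is hypothetical (the class names and interfaces are invented) and only mirrors the structure described above:

```python
# Hypothetical composition of the three sub-models; names are illustrative.
class DisplayModel:
    def render(self, body):              # geometry + texture follows the body
        ...

class BiomechanicalModel:
    def step(self, muscle_signals, dt):  # spring-mass body + hydrodynamics
        ...

class Brain:
    def think(self, world):              # perception, behavior, motor control
        return {"swim": 0.5}             # muscle activation signals

class ArtificialFish:
    def __init__(self):
        self.display = DisplayModel()
        self.body = BiomechanicalModel()
        self.brain = Brain()

    def update(self, world, dt):
        signals = self.brain.think(world)
        self.body.step(signals, dt)
        self.display.render(self.body)

ArtificialFish().update(world=None, dt=0.01)
```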
Archive | 1999
Xiaoyuan Tu
Action selection in the artificial fish is controlled by its behavior system. The behavior system consists of the habits and mental state of the fish, an intention generator and a set of behavior routines (Fig. 7.1). The behavior system runs continuously within the fish’s simulation loop. At each time step the intention generator issues an intention based on the fish’s habits, mental state, and incoming sensory information. It then chooses and executes a behavior routine which in turn selects and runs the appropriate motor controllers. It is important to note that the behavior routines are incremental by design. Their job is to get the artificial fish one step closer to fulfilling its intention during the current time step. Moreover, at any given moment in time, there is only one intention or one active behavior in the artificial fish’s behavior system. This hypothesis is commonly made by ethologists when analyzing the behavior of fishes, birds and four-legged animals of or below intermediate complexity (e.g. dogs, cats) [94, 16].
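A compressed sketch of this single-active-intention loop might look as follows; the priorities, intentions, and motor parameters are assumptions for illustration, not the book's actual rules:

```python
# Hypothetical intention-generator/behavior-routine loop: exactly one intention
# is active per time step, and the chosen routine only nudges the fish one
# incremental step toward fulfilling it.
def intention_generator(fish, percept):
    # Highest-priority need wins (invented priorities).
    if percept.get("predator_near"):
        return "escape"
    if fish["hunger"] > 0.7 and percept.get("food_visible"):
        return "eat"
    return "wander"

behavior_routines = {
    "escape": lambda fish, percept: {"tail": 1.0, "turn": percept.get("away", 0.0)},
    "eat":    lambda fish, percept: {"tail": 0.6, "turn": percept.get("toward_food", 0.0)},
    "wander": lambda fish, percept: {"tail": 0.3, "turn": 0.0},
}

def behavior_step(fish, percept):
    """One time step: issue an intention, run its routine, and return the
    parameters handed to the motor controllers."""
    intention = intention_generator(fish, percept)
    return behavior_routines[intention](fish, percept)

print(behavior_step({"hunger": 0.9}, {"food_visible": True, "toward_food": 0.2}))
# -> {'tail': 0.6, 'turn': 0.2}
```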