
Publication


Featured research published by Norman I. Badler.


International Conference on Computer Graphics and Interactive Techniques | 1994

Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gestures generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we will use examples from an actual synthesized, fully animated conversation.


Graphical Models | 2000

Real-Time Inverse Kinematics Techniques for Anthropomorphic Limbs

Deepak Tolani; Ambarish Goswami; Norman I. Badler

In this paper we develop a set of inverse kinematics algorithms suitable for an anthropomorphic arm or leg. We use a combination of analytical and numerical methods to solve generalized inverse kinematics problems including position, orientation, and aiming constraints. Our combination of analytical and numerical methods results in faster and more reliable algorithms than conventional inverse Jacobian and optimization-based techniques. Additionally, unlike conventional numerical algorithms, our methods allow the user to interactively explore all possible solutions using an intuitive set of parameters that define the redundancy of the system.
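The analytical half of such a solver can be illustrated with the classic closed-form solution for a planar two-link limb. The sketch below is illustrative only, not the paper's full position/orientation/aiming formulation; link lengths and function names are assumptions:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link limb.

    Returns (shoulder, elbow) joint angles reaching target (x, y),
    or None if the target lies outside the reachable annulus.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # target unreachable
    # Law of cosines gives the elbow flexion angle.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to target minus the elbow's angular offset.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, useful to verify an IK solution."""
    ex = l1 * math.cos(shoulder)
    ey = l1 * math.sin(shoulder)
    return (ex + l2 * math.cos(shoulder + elbow),
            ey + l2 * math.sin(shoulder + elbow))
```

Because the elbow angle comes from `acos`, the mirrored (elbow-up/elbow-down) solution is obtained by negating it; exposing that choice is one simple example of the user-facing redundancy parameters the abstract describes.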


International Conference on Computer Graphics and Interactive Techniques | 1981

Animating facial expressions

Stephen Platt; Norman I. Badler

Recognition and simulation of actions performable on rigidly-jointed actors such as human bodies have been the subject of our research for some time. One part of an ongoing effort towards a total human movement simulator is to develop a system to perform the actions of American Sign Language (ASL). However, one of the “channels” of ASL communication, the face, presents problems which are not well handled by a rigid model. An integrated system for an internal representation and simulation of the face is presented, along with a proposed image analysis model. Results from an implementation of the internal model and simulation modules are presented, as well as comments on the future of computer controlled recognition of facial actions. We conclude with a discussion on extensions of the system, covering relations between flexible masses and rigid (jointed) ones. Applications of this theory into constrained actions, such as across rigid nonmoving sheets of bone (forehead, eyes) are also discussed.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1980

Model-based image analysis of human motion using constraint propagation

Joseph O'Rourke; Norman I. Badler

A system capable of analyzing image sequences of human motion is described. The system is structured as a feedback loop between high and low levels: predictions are made at the semantic level and verifications are sought at the image level. The domain of human motion lends itself to a model-driven analysis, and the system includes a detailed model of the human body. All information extracted from the image is interpreted through a constraint network based on the structure of the human model. A constraint propagation operator is defined and its theoretical properties outlined. An implementation of this operator is described, and results of the analysis system for short image sequences are presented.
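The flavor of a constraint network over body-part measurements can be sketched with a generic AC-3-style propagation loop over finite domains. This is a simplified stand-in for the operator defined in the paper, not its actual definition; the variable names and `near` predicate in the usage below are illustrative:

```python
from collections import deque

def propagate(domains, constraints):
    """AC-3-style constraint propagation: repeatedly prune each
    variable's domain until every remaining value has support in
    every neighbouring domain.

    `domains` maps variable -> set of candidate values.
    `constraints` is a list of (a, b, pred) arcs meaning a value va
    may stay in domains[a] only if pred(va, vb) holds for some vb.
    """
    queue = deque(constraints)
    while queue:
        a, b, pred = queue.popleft()
        pruned = {va for va in domains[a]
                  if not any(pred(va, vb) for vb in domains[b])}
        if pruned:
            domains[a] -= pruned
            # Re-examine arcs whose target variable just shrank.
            for arc in constraints:
                if arc[1] == a:
                    queue.append(arc)
    return domains
```

For example, with hypothetical head and shoulder x-position candidates constrained to lie within one unit of each other, `propagate({"head": {0, 1, 2, 3}, "shoulder": {2, 3}}, ...)` discards `head = 0` because no shoulder position supports it.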


International Conference on Computer Graphics and Interactive Techniques | 2000

The EMOTE model for effort and shape

Diane M. Chi; Monica Costa; Liwei Zhao; Norman I. Badler

Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated characters. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures.
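One way to picture an Effort-style quality applied on top of an independently defined movement is as a warp on normalized time. The sketch below is a crude illustration of that idea, not the published EMOTE model; the single Time-effort value in [-1, 1] and the exponent mapping are assumptions:

```python
def apply_time_effort(t, effort):
    """Warp normalized time t in [0, 1] by a Time-effort value in
    [-1, 1]: negative (sustained) back-loads the motion, positive
    (sudden) front-loads it. Illustrative only, not EMOTE's
    published parameterization.
    """
    assert 0.0 <= t <= 1.0 and -1.0 <= effort <= 1.0
    exponent = 2.0 ** (-2.0 * effort)   # effort=+1 -> 0.25, effort=-1 -> 4
    return t ** exponent

def sample_motion(keyframes, t, effort=0.0):
    """Linearly interpolate a scalar animation channel between two
    keyframes after warping time with the chosen Effort setting."""
    a, b = keyframes
    w = apply_time_effort(t, effort)
    return a + (b - a) * w
```

The key design point mirrored here is that the underlying keyframed movement is untouched; only its qualitative execution changes with the Effort setting.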


IEEE Intelligent Systems | 2002

Creating interactive virtual humans: some assembly required

Jonathan Gratch; Jeff Rickel; Elisabeth André; Justine Cassell; Eric Petajan; Norman I. Badler

Discusses some of the key issues that must be addressed in creating virtual humans, or androids. As a first step, we overview the issues and available tools in three key areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a virtual human is still a daunting task, but the building blocks are getting bigger and better every day.


ACM Transactions on Graphics | 1994

Inverse kinematics positioning using nonlinear programming for highly articulated figures

Jianmin Zhao; Norman I. Badler

An articulated figure is often modeled as a set of rigid segments connected with joints. Its configuration can be altered by varying the joint angles. Although it is straightforward to compute figure configurations given joint angles (forward kinematics), it is more difficult to find the joint angles for a desired configuration (inverse kinematics). Since the inverse kinematics problem is of special importance to an animator wishing to set a figure to a posture satisfying a set of positioning constraints, researchers have proposed several different approaches. However, when we try to follow these approaches in an interactive animation system where the object on which to operate is as highly articulated as a realistic human figure, they fail in either generality or performance. So, we approach this problem through nonlinear programming techniques. It has been successfully used since 1988 in the spatial constraint system within Jack, a human figure simulation system developed at the University of Pennsylvania, and proves to be satisfactorily efficient, controllable, and robust. A spatial constraint in our system involves two parts: one constraint on the figure, the end-effector, and one on the spatial environment, the goal. These two parts are dealt with separately, so that we can achieve a neat modular implementation. Constraints can be added one at a time with appropriate weights designating the importance of this constraint relative to the others and are always solved as a group. If physical limits prevent satisfaction of all the constraints, the system stops with the (possibly local) optimal solution for the given weights. Also, the rigidity of each joint angle can be controlled, which is useful for redundant degrees of freedom.
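The weighted-constraint idea can be sketched as an unconstrained objective minimized over joint angles. The snippet below uses plain finite-difference gradient descent on a planar chain as a toy stand-in for the nonlinear programming machinery in Jack; the specific weights, rest posture, and step sizes are illustrative assumptions:

```python
import math

def fk(thetas, lengths):
    """Forward kinematics of a planar serial chain (relative angles)."""
    x = y = a = 0.0
    for t, l in zip(thetas, lengths):
        a += t
        x += l * math.cos(a)
        y += l * math.sin(a)
    return x, y

def ik_objective(thetas, lengths, goal, rest, w_goal=1.0, w_rigid=0.01):
    """Weighted objective: squared end-effector/goal distance plus a
    joint-rigidity term pulling angles toward a rest posture, which
    disambiguates redundant degrees of freedom."""
    x, y = fk(thetas, lengths)
    goal_err = (x - goal[0]) ** 2 + (y - goal[1]) ** 2
    rigid = sum((t - r) ** 2 for t, r in zip(thetas, rest))
    return w_goal * goal_err + w_rigid * rigid

def solve_ik(lengths, goal, rest, steps=4000, lr=0.02, h=1e-5):
    """Minimize the objective by forward-difference gradient descent.
    A real solver would use a proper nonlinear programming method;
    this loop only illustrates the formulation."""
    thetas = list(rest)
    for _ in range(steps):
        base = ik_objective(thetas, lengths, goal, rest)
        grad = []
        for i in range(len(thetas)):
            thetas[i] += h
            grad.append((ik_objective(thetas, lengths, goal, rest) - base) / h)
            thetas[i] -= h
        thetas = [t - lr * g for t, g in zip(thetas, grad)]
    return thetas
```

As in the abstract's description, an unreachable goal simply leaves the solver at the best (possibly local) compromise for the given weights rather than failing outright.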


Cognitive Science | 1996

Generating facial expressions for speech

Catherine Pelachaud; Norman I. Badler; Mark Steedman

This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "focus," "topic," and "comment," "theme" and "rheme," or "given" and "new" information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the "topic" or "theme" of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
We would like to thank Steve Platt for his facial model and for very useful comments. We would like to thank Soetjianto and Khairol Yussof who have improved the facial model. We are also very grateful to Jean Griffin, Francisco Azuola, and Mike Edwards who developed part of the animation software. All the work related to the voice synthesizer, speech, and intonation was done by Scott Prevost. We are very grateful to him. Finally, we would like to thank all the members of the graphics laboratory, especially Cary Phillips and Jianmin Zhao, for their helpful comments.


International Conference on Computer Graphics and Interactive Techniques | 2002

Eyes alive

Sooha Park Lee; Jeremy B. Badler; Norman I. Badler

For an animated human face model to appear natural it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes exhibit conversational turn-taking and agent thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared, to evaluate whether they appear natural and effective while communicating.
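A statistically driven saccade generator can be sketched as sampling magnitudes and inter-saccade intervals from simple distributions. The exponential choices and their means below are placeholders for the paper's eye-tracking statistics, while the linear amplitude-duration relation follows the "main sequence" commonly reported in the saccade literature:

```python
import random

def generate_saccades(n, seed=0):
    """Sample n saccade events with stochastic timing, amplitude,
    and direction. Distribution choices are illustrative stand-ins,
    not the paper's fitted models."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(1 / 0.8)                    # mean 0.8 s gap
        amplitude = min(rng.expovariate(1 / 8.0), 30.0)  # degrees, capped
        duration_ms = 2.2 * amplitude + 21.0             # main sequence
        direction = rng.uniform(0.0, 360.0)              # polar direction
        events.append({"time_s": t, "amplitude_deg": amplitude,
                       "duration_ms": duration_ms,
                       "direction_deg": direction})
    return events
```

Swapping the stationary random stream for samples drawn from real eye-tracking histograms is exactly the step that, per the abstract, makes the animated gaze read as natural rather than merely jittery.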


IEEE Computer Graphics and Applications | 2006

Modeling Crowd and Trained Leader Behavior during Building Evacuation

Nuria Pelechano; Norman I. Badler

This article considers animating evacuation in complex buildings by crowds who might not know the structure's connectivity, or who find routes accidentally blocked. It takes into account simulated crowd behavior under two conditions: where agents communicate building route knowledge, and where agents take different roles such as trained personnel, leaders, and followers.
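Route knowledge in such a simulation ultimately reduces to path search over the building's connectivity graph with some doorways blocked. A minimal sketch of that substrate (illustrative only, not the article's behavior model; room names are hypothetical):

```python
from collections import deque

def bfs_route(graph, start, exit_node, blocked=frozenset()):
    """Breadth-first route to the exit, avoiding blocked doorways.

    `graph` maps room -> list of adjacent rooms; `blocked` holds
    (room, room) pairs that are impassable in either direction.
    Returns the room sequence, or None if no route exists.
    """
    prev = {start: None}
    q = deque([start])
    while q:
        room = q.popleft()
        if room == exit_node:
            path = []
            while room is not None:
                path.append(room)
                room = prev[room]
            return path[::-1]
        for nxt in graph[room]:
            edge = (room, nxt)
            if nxt not in prev and edge not in blocked \
                    and edge[::-1] not in blocked:
                prev[nxt] = room
                q.append(nxt)
    return None
```

An agent without full building knowledge would search only the subgraph it has explored or been told about, which is where communicated route knowledge and trained leaders change the outcome.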

Collaboration


Dive into Norman I. Badler's collaborations.

Top Co-Authors

Catherine Pelachaud (Centre national de la recherche scientifique)
Nuria Pelechano (Polytechnic University of Catalonia)
John P. Granieri (University of Pennsylvania)
Liwei Zhao (University of Pennsylvania)