Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William H. Bares is active.

Publication


Featured research published by William H. Bares.


ACM Multimedia | 2000

Virtual 3D camera composition from frame constraints

William H. Bares; Scott McDermott; Christina Boudreaux; Somying Thainimit

We have designed a graphical interface that enables 3D visual artists or developers of interactive 3D virtual environments to efficiently define sophisticated camera compositions by creating storyboard frames, indicating how a desired shot should appear. These storyboard frames are then automatically encoded into an extensive set of virtual camera constraints that capture the key visual composition elements of the storyboard frame. Visual composition elements include the size and position of a subject in a camera shot. A recursive heuristic constraint solver then searches the space of a given 3D virtual environment to determine camera parameter values which produce a shot closely matching the one in the given storyboard frame. The search method uses the ranges of allowable parameter values expressed by each constraint to reduce the size of the seven-degree-of-freedom search space of possible camera positions, aim direction vectors, and field of view angles. In contrast, some existing methods of automatically positioning cameras in 3D virtual environments rely on pre-defined camera placements that cannot account for unanticipated configurations and movement of objects, or use program-like scripts to define constraint-based camera shots. For example, it is more intuitive to directly manipulate an object's size in the frame rather than to edit a constraint script specifying that the object should cover 10% of the frame's area.
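The constraint-bounded search described above lends itself to a compact illustration. The following is a minimal, hypothetical sketch (not the paper's implementation, and with a simplified camera parameterization): each constraint contributes an allowable interval per camera parameter, the intervals are intersected to shrink the search space, and sampled candidates are scored against the storyboard target.

```python
import random

# Simplified camera parameterization; the paper searches seven degrees of
# freedom (position, aim direction, and field of view).
DEFAULT_RANGES = {
    "x": (-100.0, 100.0), "y": (-100.0, 100.0), "z": (-100.0, 100.0),
    "pan": (-180.0, 180.0), "tilt": (-90.0, 90.0), "fov": (10.0, 120.0),
}

def intersect_ranges(constraints):
    """Intersect the allowable interval each constraint gives per parameter."""
    ranges = dict(DEFAULT_RANGES)
    for constraint in constraints:
        for param, (lo, hi) in constraint.items():
            cur_lo, cur_hi = ranges[param]
            ranges[param] = (max(cur_lo, lo), min(cur_hi, hi))
    return ranges

def composition_error(camera, target):
    """Placeholder score: squared distance from the parameter values that would
    reproduce the storyboard frame. A real solver instead projects subjects
    into the frame and compares their size and position to the storyboard."""
    return sum((camera[p] - target[p]) ** 2 for p in target)

def find_camera(constraints, target, samples=1000):
    """Sample candidates inside the intersected ranges and keep the best."""
    ranges = intersect_ranges(constraints)
    best, best_err = None, float("inf")
    for _ in range(samples):
        cam = {p: random.uniform(lo, hi) for p, (lo, hi) in ranges.items()}
        err = composition_error(cam, target)
        if err < best_err:
            best, best_err = cam, err
    return best

# Example: two constraints jointly narrow the field of view and camera height.
best = find_camera(
    [{"fov": (30.0, 50.0)}, {"y": (1.5, 2.0), "fov": (20.0, 45.0)}],
    target={"fov": 40.0, "y": 1.7},
)
```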


Intelligent User Interfaces | 1998

Intelligent multi-shot visualization interfaces for dynamic 3D worlds

William H. Bares; James C. Lester

In next-generation virtual 3D simulation, training, and entertainment environments, intelligent visualization interfaces must respond to user-specified viewing requests so users can follow salient points of the action and monitor the relative locations of objects. Users should be able to indicate which object(s) to view, how each should be viewed, cinematic style and pace, and how to respond when a single satisfactory view is not possible. When constraints fail, weak constraints can be relaxed or multi-shot solutions can be displayed in sequence or as composite shots with simultaneous viewports. To address these issues, we have developed CONSTRAINTCAM, a real-time camera visualization interface for dynamic 3D worlds. It has been studied in an interactive testbed in which users can issue viewing goals to monitor multiple autonomous characters navigating through a virtual cityscape. CONSTRAINTCAM’s real-time performance in this testbed is encouraging.
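As an illustration of the relax-or-split behavior described above, the following minimal sketch (with invented names, not CONSTRAINTCAM's code) first attempts a single shot with all constraints, then drops weak constraints one at a time, and finally falls back to one shot per viewing goal for display in separate viewports.

```python
def solve_single_shot(constraints):
    """Stand-in for the camera solver; a real implementation would search the
    camera parameter space and return a camera, or None on failure."""
    return None  # placeholder

def plan_views(strong, weak, solve=solve_single_shot):
    # 1. Try one shot that satisfies every constraint.
    camera = solve(strong + weak)
    if camera is not None:
        return [camera]
    # 2. Relax weak constraints one at a time (least important listed last).
    for keep in range(len(weak) - 1, -1, -1):
        camera = solve(strong + weak[:keep])
        if camera is not None:
            return [camera]
    # 3. Multi-shot solution: one viewport per strong (unrelaxable) constraint.
    return [solve([constraint]) for constraint in strong]
```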


UM | 1997

Cinematographic User Models for Automated Realtime Camera Control in Dynamic 3D Environments

William H. Bares; James C. Lester

Advances in 3D graphics technology have accelerated the construction of dynamic 3D environments. Despite their promise for scientific and educational applications, much of this potential has gone unrealized because runtime camera control software lacks user-sensitivity. Current environments rely on viewpoint sequences that require direct user control or that are based primarily on the actions and geometry of the scene. Because of the complexity of rapidly changing environments, users typically cannot manipulate objects in environments while simultaneously issuing camera control commands. To address these issues, we have developed UCam, a realtime camera planner that employs cinematographic user models to render customized visualizations of dynamic 3D environments. After interviewing users to determine their preferred directorial style and pacing, UCam examines the resulting cinematographic user model to plan camera sequences whose shot vantage points and cutting rates are tailored to the user in realtime. Evaluations of UCam in a dynamic 3D testbed are encouraging.
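Purely as an illustration of the idea, a cinematographic user model could be as simple as the sketch below; the field names and mappings are assumptions, not UCam's representation.

```python
from dataclasses import dataclass

@dataclass
class CinematographicUserModel:
    directorial_style: str  # e.g. "intimate" (close-ups) vs. "epic" (wide shots)
    pacing: str             # "slow", "moderate", or "fast"

def seconds_per_shot(model: CinematographicUserModel) -> float:
    """Map the user's preferred pacing to a cutting rate (seconds per shot)."""
    return {"slow": 8.0, "moderate": 5.0, "fast": 2.5}[model.pacing]

def default_vantage(model: CinematographicUserModel) -> str:
    """Pick a default shot distance consistent with the directorial style."""
    return "close-up" if model.directorial_style == "intimate" else "wide"

# Example: a user who asked for fast, intimate coverage during the interview.
model = CinematographicUserModel(directorial_style="intimate", pacing="fast")
print(seconds_per_shot(model), default_vantage(model))  # 2.5 close-up
```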


Adaptive Agents and Multi-Agent Systems | 1999

Explanatory lifelike avatars: performing user-centered tasks in 3D learning environments

James C. Lester; Luke Zettlemoyer; Joël P. Grégoire; William H. Bares

Because of their multimodal communicative abilities and strong visual presence, animated pedagogical agents offer significant promise for 3D learning environments. We describe a new class of animated pedagogical agents, explanatory lifelike avatars, which can perform user-designed tasks in rich 3D worlds. By generating task networks for student-designed tasks, an avatar task planner constructs action specifications that it then interprets within the geometries of the 3D environment to generate navigational, manipulative, and verbal behaviors. Filmed by a narrative camera planner in the 3D world, the avatars perform students’ tasks and accompany them with running verbal explanations in realtime. The explanatory lifelike avatar framework has been implemented in a full-scale avatar for the CPU CITY learning environment, a 3D learning environment for the domain of computer architecture and systems for novices. To investigate the effectiveness of this approach, a novel four-way comparative usability study was conducted with an “agentless” world, a disembodied narrator, a mute lifelike avatar, and a full-scale explanatory avatar. Results of the study suggest that explanatory lifelike avatars hold much promise for learning environments.


ACM Multimedia | 2011

The director's lens: an intelligent assistant for virtual cinematography

Christophe Lino; Marc Christie; Roberto Ranon; William H. Bares

We present the Director's Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computing suggestions, the system considers established cinema conventions of continuity and composition along with the filmmaker's previously selected suggestions and manually crafted camera compositions, using a machine learning component that adapts shot-editing preferences from user-created camera edits. The result is a novel workflow, based on interactive collaboration of human creativity with automated intelligence, that enables efficient exploration of a wide range of cinematographic possibilities and rapid production of computer-generated animated movies.
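One way to picture the suggestion-ranking step is the hypothetical sketch below; the scoring functions and weights are illustrative stand-ins rather than the published system, with the preference term playing the role of the learned component adapted from the filmmaker's earlier edits.

```python
def continuity_score(camera, previous_shot):
    """Reward staying on the same side of the line of action as the last shot."""
    return 1.0 if camera["side_of_action"] == previous_shot["side_of_action"] else 0.0

def composition_score(camera):
    """Reward shots whose subject sits near a rule-of-thirds line."""
    return 1.0 - min(abs(camera["subject_x"] - 1 / 3), abs(camera["subject_x"] - 2 / 3))

def rank_suggestions(candidates, previous_shot, preference_score,
                     w_convention=0.7, w_preference=0.3):
    """Order candidate camera placements, best first, by blending cinema
    conventions with the learned per-user preference term."""
    def score(camera):
        convention = continuity_score(camera, previous_shot) + composition_score(camera)
        return w_convention * convention + w_preference * preference_score(camera)
    return sorted(candidates, key=score, reverse=True)
```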


Intelligent Tutoring Systems | 1998

Habitable 3D Learning Environments for Situated Learning

William H. Bares; Luke Zettlemoyer; James C. Lester

The growing emphasis on learner-centered education focuses on intrinsically motivated learning via engaging problem-solving activities. Habitable 3D learning environments, in which learners guide avatars through virtual worlds for role-based problem solving, hold great promise for situated learning. We have been investigating habitable learning environments by iteratively designing, implementing, and evaluating them. In the Situated Avatar-Based Immersive Learning (SAIL) framework for habitable 3D learning environments, learners navigate avatars through virtual worlds as they solve problems by manipulating artifacts. The SAIL framework has been used to implement CPU CITY, a 3D learning environment testbed for the domain of computer architecture. A visually compelling virtual cityscape of computer components, CPU CITY presents learners with goal advertisements that focus their attention on salient problem-solving sub-tasks. The CPU CITY testbed has produced prototypes that have been evaluated. Pilot studies suggest that habitable learning environments offer a promising new paradigm for educational applications.


Intelligent User Interfaces | 1998

Task-sensitive cinematography interfaces for interactive 3D learning environments

William H. Bares; Luke Zettlemoyer; Dennis W. Rodriguez; James C. Lester

Interactive 3D learning environments can provide rich problem-solving experiences with unparalleled visual impact. In these environments, students interactively solve problems by directing their avatars to navigate through complex worlds, transport entities from one location to another, and manipulate devices. However, realtime camera control is critical to their successful deployment. To create effective learning experiences, a virtual camera must “film” students' activities in realtime in a manner that most clearly depicts the salient aspects of the tasks they are performing. To address this problem, we have developed the cinematic task modeling framework for automated realtime task-sensitive camera control in 3D environments. Cinematic task models dynamically map the intentional structure of users’ activities to visual structures that continuously depict the most relevant actions and objects in the environment. By exploiting cinematic task models, a cinematography interface to 3D learning environments can dynamically plan camera positions, view directions, and camera movements that help users perform their tasks. To investigate the effect of the cinematic task modeling framework on student-environment interactions, we have constructed a full-scale cinematography interface and a 3D learning environment testbed. Focus group studies suggest that task-sensitive camera planning significantly improves students’ interactions with complex 3D learning environments.
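The mapping from task structure to visual structure can be pictured with the toy sketch below, whose sub-task and shot names are invented for illustration and are not taken from the paper.

```python
CINEMATIC_TASK_MODEL = {
    # sub-task              -> (shot type,      objects to keep framed)
    "navigate_to_cpu":         ("tracking",      ["avatar", "cpu"]),
    "pick_up_data_packet":     ("close_up",      ["avatar_hand", "data_packet"]),
    "insert_into_register":    ("over_shoulder", ["register", "data_packet"]),
}

def camera_directive(current_subtask):
    """Look up the shot type and focus objects for the user's current sub-task,
    falling back to a wide shot of the avatar."""
    shot, focus = CINEMATIC_TASK_MODEL.get(current_subtask, ("wide", ["avatar"]))
    return {"shot": shot, "focus": focus}

print(camera_directive("pick_up_data_packet"))
```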


Smart Graphics | 2006

A Photographic Composition Assistant for Intelligent Virtual 3D Camera Systems

William H. Bares

A human photographer can frame an image and enhance its composition by visualizing how elements in the frame could be better sized or positioned. The photographer resizes elements in the frame by changing the zoom lens or by varying his or her distance to the subject, and moves elements by panning. An intelligent virtual photographer can apply a similar process. Given an initial 3D camera view, a user or application specifies high-level composition goals such as Rule of Thirds or balance. Each objective defines either a 1D interval for image scaling or a 2D interval for translation. The 2D projections of objects are translated and scaled in the frame according to computed optima. These 2D scales and translations are then mapped to matching changes in the 3D camera: adjusting the field of view (zoom), dollying in or out to vary subject distance, and rotating the aim direction to improve the composition.
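The 2D-to-3D mapping can be illustrated with the following minimal sketch, which uses assumed names and a small-angle approximation rather than the paper's actual computation: a Rule of Thirds objective yields a 2D translation toward the nearest thirds intersection (mapped to pan/tilt), and a size objective yields a 1D scale (mapped to a field-of-view change or a dolly move).

```python
THIRDS_POINTS = [(1/3, 1/3), (2/3, 1/3), (1/3, 2/3), (2/3, 2/3)]

def thirds_translation(subject_x, subject_y):
    """2D offset (normalized frame units) moving the subject to the nearest
    rule-of-thirds intersection."""
    tx, ty = min(THIRDS_POINTS,
                 key=lambda p: (p[0] - subject_x) ** 2 + (p[1] - subject_y) ** 2)
    return tx - subject_x, ty - subject_y

def size_scale(current_height, desired_height):
    """1D scale factor that a zoom or dolly move must produce."""
    return desired_height / current_height

def to_camera_adjustments(dx, dy, scale, h_fov_deg=60.0, v_fov_deg=40.0):
    """Approximate mapping of frame-space changes to camera changes:
    translation becomes pan/tilt, scale narrows or widens the field of view."""
    pan_deg = -dx * h_fov_deg       # panning left shifts the subject right in frame
    tilt_deg = -dy * v_fov_deg
    new_h_fov_deg = h_fov_deg / scale  # enlarging the subject narrows the view
    return {"pan_deg": pan_deg, "tilt_deg": tilt_deg, "fov_deg": new_h_fov_deg}

# Example: subject centered at (0.5, 0.5), filling 20% of frame height,
# wanted at 30% and on a thirds intersection.
dx, dy = thirds_translation(0.5, 0.5)
print(to_camera_adjustments(dx, dy, size_scale(0.20, 0.30)))
```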


Intelligent User Interfaces | 2001

Generating virtual camera compositions

William H. Bares; Byungwoo Kim

This paper describes work in progress to automatically generate camera shots featuring the composition techniques of expert photographers. This effort builds upon an automated camera planner that computes a shot satisfying a given set of constraints. In this prior work, users manually specify the set of constraints and the numeric parameters for each; a typical two-subject shot can be described by eleven constraints involving sixty numeric parameters. This work aims to develop a high-level interface that automatically generates such constraint sets. Photographic composition techniques such as composition in depth can be realized by automatically constructing appropriate sets of camera constraints and then submitting them to a constraint solver.
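As a hypothetical illustration of such a high-level interface, the sketch below expands one named technique, composition in depth, into a constraint set that a solver (not shown) could then satisfy; the constraint names and numeric values are assumptions, not the system's actual vocabulary.

```python
def composition_in_depth(foreground_subject, background_subject):
    """Build a constraint set for a two-subject 'composition in depth' shot:
    one subject framed large and near, the other small and far."""
    return [
        {"type": "projected_height", "subject": foreground_subject, "min": 0.30, "max": 0.50},
        {"type": "projected_height", "subject": background_subject, "min": 0.05, "max": 0.15},
        {"type": "frame_region", "subject": foreground_subject, "region": "left_third"},
        {"type": "frame_region", "subject": background_subject, "region": "right_third"},
        {"type": "depth_order", "nearer": foreground_subject, "farther": background_subject},
        {"type": "occlusion_limit", "subject": background_subject, "max_fraction": 0.10},
    ]

constraints = composition_in_depth("guide_avatar", "cpu_tower")
print(len(constraints), "constraints generated")
```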


Intelligent User Interfaces | 2002

Storyboard frame editing for cinematic composition

Scott McDermott; Junwei Li; William H. Bares

We are developing an intelligent virtual cinematography interface that can be used to compose sequences of shots and automatically evaluate individual shots and transitions, reporting possible deviations from widely accepted cinematic composition guidelines. Authors compose shots using a Storyboard Frame Editor to place subject objects as they should appear in the frame. Then the Storyboard Sequencer is used to design shot-to-shot transitions.
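A guideline report of the kind mentioned above can be pictured with the toy check below, which uses assumed shot attributes and flags transitions that may read as jump cuts; it is an illustration, not the system's actual rule set.

```python
def check_transition(shot_a, shot_b):
    """Return warnings for a cut whose change in camera angle or subject size
    is too small to register as a deliberate transition."""
    warnings = []
    if abs(shot_a["camera_angle_deg"] - shot_b["camera_angle_deg"]) < 30:
        warnings.append("camera angle changes by less than 30 degrees across the cut")
    if abs(shot_a["subject_height"] - shot_b["subject_height"]) < 0.05:
        warnings.append("subject size is nearly unchanged across the cut")
    return warnings

print(check_transition({"camera_angle_deg": 10, "subject_height": 0.40},
                       {"camera_angle_deg": 25, "subject_height": 0.42}))
```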

Collaboration


Dive into William H. Bares's collaborations.

Top Co-Authors

James C. Lester (North Carolina State University)

Joël P. Grégoire (North Carolina State University)

Scott McDermott (University of Louisiana at Lafayette)

Somying Thainimit (University of Louisiana at Lafayette)

Arnav Jhala (North Carolina State University)