Randall W. Hill
University of Southern California
Publication
Featured research published by Randall W. Hill.
adaptive agents and multi-agent systems | 2001
Randall W. Hill; Jonathan Gratch; Walter L. Johnson; C. Kyriakakis; Catherine LaBore; Richard Lindheim; Stacy Marsella; David Miraglia; B. Moore; Jacquelyn Ford Morie; Jeff Rickel; Marcus Thiebaux; L. Tuch; R. Whitney; Jay Douglas; William R. Swartout
We describe an initial prototype of a holodeck-like environment that we have created for the Mission Rehearsal Exercise Project. The goal of the project is to create an experience learning system in which participants are immersed in an environment where they can encounter the sights, sounds, and circumstances of real-world scenarios. Virtual humans act as characters and coaches in an interactive story with pedagogical goals.
Distributed Artificial Intelligence | 1989
Les Gasser; Nicholas Rouquette; Randall W. Hill; John Lieb
This paper reports on the state of our research toward a general coordination mechanism for distributed intelligent systems. In our view, a coordination framework or organization is a particular set of settled and unsettled questions about belief and action that agents have about other agents. Organizational change means opening and/or settling some different set of questions, giving individual agents new problems to solve and, more importantly, different assumptions about the beliefs and actions of other agents. To test these ideas we are developing a testbed called the Intelligent Coordination Experiment (ICE) in which we implement our coordination mechanisms.
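As a rough illustration of the idea (not the ICE testbed itself), the "organization as settled and unsettled questions" view could be represented as in the sketch below; all names and fields here are assumptions made for the example.

```python
# Hypothetical sketch only: an "organization" as the set of settled and unsettled
# questions one agent holds about the beliefs and actions of other agents.
from dataclasses import dataclass, field

@dataclass
class Question:
    about_agent: str       # which other agent the question concerns
    topic: str             # e.g. "will-cover-sector-3"
    settled: bool = False  # settled questions act as working assumptions
    answer: object = None  # the assumed belief or action once settled

@dataclass
class Organization:
    questions: list = field(default_factory=list)

    def settle(self, about_agent, topic, answer):
        """Settling a question turns it into an assumption other reasoning can rely on."""
        for q in self.questions:
            if q.about_agent == about_agent and q.topic == topic:
                q.settled, q.answer = True, answer
                return q
        q = Question(about_agent, topic, settled=True, answer=answer)
        self.questions.append(q)
        return q

    def reorganize(self, to_unsettle):
        """Organizational change: reopen some settled questions, posing new problems to solve."""
        for q in self.questions:
            if (q.about_agent, q.topic) in to_unsettle:
                q.settled, q.answer = False, None
```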
adaptive agents and multi-agent systems | 2000
Weixiong Zhang; Randall W. Hill
Situation awareness and assessment are fundamental components of rational virtual humans. They are central parts of models of virtual humans and critical elements of efficient reasoning and planning systems. In this paper, we describe our efforts to develop situation awareness in autonomous, synthetic virtual pilots in a military domain. After briefly describing the motivation for this research, we present an agent architecture that integrates perception, reasoning with situation awareness, and action. We describe a representation of situations, methods for situation assessment, and applications of situation awareness to information collection and focused attention. Quantitative results from our experiments show that focused attention based on situation awareness significantly reduces the perceptual load of virtual pilots, cutting their response time by more than 50 percent.
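For illustration, a perceive-assess-act loop with situation-based focused attention might look like the following sketch; the template matching and data layout are assumptions made for the example, not the architecture described in the paper.

```python
# Illustrative sketch (not the authors' implementation): the assessed situation
# narrows perception to the entity types that are currently relevant.
def assess_situation(cues, situation_templates):
    """Match observed cues against known situation templates; return the best match."""
    scored = [(sum(1 for c in s["cues"] if c in cues), s) for s in situation_templates]
    return max(scored, key=lambda x: x[0])[1]

def focused_perception(sensed_entities, situation):
    """Attend only to entity types the current situation marks as relevant."""
    relevant = situation["relevant_types"]
    return [e for e in sensed_entities if e["type"] in relevant]

def agent_step(sensor, situation_templates, current_situation, act):
    entities = sensor()                                    # raw sensing: everything in range
    attended = focused_perception(entities, current_situation)
    cues = {e["type"] for e in attended}
    new_situation = assess_situation(cues, situation_templates)
    act(new_situation, attended)                           # plan and act against the assessed situation
    return new_situation
```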
intelligent virtual agents | 2005
Youngjun Kim; Randall W. Hill
An important characteristic of a virtual human is the ability to direct its perceptual attention to entities and areas in a virtual environment in a manner that appears believable and serves a functional purpose. In this paper, we describe a perceptual attention model that mediates the top-down and bottom-up attention processes of a virtual human so that it can efficiently select important information with limited sensory capability in complex virtual environments.
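One minimal way to mediate the two processes is to combine a task-driven relevance score with a stimulus-driven salience score; the weighting and scoring below are assumptions made for illustration, not the published model.

```python
# A minimal sketch, assuming each percept carries a type and a bottom-up salience
# value, and each goal carries a relevance weight for the types it cares about.
def attention_target(percepts, task_goals, w_top=0.6, w_bottom=0.4):
    """Pick the percept with the highest combined task relevance and stimulus salience."""
    def score(p):
        top_down = max((g["relevance"] for g in task_goals if g["type"] == p["type"]), default=0.0)
        bottom_up = p["salience"]          # e.g. motion, loudness, sudden onset
        return w_top * top_down + w_bottom * bottom_up
    return max(percepts, key=score, default=None)
```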
adaptive agents and multi-agent systems | 2002
Randall W. Hill; Youngjun Kim; Jonathan Gratch
This paper describes a method for making short-term predictions about the movement of mobile agents in complex terrain. Virtual humans need this ability in order to shift their visual attention between dynamic objects: predicting where an object will be located a few seconds in the future facilitates the visual reacquisition of the target object. Our method takes environmental cues into account when making predictions, and it also indicates how long a prediction remains valid, which varies depending on the context. We implemented this prediction technique in a virtual pilot that flies a helicopter in a synthetic environment.
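A simplified sketch of this kind of predictor is shown below; the constant-velocity extrapolation and the choice-point heuristic for the validity window are assumptions made for illustration, not the method from the paper.

```python
# Illustrative sketch only: dead-reckoning prediction with a context-dependent
# validity window that shrinks near terrain choice points (e.g. intersections).
import math

def predict_position(pos, velocity, dt):
    """Constant-velocity extrapolation of an observed mover."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def prediction_validity(pos, speed, terrain_cues):
    """Seconds the prediction can be trusted: shorter when a choice point is near."""
    if speed < 1e-6:
        return float("inf")                     # a stationary target stays predictable
    nearest_choice_point = min(
        (math.dist(pos, c) for c in terrain_cues.get("choice_points", [])),
        default=float("inf"),
    )
    return min(nearest_choice_point / speed, 5.0)   # cap at a few seconds of lookahead
```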
9th Computing in Aerospace Conference | 1993
Kristina Fayyad; Randall W. Hill; E. Wyatt
The usefulness of a knowledge-based system is highly dependent upon the implementation of the knowledge base that drives it. The knowledge acquisition and engineering process is a recognized bottleneck in the development and deployment of knowledge-based systems. This paper presents a case study of the knowledge engineering process employed to support the Link Monitor & Control Operator Assistant (LMCOA). The LMCOA is a prototype system which automates the configuration, calibration, test, and operation (referred to as precalibration) of the communications, data processing, metric data, antenna, and other equipment used to support space-ground communications with deep space spacecraft in NASA's Deep Space Network (DSN). The primary knowledge base in the LMCOA is the Temporal Dependency Network (TDN), a directed graph that provides a procedural representation of the precalibration operation. The TDN incorporates precedence, temporal, and state constraints and uses several supporting knowledge bases and databases. The paper provides a brief background on the DSN and describes the evolution of the TDN and supporting knowledge bases, the process used for knowledge engineering, and an analysis of the successes and problems of the knowledge engineering effort.
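Based only on the description above, a TDN step and its executability check might be sketched as follows; the field names and the check are assumptions for illustration, not the LMCOA implementation.

```python
# Rough sketch of one node in a Temporal Dependency Network: a directed-graph step
# with precedence, temporal, and state constraints (field names are assumptions).
from dataclasses import dataclass, field

@dataclass
class TDNStep:
    name: str
    predecessors: list = field(default_factory=list)    # precedence constraints
    earliest: float = 0.0                                # temporal window (seconds into the pass)
    latest: float = float("inf")
    required_states: dict = field(default_factory=dict) # e.g. {"antenna": "on-point"}

def runnable(step, completed, now, subsystem_state):
    """A step may run once its predecessors are done, its time window is open,
    and the required subsystem states hold."""
    return (
        all(p in completed for p in step.predecessors)
        and step.earliest <= now <= step.latest
        and all(subsystem_state.get(k) == v for k, v in step.required_states.items())
    )
```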
adaptive agents and multi-agent systems | 2002
Randall W. Hill; Changhee Han; Michael van Lent
Perceptually Driven Cognitive Mapping of Urban Environments. Randall W. Hill, Jr., Changhee Han, and Michael van Lent, USC Institute for Creative Technologies, 13274 Fiji Way, Suite 600, Marina del Rey, CA 90292-7008.
adaptive agents and multi-agent systems | 2005
Youngjun Kim; Randall W. Hill; David R. Traum
In this paper, we present a computational model of dynamic perceptual attention for virtual humans. The computational models of perceptual attention that we surveyed fell into one of two camps: top-down and bottom-up. Biologically inspired computational models [2] typically focus on the bottom-up aspects of attention, while most virtual humans [1,3,7] implement a top-down form of attention. Bottom-up attention models consider only sensory information, without taking into account saliency based on tasks or goals. As a result, the outcome of a purely bottom-up model will not consistently match the behavior of real humans in certain situations. Modeling perceptual attention as a purely top-down process, however, is also not sufficient for implementing a virtual human. A purely top-down model does not take into account the fact that virtual humans need to react to perceptual stimuli vying for attention. Top-down systems typically handle this in an ad hoc manner by encoding special rules to catch certain conditions in the environment. The problem with this approach is that it does not provide a principled way of integrating the ever-present bottom-up perceptual stimuli with top-down control of attention. Our model extends the prior model [7] with perceptual resolution based on psychological theories of human perception [4]. It allows virtual humans to dynamically interact with objects and other individuals, balancing the demands of goal-directed behavior with those of attending to novel stimuli. The model has been implemented and tested within the MRE Project [5].
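The perceptual resolution idea can be illustrated with a toy sketch in which percepts farther from the current focus of attention are delivered at coarser detail; the thresholds and detail levels below are invented for the example and are not taken from the model described above.

```python
# Toy sketch of perceptual resolution: detail degrades with angular distance from
# the current focus of attention (thresholds are assumptions for illustration).
def perceive_with_resolution(objects, focus_direction, gaze_direction_of):
    percepts = []
    for obj in objects:
        eccentricity = abs(gaze_direction_of(obj) - focus_direction)   # angular offset, degrees
        if eccentricity < 10:
            detail = "full"        # identity, pose, activity
        elif eccentricity < 45:
            detail = "coarse"      # category and rough location only
        else:
            detail = "minimal"     # something is there; may trigger a shift of attention
        percepts.append({"object": obj, "detail": detail})
    return percepts
```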
Archive | 2005
William R. Swartout; Jonathan Gratch; Randall W. Hill; Eduard H. Hovy; Richard Lindheim; Stacy Marsella; Jeff Rickel; David R. Traum
The Institute for Creative Technologies was created at the University of Southern California with the goal of bringing together researchers in simulation technology to collaborate with people from the entertainment industry. The idea was that much more compelling simulations could be developed if researchers who understood state-of-the-art simulation technology worked together with writers and directors who knew how to create compelling stories and characters. This paper presents our first major effort to realize that vision, the Mission Rehearsal Exercise Project, which confronts a soldier trainee with the kinds of dilemmas he might reasonably encounter in a peacekeeping operation. The trainee is immersed in a synthetic world and interacts with virtual humans: artificially intelligent and graphically embodied conversational agents that understand and generate natural language, reason about world events, and respond appropriately to the trainee's actions or commands. This project is an ambitious exercise in integration, both in the sense of integrating technology with entertainment industry content and in that we have joined a number of component technologies that had not been integrated before. This integration has not only raised new research issues, but it has also suggested some new approaches to difficult problems. In this paper we describe the Mission Rehearsal Exercise system and the insights gained through this large-scale integration effort.
robot soccer world cup | 1999
Stacy Marsella; Jafar Adibi; Yaser Al-Onaizan; Ali Erdem; Randall W. Hill; Gal A. Kaminka; Zhun Qiu; Milind Tambe
The RoboCup research initiative has established synthetic and robotic soccer as testbeds for pursuing research challenges in Artificial Intelligence and robotics. This extended abstract focuses on teamwork and learning, two of the multi-agent research challenges highlighted in RoboCup. To address the challenge of teamwork, we discuss the use of a domain-independent explicit model of teamwork and an explicit representation of team plans and goals. We also discuss the application of agent learning in RoboCup.