Publication


Featured research published by Jean Oh.


AI Magazine | 2002

Electric Elves: Agent Technology for Supporting Human Organizations

Hans Chalupsky; Yolanda Gil; Craig A. Knoblock; Kristina Lerman; Jean Oh; David V. Pynadath; Thomas A. Russ; Milind Tambe

The operation of a human organization requires dozens of everyday tasks to ensure coherence in organizational activities, to monitor the status of such activities, to gather information relevant to the organization, to keep everyone in the organization informed, etc. Teams of software agents can aid humans in accomplishing these tasks, facilitating the organization’s coherent functioning and rapid response to crises, while reducing the burden on humans. Based on this vision, this paper reports on Electric Elves, a system that has been operational, 24/7, at our research institute since June 1, 2000. Tied to individual user workstations, fax machines, voice, and mobile devices such as cell phones and palm pilots, Electric Elves has assisted us in routine tasks such as rescheduling meetings, selecting presenters for research meetings, tracking people’s locations, and organizing lunch meetings. We discuss the underlying AI technologies that led to the success of Electric Elves, including technologies devoted to agent-human interactions, agent coordination, accessing multiple heterogeneous information sources, dynamic assignment of organizational tasks, and deriving information about organization members. We also report the results of deploying Electric Elves in our own research organization.


International World Wide Web Conference | 2001

Mixed-initiative, multi-source information assistants

Craig A. Knoblock; Steven Minton; José Luis Ambite; Maria Muslea; Jean Oh; Martin R. Frank

While the information resources on the Web are vast, the sources are often hard to find, painful to use, and difficult to integrate. We have developed the Heracles framework for building Web-based information assistants. This framework provides the infrastructure to rapidly construct new applications that extract information from multiple Web sources and interactively integrate the data using a dynamic, hierarchical constraint network. This paper describes the core technologies that comprise the framework, including information extraction, hierarchical template representation, and constraint propagation. In addition, we present an application of this framework, the Travel Assistant, which is an interactive travel planning system. We also briefly describe our experience using the same framework to build a second application, the WorldInfo Assistant, which extracts and integrates geographic data about countries throughout the world. We believe these types of information assistants provide a significant step forward in fully exploiting the information available on the Internet.
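
The abstract only names the constraint-propagation idea; as a rough sketch of how a change in one slot can ripple through dependent slots in a constraint network (the slot names and the "constraint as a Python function" encoding below are illustrative assumptions, not the Heracles API):

```python
# Minimal sketch of forward propagation over dependent slots, in the spirit of
# the dynamic constraint network described above. Hypothetical names, not the
# actual Heracles implementation.

class Slot:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.dependents = []  # list of (constraint_fn, target_slot) pairs

def add_constraint(source, fn, target):
    """Recompute `target` from `source` via `fn` whenever `source` changes."""
    source.dependents.append((fn, target))

def set_value(slot, value):
    slot.value = value
    for fn, target in slot.dependents:      # propagate downstream
        set_value(target, fn(value))

# Example: choosing a destination city re-derives a dependent airport query.
city = Slot("destination_city")
airport = Slot("nearest_airport")
add_constraint(city, lambda c: f"lookup airport near {c}", airport)
set_value(city, "Pittsburgh")
print(airport.value)  # -> "lookup airport near Pittsburgh"
```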


International Symposium on Experimental Robotics | 2016

Inferring Maps and Behaviors from Natural Language Instructions

Felix Duvallet; Matthew R. Walter; Thomas M. Howard; Sachithra Hemachandra; Jean Oh; Seth J. Teller; Nicholas Roy; Anthony Stentz

Natural language provides a flexible, intuitive way for people to command robots, which is becoming increasingly important as robots transition to working alongside people in our homes and workplaces. To follow instructions in unknown environments, robots will be expected to reason about parts of the environment that are described in the instruction but about which the robot has no direct knowledge. However, most existing approaches to natural language understanding require that the robot’s environment be known a priori. This paper proposes a probabilistic framework that enables robots to follow commands given in natural language, without any prior knowledge of the environment. The novelty lies in exploiting environment information implicit in the instruction, thereby treating language as a type of sensor that is used to formulate a prior distribution over the unknown parts of the environment. The algorithm then uses this learned distribution to infer a sequence of actions that are most consistent with the command, updating the belief as the robot gathers more metric information. We evaluate our approach through simulation as well as experiments on two mobile robots; our results demonstrate the algorithm’s ability to follow navigation commands with performance comparable to that of a fully known environment.
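
To make the "language as a sensor" idea concrete, the following is a minimal sketch, under assumed hypothesis and weight encodings that are not the paper's model: mentioning a landmark in the instruction reweights environment hypotheses that contain it, and later metric observations update that belief with a standard Bayesian step.

```python
# Sketch: an instruction that mentions a landmark raises the prior probability
# of world hypotheses containing it; later observations update the belief.
# The hypothesis encoding and the boost factor are illustrative assumptions.

def language_prior(hypotheses, mentioned_landmarks, boost=4.0):
    """Reweight environment hypotheses by how many mentioned landmarks they contain."""
    weights = []
    for h in hypotheses:
        hits = sum(1 for lm in mentioned_landmarks if lm in h["landmarks"])
        weights.append(boost ** hits)
    z = sum(weights)
    return [w / z for w in weights]

def bayes_update(prior, likelihoods):
    """Standard Bayesian belief update with per-hypothesis observation likelihoods."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

hypotheses = [
    {"landmarks": {"hydrant", "tree"}},
    {"landmarks": {"tree"}},
    {"landmarks": set()},
]
prior = language_prior(hypotheses, {"hydrant"})    # instruction mentions a hydrant
posterior = bayes_update(prior, [0.6, 0.3, 0.1])   # later sensor evidence
print(posterior)
```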


National Conference on Artificial Intelligence | 2004

CMRadar: a personal assistant agent for calendar management

Pragnesh Jay Modi; Manuela M. Veloso; Stephen F. Smith; Jean Oh

Personal assistant agents have long promised to automate routine everyday tasks in order to reduce the cognitive load on humans. One such routine task is the management of a user’s calendar. In this paper, we describe CMRadar, a calendar management system that is a significant step towards achieving the enduring vision of assistant agents. CMRadar is an implemented system with wide-ranging capabilities for supporting email exchange, multiagent negotiation, and schedule optimization based on user preferences. The motivation is to develop an end-to-end system for use by real users to obtain data to facilitate learning. Having now completed an initial prototype, which we believe is the first end-to-end agent for calendar management, we present as contributions our architecture design, the communication language used to tie system components together, and initial simulation experiments that isolate negotiation cost as a key factor to be logged and predicted in order to improve performance.
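
The abstract mentions multiagent negotiation without giving the protocol; a toy propose/respond exchange conveys the flavor. The message fields and accept/counter rule below are invented for illustration and are not CMRadar's actual communication language.

```python
# Toy meeting-negotiation exchange in the spirit of the system described above.
# Message fields and the accept/counter rule are illustrative assumptions.

def respond(busy_slots, proposal):
    """Accept a proposed hour if free, otherwise counter with the next free hour."""
    if proposal not in busy_slots:
        return {"type": "accept", "slot": proposal}
    counter = next(s for s in range(9, 18) if s not in busy_slots)
    return {"type": "counter", "slot": counter}

initiator_proposal = 10                              # propose a 10:00 meeting
recipient_busy = {10, 11}                            # recipient is booked 10:00-12:00
print(respond(recipient_busy, initiator_proposal))   # -> counter with 9:00
```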


Proceedings of SPIE | 2012

The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates

Scott Ososky; David Schuster; Florian Jentsch; Stephen M. Fiore; Randall Shumaker; Christian Lebiere; Unmesh Kurup; Jean Oh; Anthony Stentz

Current ground robots are largely employed via tele-operation and provide their operators with useful tools to extend reach, improve sensing, and avoid dangers. To move from robots that are useful as tools to truly synergistic human-robot teaming, however, will require not only greater technical capabilities among robots, but also a better understanding of the ways in which the principles of teamwork can be applied from exclusively human teams to mixed teams of humans and robots. In this respect, a core characteristic that enables successful human teams to coordinate shared tasks is their ability to create, maintain, and act on a shared understanding of the world and the roles of the team and its members in it. The team performance literature clearly points towards two important cornerstones for shared understanding among team members: mental models and situation awareness. These constructs have also been investigated as products of teams; at the team level, they become shared mental models and shared situation awareness. Consequently, we are studying how these two constructs can be measured and instantiated in human-robot teams. In this paper, we report results from three related efforts that are investigating process and performance outcomes for human-robot teams. Our investigations include: (a) how human mental models of tasks and teams change depending on whether a teammate is human, a service animal, or an advanced automated system; (b) how computer modeling can lead to mental models being instantiated and used in robots; and (c) how we can simulate the interactions between human and future robotic teammates on the basis of changes in shared mental models and situation assessment.


Systems, Man and Cybernetics | 2006

Scheduling with Uncertain Resources: Search for a Near-Optimal Solution

Eugene Fink; P. Matthew Jennings; Ulas Bardak; Jean Oh; Stephen F. Smith; Jaime G. Carbonell

We describe a system for scheduling a conference based on incomplete information about available resources and scheduling constraints. We explain the representation of uncertain knowledge, describe a local-search algorithm for generating near-optimal schedules, and give empirical results of automated scheduling under uncertainty.
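
The abstract names a local-search algorithm over uncertain resources; a minimal hill-climbing sketch that scores candidate schedules by expected utility over sampled scenarios illustrates the general idea. The scenario sampler, move operator, and scoring below are illustrative assumptions, not the system's actual algorithm.

```python
# Minimal local-search sketch for scheduling under uncertainty: candidates are
# scored by expected utility over sampled resource scenarios, and a greedy
# uphill move is taken when a neighbor scores better. Illustrative only.

import random

def expected_utility(schedule, sample_scenario, utility, n_samples=100):
    return sum(utility(schedule, sample_scenario()) for _ in range(n_samples)) / n_samples

def local_search(initial, neighbors, sample_scenario, utility, iters=200):
    best = initial
    best_score = expected_utility(best, sample_scenario, utility)
    for _ in range(iters):
        cand = random.choice(neighbors(best))
        score = expected_utility(cand, sample_scenario, utility)
        if score > best_score:                 # greedy uphill move
            best, best_score = cand, score
    return best, best_score

# Toy usage: pick a start hour for one session when attendee preferences are uncertain.
def sample_scenario():
    return random.gauss(14, 2)                 # uncertain preferred hour

def utility(schedule, scenario):
    return -abs(schedule - scenario)           # closer to the preferred hour is better

def neighbors(schedule):
    return [schedule - 1, schedule + 1]

print(local_search(9, neighbors, sample_scenario, utility))
```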


PATAT'04: Proceedings of the 5th International Conference on Practice and Theory of Automated Timetabling | 2004

Learning user preferences in distributed calendar scheduling

Jean Oh; Stephen F. Smith

Within the field of software agents, there has been increasing interest in recent years in automating the process of calendar scheduling. Calendar (or meeting) scheduling is an example of a timetabling domain that is most naturally formulated and solved as a continuous, distributed problem. Fundamentally, it involves reconciling a given user’s scheduling preferences with those of the other users that the user needs to meet with, and hence techniques for eliciting and reasoning about a user’s preferences are crucial to finding good solutions. In this paper, we present work aimed at learning a user’s time preferences for scheduling meetings. We adopt a passive machine learning approach that observes the user engaging in a series of meeting scheduling episodes with other meeting participants and infers the user’s true preference model from the accumulated data. After describing our basic modeling assumptions and approach to learning user preferences, we report the results obtained in an initial set of proof-of-principle experiments. In these experiments, we use a set of automated CMRadar calendar scheduling agents to simulate meeting scheduling among a set of users, and use information generated during these interactions as training data for each user’s learner. The learned model of a given user is then evaluated with respect to how well it satisfies that user’s true preference model on a separate set of meeting scheduling tasks. The results show that each learned model is statistically indistinguishable in performance from the true model with strong confidence, and that the learned model is also significantly better than a random-choice model.
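
As a minimal sketch of passive preference learning in this setting, one could estimate a user's time-of-day preference from the meeting times they were observed to accept, and then rank candidate slots by that estimate. The hour-bucket model below is an illustrative assumption, not the paper's learner.

```python
# Sketch: learn a time-of-day preference from observed accepted meeting hours,
# then rank candidate slots by the learned preference. Illustrative only.

from collections import Counter

def learn_time_preference(accepted_hours):
    """Empirical distribution over business hours (9:00-17:00) from past acceptances."""
    counts = Counter(accepted_hours)
    total = sum(counts.values())
    return {hour: counts[hour] / total for hour in range(9, 18)}

def rank_slots(preference, candidate_hours):
    """Order candidate meeting times by the learned preference."""
    return sorted(candidate_hours, key=lambda h: preference.get(h, 0.0), reverse=True)

observed = [10, 10, 14, 10, 15, 14, 10]      # hours the user accepted in past episodes
pref = learn_time_preference(observed)
print(rank_slots(pref, [9, 10, 14, 16]))     # -> [10, 14, 9, 16]; zero-probability hours keep input order
```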


International Joint Conference on Artificial Intelligence | 2011

An agent architecture for prognostic reasoning assistance

Jean Oh; Felipe Meneguzzi; Katia P. Sycara; Timothy J. Norman

In this paper we describe a software assistant agent that can proactively assist human users situated in a time-constrained environment to perform normative reasoning (reasoning about prohibitions and obligations) so that the user can focus on her planning objectives. In order to provide proactive assistance, the agent must be able to 1) recognize the user’s planned activities, 2) reason about potential needs of assistance associated with those predicted activities, and 3) plan to provide appropriate assistance suitable for newly identified user needs. To address these specific requirements, we develop an agent architecture that integrates user intention recognition, normative reasoning over a user’s intention, and planning, execution and replanning for assistive actions. This paper presents the agent architecture and discusses practical applications of this approach.


International Conference on Robotics and Automation | 2015

Grounding spatial relations for outdoor robot navigation

Abdeslam Boularias; Felix Duvallet; Jean Oh; Anthony Stentz

We propose a language-driven navigation approach for commanding mobile robots in outdoor environments. We consider unknown environments that contain previously unseen objects. The proposed approach aims at making interactions in human-robot teams natural. Robots receive commands in natural language from human teammates, such as “Navigate around the building to the car left of the fire hydrant and near the tree”. A robot first needs to classify its surrounding objects into categories, using images obtained from its sensors. The result of this classification is a map of the environment, where each object is given a list of semantic labels, such as “tree” and “car”, with varying degrees of confidence. Then, the robot needs to ground the nouns in the command. Grounding, the main focus of this paper, is the mapping of each noun in the command to a physical object in the environment. We use a probabilistic model for interpreting spatial relations, such as “left of” and “near”. The model is learned from examples provided by humans. For each noun in the command, a distribution over the objects in the environment is computed by combining spatial constraints with a prior given by the semantic classifiers’ confidence values. The robot also needs to ground the navigation mode specified in the command, such as “navigate quickly” and “navigate covertly”, as a cost map. The cost map is also learned from examples, using Inverse Optimal Control (IOC). The cost map and the grounded goal are used to generate a path for the robot. This approach is evaluated on a robot in a real-world environment. Our experiments clearly show that the proposed approach is efficient for commanding outdoor robots.
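
The noun-grounding step can be sketched as multiplying a classifier-confidence prior by a spatial-relation likelihood and normalizing over candidate objects. The relation scorer and object encoding below are illustrative assumptions, not the paper's learned model.

```python
# Sketch of noun grounding: combine a prior from semantic-classifier confidences
# with a spatial-relation likelihood, then normalize over candidate objects.
# Hypothetical encodings, not the paper's implementation.

def ground_noun(objects, noun, relation_score):
    """Return a normalized distribution over candidate objects for one noun.

    objects: dicts with classifier confidences and a position, e.g.
             {"id": "car_2", "labels": {"car": 0.8}, "pos": (4.0, 1.5)}
    relation_score: function object -> likelihood under the spatial relations
                    mentioned in the command (e.g. "near the tree").
    """
    scores = []
    for obj in objects:
        prior = obj["labels"].get(noun, 0.0)        # classifier confidence as prior
        scores.append(prior * relation_score(obj))  # fuse with spatial likelihood
    z = sum(scores) or 1.0
    return {obj["id"]: s / z for obj, s in zip(objects, scores)}

objects = [
    {"id": "car_1", "labels": {"car": 0.9}, "pos": (2.0, 0.0)},
    {"id": "car_2", "labels": {"car": 0.7}, "pos": (8.0, 0.0)},
]
# Toy spatial model: "near the tree" at x=7, likelihood decays with distance.
near_tree = lambda obj: 1.0 / (1.0 + abs(obj["pos"][0] - 7.0))
print(ground_noun(objects, "car", near_tree))   # car_2 wins despite the lower prior
```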


Engineering Applications of Artificial Intelligence | 2013

Prognostic normative reasoning

Jean Oh; Felipe Meneguzzi; Katia P. Sycara; Timothy J. Norman

Human users planning for multiple objectives in complex environments are subjected to high levels of cognitive workload, which can severely impair the quality of the plans created. This paper describes a software agent that can proactively assist cognitively overloaded users by providing normative reasoning about prohibitions and obligations so that the user can focus on her primary objectives. In order to provide proactive assistance, we develop the notion of prognostic normative reasoning (PNR), which consists of the following steps: (1) recognizing the user’s planned activities, (2) reasoning about norms to evaluate those predicted activities, and (3) providing the necessary assistance so that the user’s activities are consistent with the norms. The idea of PNR integrates various AI techniques, namely user intention recognition, normative reasoning over a user’s intention, and planning, execution and replanning for assistive actions. In this paper, we describe an agent architecture for PNR and discuss practical applications.
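
The three PNR steps enumerated above can be read as a pipeline: predict the user's plan, check the predicted steps against norms, and queue assistance for any predicted violation. The plan and norm encodings below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the PNR steps as a pipeline. Placeholder encodings only.

def prognostic_assist(predict_plan, norms, plan_assistance, observations):
    plan = predict_plan(observations)                      # 1) intention recognition
    violations = [step for step in plan                    # 2) normative reasoning
                  if any(norm(step) == "violated" for norm in norms)]
    return [plan_assistance(step) for step in violations]  # 3) plan assistive actions

# Toy usage with invented placeholders:
norms = [lambda step: "violated" if step == "enter_restricted_zone" else "ok"]
predict = lambda obs: ["move_to_bridge", "enter_restricted_zone"]
assist = lambda step: f"request authorization before '{step}'"
print(prognostic_assist(predict, norms, assist, observations=[]))
```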

Collaboration


Dive into Jean Oh's collaborations.

Top Co-Authors

Katia P. Sycara, Carnegie Mellon University
Felipe Meneguzzi, Pontifícia Universidade Católica do Rio Grande do Sul
Anthony Stentz, Carnegie Mellon University
Stephen F. Smith, Carnegie Mellon University
Craig A. Knoblock, University of Southern California
Arne Suppé, Carnegie Mellon University
Chip Diberardino, General Dynamics Land Systems
David V. Pynadath, University of Southern California
Felix Duvallet, Carnegie Mellon University