Melinda T. Gervasio
SRI International
Publications
Featured research published by Melinda T. Gervasio.
AI Magazine | 2007
Karen L. Myers; Pauline M. Berry; Jim Blythe; Ken Conley; Melinda T. Gervasio; Deborah L. McGuinness; David N. Morley; Avi Pfeffer; Martha E. Pollack; Milind Tambe
We describe an intelligent personal assistant that has been developed to aid a busy knowledge worker in managing time commitments and performing tasks. The design of the system was motivated by the complementary objectives of (1) relieving the user of routine tasks, thus allowing her to focus on tasks that critically require human problem-solving skills, and (2) intervening in situations where cognitive overload leads to oversights or mistakes by the user. The system draws on a diverse set of AI technologies that are linked within a Belief-Desire-Intention (BDI) agent system. Although the system provides a number of automated functions, the overall framework is highly user-centric in its support for human needs, responsiveness to human inputs, and adaptivity to user working style and preferences.
Intelligent User Interfaces | 2005
Melinda T. Gervasio; Michael D. Moffitt; Martha E. Pollack; Joseph M. Taylor; Tomás E. Uribe
We present PLIANT, a learning system that supports adaptive assistance in an open calendaring system. PLIANT learns user preferences from the feedback that naturally occurs during interactive scheduling. It contributes a novel application of active learning in a domain where the choice of candidate schedules to present to the user must balance usefulness to the learning module with immediate benefit to the user. Our experimental results provide evidence of PLIANT's ability to learn user preferences under various conditions and reveal the tradeoffs made by the different active learning selection strategies.
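The core tension described above, choosing candidate schedules that are both informative to the learner and useful to the user, can be illustrated with a minimal sketch. The function name, scoring functions, and tradeoff weight below are invented for illustration and do not reflect PLIANT's actual implementation.

```python
# Hypothetical sketch of an active-learning candidate selection step:
# each candidate schedule is scored by a weighted mix of its predicted
# value to the user (exploitation) and its informativeness to the
# learner (exploration). All names and weights are illustrative.

def select_candidates(candidates, user_value, informativeness, alpha=0.5, k=3):
    """Return the top-k schedules balancing user benefit and learning value.

    candidates      -- list of schedule objects (any representation)
    user_value      -- fn: schedule -> float in [0, 1], predicted user utility
    informativeness -- fn: schedule -> float in [0, 1], value to the learner
    alpha           -- tradeoff weight: 1.0 means pure exploitation
    """
    scored = sorted(
        candidates,
        key=lambda s: alpha * user_value(s) + (1 - alpha) * informativeness(s),
        reverse=True,
    )
    return scored[:k]
```

Sweeping `alpha` from 1.0 toward 0.0 shifts the presented schedules from "best for the user right now" toward "most useful for refining the preference model", which is the tradeoff the paper's selection strategies explore.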
ACM Transactions on Intelligent Systems and Technology | 2011
Pauline M. Berry; Melinda T. Gervasio; Bart Peintner; Neil Yorke-Smith
In a world of electronic calendars, the prospect of intelligent, personalized time management assistance seems a plausible and desirable application of AI. PTIME (Personalized Time Management) is a learning cognitive assistant agent that helps users handle email meeting requests, reserve venues, and schedule events. PTIME is designed to unobtrusively learn scheduling preferences, adapting to its user over time. The agent allows its user to flexibly express requirements for new meetings, as they would to an assistant. It interfaces with commercial enterprise calendaring platforms, and it operates seamlessly with users who do not have PTIME. This article overviews the system design and describes the models and technical advances required to satisfy the competing needs of preference modeling and elicitation, constraint reasoning, and machine learning. We further report on a multifaceted evaluation of the perceived usefulness of the system.
Adaptive Agents and Multi-Agent Systems | 2006
Pauline M. Berry; Bart Peintner; Ken Conley; Melinda T. Gervasio; Tomás E. Uribe; Neil Yorke-Smith
We report on our ongoing practical experience in designing, implementing, and deploying PTIME, a personalized agent for time management and meeting scheduling in an open, multi-agent environment. In developing PTIME as part of a larger assistive agent called CALO, we have faced numerous challenges, including usability, multi-agent coordination, scalable constraint reasoning, robust execution, and unobtrusive learning. Our research advances basic solutions to the fundamental problems; however, integrating PTIME into a deployed system has raised other important issues for the successful adoption of new technology. As a personal assistant, PTIME must integrate easily into a user's real environment, support her normal workflow, respect her authority and privacy, provide natural user interfaces, and handle the issues that arise with deploying such a system in an open environment.
Intelligent User Interfaces | 2003
Jungsoon P. Yoo; Melinda T. Gervasio; Pat Langley
The Stock Tracker is an adaptive recommendation system for trading stocks that automatically acquires content-based models of user preferences to tailor its buy and sell advice. The system incorporates an efficient algorithm that exploits the fixed structure of user models and relies on unobtrusive data-gathering techniques. In this paper, we describe our approach to personalized recommendation and its implementation in this domain. We also discuss experiments that evaluate the system's behavior on both human subjects and synthetic users. The results suggest that the Stock Tracker can rapidly adapt its advice to different types of users.
Intelligent User Interfaces | 2009
Melinda T. Gervasio; Janet Murdock
Recent years have seen a resurgence of interest in programming by demonstration. As end users have become increasingly sophisticated, computer and artificial intelligence technology has also matured, making it feasible for end users to teach long, complex procedures. This paper addresses the problem of learning from demonstrations involving unobservable (e.g., mental) actions. We explore the use of knowledge base inference to complete missing dataflow and investigate the approach in the context of the CALO cognitive personal desktop assistant. We experiment with the Pathfinder utility, which efficiently finds all the relationships between any two objects in the CALO knowledge base. Pathfinder often returns too many paths to present to the user, and its default shortest-path heuristic sometimes fails to identify the correct path. We develop a set of filtering techniques for narrowing down the results returned by Pathfinder and present experimental results showing that these techniques effectively reduce the alternative paths to a small, meaningful set suitable for presentation to a user.
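The two-stage idea described above, enumerating the relationship paths between two objects and then filtering them down to a presentable set, can be sketched as follows. The graph, relation names, and filtering criteria are invented for illustration; the actual CALO knowledge base and Pathfinder's filters are far richer.

```python
from collections import deque

# Illustrative sketch: enumerate simple paths between two objects in a
# small relation graph, then prune to near-shortest paths whose
# relations all come from a preferred set. Names are hypothetical.

def all_paths(graph, src, dst, max_len=4):
    """Enumerate simple paths src -> dst as lists of (relation, node) hops."""
    paths = []
    queue = deque([(src, [])])
    while queue:
        node, path = queue.popleft()
        if node == dst and path:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for rel, nxt in graph.get(node, []):
            # Skip nodes already on this path to keep paths simple.
            if nxt != src and all(nxt != n for _, n in path):
                queue.append((nxt, path + [(rel, nxt)]))
    return paths

def filter_paths(paths, preferred_relations, slack=1):
    """Keep paths within `slack` hops of the shortest whose relations are all preferred."""
    if not paths:
        return []
    shortest = min(len(p) for p in paths)
    return [p for p in paths
            if len(p) <= shortest + slack
            and all(rel in preferred_relations for rel, _ in p)]
```

For example, if an email relates to a person both through its sender's manager and through a document's author, a relation-based filter like the one above can prefer the organizational path over the incidental one.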
Intelligent User Interfaces | 1998
Wayne Iba; Melinda T. Gervasio
The domain of crisis planning and scheduling taxes human response managers due to high levels of urgency and uncertainty. Such applications require assistant technologies (in contrast to automation technologies) and pose special challenges for interface design. We present INCA, the INteractive Crisis Assistant, which helps users develop effective crisis response plans and schedules in a timely manner. INCA also adapts to individual users by anticipating their preferred responses to a given crisis and their intended repairs to a candidate response. We evaluate our system in HAZMAT, a synthetic domain involving hazardous material incidents. The results show that INCA provides effective support for the timely generation of effective responses and tailors itself to individual users.
Intelligent User Interfaces | 2011
Melinda T. Gervasio; Eric Yeh; Karen L. Myers
Intelligent systems require substantial bodies of problem-solving knowledge. Machine learning techniques hold much appeal for acquiring such knowledge but typically require extensive amounts of user-supplied training data. Alternatively, informed question asking can supplement machine learning by directly eliciting critical knowledge from a user. Question asking can reduce the amount of training data required, and hence the burden on the user; furthermore, focused question asking holds significant promise for faster and more accurate acquisition of knowledge. In previous work, we developed static strategies for question asking that provide background knowledge for a base learner, enabling the learner to make useful generalizations even with few training examples. Here, we extend that work with a learning approach for automatically acquiring question-asking strategies that better accommodate the interdependent nature of questions. We present experiments validating the approach and showing its usefulness for acquiring efficient, context-dependent question-asking strategies.
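The intuition behind informed question asking, that a well-chosen question can eliminate many candidate generalizations at once and so reduce the training data needed, can be illustrated with a simple greedy sketch. The hypothesis pool, the predicate-style questions, and the selection criterion below are all invented for illustration; the paper's contribution is learning context-dependent strategies, which this sketch does not attempt.

```python
# Hedged sketch of one question-selection step: from a pool of candidate
# hypotheses, greedily pick the yes/no question whose answer is expected
# to leave the fewest hypotheses standing. Names are hypothetical.

def expected_remaining(hypotheses, question):
    """Expected number of hypotheses surviving the answer to `question`.

    If a fraction p of hypotheses answer "yes", the expected remaining
    pool size is p*|yes| + (1-p)*|no|.
    """
    yes = [h for h in hypotheses if question(h)]
    no = [h for h in hypotheses if not question(h)]
    n = len(hypotheses)
    return (len(yes) ** 2 + len(no) ** 2) / n

def best_question(hypotheses, questions):
    """Pick the (name, predicate) question minimizing expected remaining hypotheses."""
    return min(questions, key=lambda q: expected_remaining(hypotheses, q[1]))
```

A question that splits the pool evenly is preferred over one that is lopsided, which matches the intuition that focused questions extract more knowledge per user interaction.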
Knowledge and Information Systems | 2017
Pauline M. Berry; Thierry Donneau-Golencer; Khang Duong; Melinda T. Gervasio; Bart Peintner; Neil Yorke-Smith
This article examines experiences in evaluating a user-adaptive personal assistant agent designed to assist a busy knowledge worker in time management. We examine the managerial and technical challenges of designing adequate evaluation and the tension of collecting adequate data without a fully functional, deployed system. The CALO project was a seminal multi-institution effort to develop a personalized cognitive assistant. It included a significant attempt to rigorously quantify learning capability, which this article discusses for the first time, and ultimately the project led to multiple spin-outs including Siri. Retrospection on negative and positive experiences over the 6 years of the project underscores best practice in evaluating user-adaptive systems. Lessons for knowledge system evaluation include: the interests of multiple stakeholders, early consideration of evaluation and deployment, layered evaluation at system and component levels, characteristics of technology and domains that determine the appropriateness of controlled evaluations, implications of ‘in-the-wild’ versus variations of ‘in-the-lab’ evaluation, and the effect of technology-enabled functionality and its impact upon existing tools and work practices. In the conclusion, we discuss—through the lessons illustrated from this case study of intelligent knowledge system evaluation—how development and infusion of innovative technology must be supported by adequate evaluation of its efficacy.
International Conference on Advanced Learning Technologies | 2016
Karen L. Myers; Melinda T. Gervasio
A major impediment to the widespread deployment of intelligent training systems is the high cost of developing the content that drives their operation. Techniques grounded in end-user programming have shown great promise for reducing the burden of content creation. With these approaches, a domain expert demonstrates a solution to a task, which is then generalized to a broader model. This paper reports on a concept validation study that provides an empirical basis for the design of solution authoring frameworks based on end-user programming techniques. The study shows that non-expert users are comfortable with the approach and are capable of applying it to generate quality solution models. It also identifies constructs that, while important for accurate solution characterization, can lead to confusion and so warrant special care in tool design. Based on these results, we make recommendations for the design of solution-authoring tools in support of automated assessment for tutoring systems.