
Publications


Featured research published by Martha E. Pollack.


Computational Intelligence | 1988

Plans and Resource-Bounded Practical Reasoning

Michael E. Bratman; David J. Israel; Martha E. Pollack

An architecture for a rational agent must allow for means-end reasoning, for the weighing of competing alternatives, and for interactions between these two forms of reasoning. Such an architecture must also address the problem of resource boundedness. We sketch a solution to the first problem that points the way to a solution of the second. In particular, we present a high-level specification of the practical-reasoning component of an architecture for a resource-bounded rational agent. In this architecture, a major role of the agent's plans is to constrain the amount of further practical reasoning she must perform.
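
To make the filtering role of plans concrete, here is a minimal sketch of how adopted intentions might constrain which new options an agent even weighs. The Option, is_compatible, and deliberate names and the resource-based compatibility test are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch of the "plans constrain deliberation" idea: options that
# conflict with existing intentions are filtered out before deliberation.
# These names and the resource-based test are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    value: float       # estimated benefit of adopting this option
    resources: set     # resources (e.g., time slots) the option would consume

def is_compatible(option, intentions):
    """An option survives the filter only if it does not contend for
    resources already committed to existing intentions."""
    committed = set().union(*(i.resources for i in intentions)) if intentions else set()
    return not (option.resources & committed)

def deliberate(options, intentions):
    """Weigh only the options that pass the filter, so the cost of further
    practical reasoning stays bounded by the agent's prior commitments."""
    survivors = [o for o in options if is_compatible(o, intentions)]
    return max(survivors, key=lambda o: o.value, default=None)

# An existing intention that occupies the afternoon filters out a more
# valuable option needing the same time slot.
intentions = [Option("write report", 5.0, {"afternoon"})]
options = [Option("attend seminar", 7.0, {"afternoon"}),
           Option("review paper", 3.0, {"evening"})]
print(deliberate(options, intentions).name)   # -> review paper
```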


Robotics and Autonomous Systems | 2003

Towards robotic assistants in nursing homes: Challenges and results

Joelle Pineau; Michael Montemerlo; Martha E. Pollack; Nicholas Roy; Sebastian Thrun

This paper describes a mobile robotic assistant developed to assist elderly individuals with mild cognitive and physical impairments, as well as to support nurses in their daily activities. We present three software modules needed to ensure successful human–robot interaction: an automated reminder system; a people tracking and detection system; and a high-level robot controller that plans under uncertainty by incorporating knowledge from low-level modules and selecting appropriate courses of action. During experiments conducted in an assisted living facility, the robot successfully demonstrated that it could autonomously provide reminders and guidance for elderly residents.
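
As a rough illustration of the kind of bookkeeping a controller that plans under uncertainty maintains, the sketch below performs a generic discrete belief-state update; the states, models, and numbers are invented and are not taken from the robot described here.

```python
# A generic discrete belief-state update (Bayes filter). The states, models,
# and numbers below are invented for illustration.

def update_belief(belief, transition, likelihood):
    """belief: {state: P(state)}, transition: {state: {next_state: P}},
    likelihood: {state: P(observation | state)}."""
    # Predict: push the current belief through the transition model.
    predicted = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s_next, p_t in transition[s].items():
            predicted[s_next] += p * p_t
    # Correct: reweight by how well each state explains the observation.
    unnormalized = {s: predicted[s] * likelihood[s] for s in predicted}
    z = sum(unnormalized.values())
    return {s: v / z for s, v in unnormalized.items()}

# Example: is the person near the robot or elsewhere?
belief = {"nearby": 0.5, "elsewhere": 0.5}
transition = {"nearby": {"nearby": 0.8, "elsewhere": 0.2},
              "elsewhere": {"nearby": 0.3, "elsewhere": 0.7}}
likelihood = {"nearby": 0.9, "elsewhere": 0.1}   # the sensor suggests a person nearby
print(update_belief(belief, transition, likelihood))
```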


Intelligent Agents | 1998

The Belief-Desire-Intention Model of Agency

Michael P. Georgeff; Barney Pell; Martha E. Pollack; Milind Tambe; Michael Wooldridge

Within the ATAL community, the belief-desire-intention (BDI) model has come to be possibly the best known and best studied model of practical reasoning agents. There are several reasons for its success, but perhaps the most compelling is that the BDI model combines a respectable philosophical model of human practical reasoning (originally developed by Michael Bratman [1]), a number of implementations (in the IRMA architecture [2] and the various PRS-like systems currently available [7]), several successful applications (including the now-famous fault diagnosis system for the space shuttle, as well as factory process control systems and business process management [8]), and finally, an elegant abstract logical semantics, which has been taken up and elaborated upon widely within the agent research community [14, 16].


Robotics and Autonomous Systems | 2003

Autominder: an intelligent cognitive orthotic system for people with memory impairment

Martha E. Pollack; Laura E. Brown; Dirk Colbry; Colleen E. McCarthy; Cheryl Orosz; Bart Peintner; Sailesh Ramakrishnan; Ioannis Tsamardinos

The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project).
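
As a toy illustration of the reminder decision, the sketch below checks modeled activities with time windows against what has already been observed as done; Autominder's actual temporal reasoning is far richer, and the Activity and reminders_due names and thresholds are invented.

```python
# A much-simplified sketch of the reminder decision: remind about modeled
# activities that are unperformed, inside their time window, and close to
# their deadline. Names and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    earliest: int      # earliest acceptable start, in minutes since midnight
    latest: int        # latest acceptable start
    done: bool = False

def reminders_due(activities, now, lead_time=15):
    """Return the activities worth reminding about at time `now`."""
    return [a.name for a in activities
            if not a.done and a.earliest <= now and a.latest - now <= lead_time]

plan = [Activity("take medication", earliest=8 * 60, latest=9 * 60),
        Activity("physical therapy", earliest=10 * 60, latest=11 * 60, done=True)]
print(reminders_due(plan, now=8 * 60 + 50))   # -> ['take medication']
```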


International Conference on Software Engineering | 2001

Hierarchical GUI test case generation using automated planning

Atif M. Memon; Martha E. Pollack; Mary Lou Soffa

The widespread use of GUIs for interacting with software is leading to the construction of more and more complex GUIs. With the growing complexity come challenges in testing the correctness of a GUI and its underlying software. We present a new technique to automatically generate test cases for GUIs that exploits planning, a well-developed and widely used technique in artificial intelligence. Given a set of operators, an initial state, and a goal state, a planner produces a sequence of the operators that will transform the initial state into the goal state. Our test case generation technique enables efficient application of planning by first creating a hierarchical model of a GUI based on its structure. The GUI model consists of hierarchical planning operators representing the possible events in the GUI. The test designer defines the preconditions and effects of the hierarchical operators, which are input into a plan-generation system. The test designer also creates scenarios that represent typical initial and goal states for a GUI user. The planner then generates plans representing sequences of GUI interactions that a user might employ to reach the goal state from the initial state. We implemented our test case generation system, called Planning Assisted Tester for Graphical User Interface Systems (PATHS), and experimentally evaluated its practicality and effectiveness. We describe a prototype implementation of PATHS and report on the results of controlled experiments to generate test cases for Microsoft's WordPad.
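
The planning formulation can be illustrated with a toy example: operators with preconditions and effects, plus a search from the initial state to the goal state, yield an event sequence that serves as a test case. The operators and the breadth-first plan function below are invented for illustration; PATHS itself uses a hierarchical planner.

```python
# A toy version of the planning formulation: operators with preconditions and
# effects, an initial state, a goal state, and a breadth-first search that
# returns an event sequence reaching the goal. The operators are invented.

from collections import deque

OPERATORS = {
    "open_file_menu":  ({"main_window"},       {"file_menu_open"}),
    "click_open":      ({"file_menu_open"},    {"open_dialog"}),
    "select_document": ({"open_dialog"},       {"document_selected"}),
    "click_ok":        ({"document_selected"}, {"document_loaded"}),
}

def plan(initial, goal):
    """Search over states (sets of propositions) for an event sequence."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, events = frontier.popleft()
        if goal <= state:
            return events
        for name, (preconditions, effects) in OPERATORS.items():
            if preconditions <= state:
                successor = frozenset(state | effects)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, events + [name]))
    return None

# The generated "test case" is the event sequence from the initial GUI state
# to the goal of having a document loaded.
print(plan({"main_window"}, {"document_loaded"}))
# -> ['open_file_menu', 'click_open', 'select_document', 'click_ok']
```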


Foundations of Software Engineering | 2001

Coverage criteria for GUI testing

Atif M. Memon; Mary Lou Soffa; Martha E. Pollack

A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to them. This paper's focus is on coverage criteria for GUIs: important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component, and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report in guiding further testing and an important correlation between event-based coverage of a GUI and statement coverage of its underlying code.
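
To give a flavor of event-based coverage, the sketch below represents one component's event-flow graph as a set of event pairs and measures which pairs a test suite exercises; the events and the length2_coverage helper are illustrative, and the paper's intra- and inter-component criteria are more elaborate.

```python
# One component's event-flow graph as a set of event pairs, plus a simple
# length-two event-sequence coverage measure. Events and helper are invented.

# An edge (a, b) means event b can be performed immediately after event a.
EVENT_FLOW = {("open_menu", "click_cut"), ("open_menu", "click_copy"),
              ("click_cut", "click_paste"), ("click_copy", "click_paste")}

def length2_coverage(test_suite):
    """Fraction of event pairs in the event-flow graph that are exercised by
    consecutive event pairs appearing in the test cases."""
    executed = {(t[i], t[i + 1]) for t in test_suite for i in range(len(t) - 1)}
    covered = executed & EVENT_FLOW
    return len(covered) / len(EVENT_FLOW), EVENT_FLOW - covered

tests = [["open_menu", "click_copy", "click_paste"]]
ratio, missing = length2_coverage(tests)
print(f"{ratio:.0%} of event pairs covered; still untested: {missing}")
# -> 50% covered; the cut-related pairs remain untested
```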


Meeting of the Association for Computational Linguistics | 1986

A Model of Plan Inference that Distinguishes between the Beliefs of Actors and Observers

Martha E. Pollack

Existing models of plan inference (PI) in conversation have assumed that the agent whose plan is being inferred (the actor) and the agent drawing the inference (the observer) have identical beliefs about actions in the domain. I argue that this assumption often results in failure of both the PI process and the communicative process that PI is meant to support. In particular, it precludes the principled generation of appropriate responses to queries that arise from invalid plans. I describe a model of PI that abandons this assumption. It rests on an analysis of plans as mental phenomena. Judgements that a plan is invalid are associated with particular discrepancies between the beliefs that the observer ascribes to the actor when the former believes that the latter has some plan, and the beliefs that the observer herself holds. I show that the content of an appropriate response to a query is affected by the types of any such discrepancies of belief judged to be present in the plan inferred to underlie that query. The PI model described here has been implemented in SPIRIT, a small demonstration system that answers questions about the domain of computer mail.
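
The core discrepancy check can be sketched very simply: compare the beliefs the observer ascribes to the actor with the observer's own beliefs and return the points of disagreement, which then shape the response. The propositions and the plan_discrepancies helper below are invented and are not SPIRIT's representation.

```python
# A schematic version of the discrepancy check: compare the beliefs the
# observer ascribes to the actor (in inferring a plan behind the query) with
# the observer's own beliefs. Propositions and helper are invented.

def plan_discrepancies(ascribed_to_actor, observer_beliefs):
    """Return beliefs attributed to the actor that the observer does not
    hold; these drive the content of the response."""
    return {proposition for proposition, holds in ascribed_to_actor.items()
            if holds and not observer_beliefs.get(proposition, False)}

# The actor's query suggests she believes the "rm" command deletes mail
# messages; the observer believes otherwise and can address exactly that
# point in its reply instead of answering only the literal question.
ascribed = {"rm deletes mail messages": True, "actor wants to delete mail": True}
observer = {"rm deletes mail messages": False, "actor wants to delete mail": True}
print(plan_discrepancies(ascribed, observer))   # -> {'rm deletes mail messages'}
```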


AI Magazine | 1993

Benchmarks, test beds, controlled experimentation, and the design of agent architectures

Steve Hanks; Martha E. Pollack; Paul R. Cohen

The methodological underpinnings of AI are slowly changing. Benchmarks, test beds, and controlled experimentation are becoming more common. Although we are optimistic that this change can solidify the science of AI, we also recognize a set of difficult issues concerning the appropriate use of this methodology. We discuss these issues as they relate to research on agent design. We survey existing test beds for agents and argue for appropriate caution in their use. We end with a debate on the proper role of experimental methodology in the design and validation of planning agents.


Foundations of Software Engineering | 2000

Automated test oracles for GUIs

Atif M. Memon; Martha E. Pollack; Mary Lou Soffa

Graphical User Interfaces (GUIs) are critical components of today's software. Because GUIs have characteristics different from those of traditional software, conventional testing techniques do not apply to GUI software. In previous work, we presented an approach to generate GUI test cases, which take the form of sequences of actions. In this paper we develop a test oracle technique to determine whether a GUI behaves as expected for a given test case. Our oracle uses a formal model of a GUI, expressed as sets of objects, object properties, and actions. Given the formal model and a test case, our oracle automatically derives the expected state for every action in the test case. We represent the actual state of an executing GUI in terms of objects and their properties derived from the GUI's execution. Using the actual state acquired from an execution monitor, our oracle automatically compares the expected and actual states after each action to verify the correctness of the GUI for the test case. We implemented the oracle as a component in our GUI testing system, called Planning Assisted Tester for grapHical user interface Systems (PATHS), which is based on AI planning. We experimentally evaluated the practicality and effectiveness of our oracle technique and report on the results of experiments to test and verify the behavior of our version of Microsoft WordPad's GUI.
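
A simplified sketch of one oracle step follows: derive the expected state by applying an action's modeled effects to the previous state, then report any object properties whose actual values differ. The objects, properties, and the apply_effects and verify helpers are invented for illustration.

```python
# A simplified oracle step: derive the expected GUI state by applying an
# action's modeled effects, then report mismatching object properties.
# The objects, properties, and action model are invented.

def apply_effects(state, effects):
    """Expected state = previous state updated with the action's effects.
    A state maps each GUI object to a dict of property values."""
    expected = {obj: dict(props) for obj, props in state.items()}
    for obj, props in effects.items():
        expected.setdefault(obj, {}).update(props)
    return expected

def verify(expected, actual):
    """List every (object, property, expected value, actual value) mismatch."""
    return [(obj, prop, value, actual.get(obj, {}).get(prop))
            for obj, props in expected.items()
            for prop, value in props.items()
            if actual.get(obj, {}).get(prop) != value]

previous = {"document": {"text": "hello", "modified": False}}
effects  = {"document": {"modified": True}}                    # effect of a "type" action
actual   = {"document": {"text": "hello", "modified": False}}  # the GUI failed to update

print(verify(apply_effects(previous, effects), actual))
# -> [('document', 'modified', True, False)]
```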


AI Magazine | 2007

An intelligent personal assistant for task and time management

Karen L. Myers; Pauline M. Berry; Jim Blythe; Ken Conley; Melinda T. Gervasio; Deborah L. McGuinness; David N. Morley; Avi Pfeffer; Martha E. Pollack; Milind Tambe

We describe an intelligent personal assistant that has been developed to aid a busy knowledge worker in managing time commitments and performing tasks. The design of the system was motivated by the complementary objectives of (1) relieving the user of routine tasks, thus allowing her to focus on tasks that critically require human problem-solving skills, and (2) intervening in situations where cognitive overload leads to oversights or mistakes by the user. The system draws on a diverse set of AI technologies that are linked within a Belief-Desire-Intention (BDI) agent system. Although the system provides a number of automated functions, the overall framework is highly user-centric in its support for human needs, responsiveness to human inputs, and adaptivity to user working style and preferences.

Collaboration


Dive into Martha E. Pollack's collaborations.

Top Co-Authors

Jerry Morgan (University College Dublin)