
Publications


Featured research published by Michael T. Cox.


Knowledge Engineering Review | 2005

Retrieval, reuse, revision and retention in case-based reasoning

Ramon López de Mántaras; David McSherry; Derek G. Bridge; David B. Leake; Barry Smyth; Susan Craw; Boi Faltings; Mary Lou Maher; Michael T. Cox; Kenneth D. Forbus; Mark T. Keane; Agnar Aamodt; Ian D. Watson

Case-based reasoning (CBR) is an approach to problem solving that emphasizes the role of prior experience during future problem solving (i.e., new problems are solved by reusing and if necessary adapting the solutions to similar problems that were solved in the past). It has enjoyed considerable success in a wide variety of problem solving tasks and domains. Following a brief overview of the traditional problem-solving cycle in CBR, we examine the cognitive science foundations of CBR and its relationship to analogical reasoning. We then review a representative selection of CBR research in the past few decades on aspects of retrieval, reuse, revision and retention.
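The traditional 4R cycle summarized above can be sketched in a few lines. The case base, similarity measure, and adaptation/repair rules below are toy assumptions for illustration, not from the paper:

```python
def retrieve(case_base, problem):
    """Retrieve: return the stored case whose problem is most similar (closest value)."""
    return min(case_base, key=lambda case: abs(case["problem"] - problem))

def reuse(case, problem):
    """Reuse: adapt the retrieved solution to the new problem (shift by the difference)."""
    return case["solution"] + (problem - case["problem"])

def revise(solution, evaluate):
    """Revise: repair the proposed solution if evaluation flags a fault."""
    return solution if evaluate(solution) else solution + 1

def retain(case_base, problem, solution):
    """Retain: store the validated problem-solution pair for future reuse."""
    case_base.append({"problem": problem, "solution": solution})

# Toy domain: the correct solution is double the problem value.
cases = [{"problem": 2, "solution": 4}, {"problem": 10, "solution": 20}]
p = 3
s = reuse(retrieve(cases, p), p)      # retrieve + reuse: 4 + (3 - 2) = 5
s = revise(s, lambda x: x == 2 * p)   # evaluation rejects 5; repair yields 6
retain(cases, p, s)                   # the library grows with each solved problem
```

The point of the sketch is the division of labor: retrieval and reuse produce a candidate cheaply from experience, revision validates it against the world, and retention closes the loop so future retrieval improves.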


Artificial Intelligence | 2005

Metacognition in computation: a selected research review

Michael T. Cox

Various disciplines have examined the many phenomena of metacognition and have produced numerous results, both positive and negative. I discuss some of these aspects of cognition about cognition and the results concerning them from the point of view of the psychologist and the computer scientist, and I attempt to place them in the context of computational theories. I examine metacognition with respect to both problem solving (e.g., planning) and to comprehension (e.g., story understanding) processes of cognition.


Artificial Intelligence | 1999

Introspective multistrategy learning: on the construction of learning strategies

Michael T. Cox; Ashwin Ram

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem which enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on analysis of its successes and failures at the performance task. The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms. We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding.
Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.
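The three-step method (blame assignment, deciding what to learn, learning-strategy construction) can be illustrated with a deliberately tiny sketch. The failure taxonomy, goal names, and operator library below are invented placeholders, far simpler than Meta-AQUA's actual second-order formalism:

```python
# Step 1: blame assignment maps an observed reasoning failure to its cause.
FAILURE_CAUSES = {"wrong-prediction": "incorrect-domain-rule",
                  "impasse": "missing-knowledge"}

# Step 2: each diagnosed cause posts an explicitly represented learning goal.
LEARNING_GOALS = {"incorrect-domain-rule": "refine-rule",
                  "missing-knowledge": "acquire-concept"}

# Step 3: learning algorithms act as planning operators, each with an effect
# (the learning goal it achieves); a planner assembles them into a strategy.
OPERATORS = [
    {"name": "explanation-based-generalization", "achieves": "refine-rule"},
    {"name": "case-acquisition", "achieves": "acquire-concept"},
]

def construct_strategy(failures):
    """Assemble a sequence of learning algorithms achieving every posted goal."""
    goals = [LEARNING_GOALS[FAILURE_CAUSES[f]] for f in failures]
    return [next(op for op in OPERATORS if op["achieves"] == goal)["name"]
            for goal in goals]
```

A call such as `construct_strategy(["wrong-prediction", "impasse"])` yields an ordered sequence of learning algorithms, which is the explicit sequencing the evaluation above shows to matter: an arbitrary ordering can interact negatively and perform worse than no learning at all.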


AI Magazine | 2007

Perpetual Self-Aware Cognitive Agents

Michael T. Cox

To construct a perpetual self-aware cognitive agent that can continuously operate with independence, an introspective machine must be produced. To assemble such an agent, it is necessary to perform a full integration of cognition (planning, understanding, and learning) and metacognition (control and monitoring of cognition) with intelligent behaviors. The failure to do this completely is why similar, more limited efforts have not succeeded in the past. I outline some key computational requirements of metacognition by describing a multistrategy learning system called Meta-AQUA and then discuss an integration of Meta-AQUA with a nonlinear state-space planning agent. I show how the resultant system, INTRO, can independently generate its own goals, and I relate this work to the general issue of self-awareness by machine.


Knowledge Engineering Review | 2005

Case-based planning

Michael T. Cox; Héctor Muñoz-Avila; Ralph Bergmann

We briefly examine case-based planning starting with the seminal work of Hammond. Derivational analogy represents an important shift of technical emphasis that helped mature the techniques. The choice of abstraction level is equally important. We conclude by discussing theoretical underpinnings and by providing some pointers to current directions.


Knowledge Engineering Review | 2005

Case-based reasoning-inspired approaches to education

Janet L. Kolodner; Michael T. Cox; Pedro A. González-Calero

This commentary briefly reviews work on the application of case-based reasoning (CBR) to the design and construction of educational approaches and computer-based teaching systems. The CBR cognitive model is at the core of constructivist learning approaches such as Goal-based Scenarios and Learning by Design. Case libraries can play roles as intelligent resources during learning and as frameworks for articulating one's understanding. More recently, CBR techniques have been applied to the design and construction of simulation-based learning systems and serious games. The main ideas of CBR are explained and pointers to relevant references are provided, both for finished work and ongoing research.


AI Magazine | 2007

Seven Aspects of Mixed-Initiative Reasoning: An Introduction to this Special Issue on Mixed-Initiative Assistants

Gheorghe Tecuci; Michael T. Cox

Mixed-initiative assistants are agents that interact seamlessly with humans to extend their problem-solving capabilities or provide new capabilities. Developing such agents requires the synergistic integration of many areas of AI, including knowledge representation, problem solving and planning, knowledge acquisition and learning, multiagent systems, discourse theory, and human-computer interaction. This paper introduces seven aspects of mixed-initiative reasoning (task, control, awareness, communication, personalization, architecture, and evaluation) and discusses them in the context of several state-of-the-art mixed-initiative assistants. The goal is to provide a framework for understanding and comparing existing mixed-initiative assistants and for developing general design principles and methods.


International Conference on Case-Based Reasoning | 1997

Supporting Combined Human and Machine Planning: An Interface for Planning by Analogical Reasoning

Michael T. Cox; Manuela M. Veloso

Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning, in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found in creating a mixed-initiative planning system fall into three categories: human and machine planning paradigms differ; visualization of the plan and planning process is a complex but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both the plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information.


Control and Intelligent Systems | 2006

Case-based plan recognition with novel input

Michael T. Cox; Boris Kerkez

Our research investigates a case-based approach to plan recognition using incomplete, incrementally learned plan libraries. To learn plan libraries, one must be able to process novel input. Retrieval based on similarities among concrete planning situations rather than among planning actions enables recognition despite the occurrence of newly observed planning actions and states. In addition, we explore the benefits of predictions using a measure that we call abstract similarity. Abstract similarity is used when a concrete state maps to no known abstract state: instead, a search is performed for nearby abstract states using a nearest-neighbour technique. Such a retrieval scheme enables accurate prediction in light of extremely novel observed situations. The properties of retrieval in abstract state-spaces are investigated in three standard planning domains. We first determine the optimal radii that define a spherical sub-hyperspace limiting the search. Experimental results then show that abstract similarity yields significant improvements in the recognition process.
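The abstract-similarity retrieval described above can be sketched as a nearest-neighbour search constrained to a sphere of fixed radius around the query. Representing abstract states as numeric feature vectors, and the particular radius, are illustrative assumptions:

```python
import math

def distance(a, b):
    """Euclidean distance between two abstract-state feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_abstract(known_states, query, radius):
    """Return the query itself if it is a known abstract state; otherwise the
    nearest known state, but only if it lies inside the spherical
    sub-hyperspace of the given radius (else no prediction is made)."""
    if query in known_states:
        return query
    nearest = min(known_states, key=lambda s: distance(s, query))
    return nearest if distance(nearest, query) <= radius else None
```

For example, with known states `[(0, 0), (3, 4)]` and radius 2.0, a novel query `(1, 1)` retrieves `(0, 0)`, while a query far from every known state retrieves nothing, which is exactly the trade-off the radius experiments in the paper explore.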


International Conference on Case-Based Reasoning | 2001

Incremental Case-Based Plan Recognition Using State Indices

Boris Kerkez; Michael T. Cox

We describe a case-based approach to the keyhole plan-recognition task where the observed agent is a state-space planner whose world states can be monitored. The case-based approach provides a means for automatically constructing the plan library from observations, minimizing the number of extraneous plans in the library. We show that knowledge about the states of the observed agent's world can be effectively used to recognize the agent's plans and goals, given no direct knowledge of the planner's internal decision cycle. Cases (plans) containing state knowledge enable the recognizer to cope with novel situations for which no plans exist in the plan library, and further assist in effective discrimination among competing plan hypotheses.
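A minimal sketch of state-indexed plan retrieval in this spirit: observed plans are indexed by the world states they pass through, so a new observation retrieves candidate hypotheses without any model of the planner's decision cycle. The state and plan representations below are illustrative assumptions:

```python
from collections import defaultdict

class PlanLibrary:
    """Plan library built incrementally from observations, indexed by state."""

    def __init__(self):
        self.index = defaultdict(list)  # world state -> plans passing through it

    def observe(self, plan_id, state_sequence):
        """Add an observed plan, indexing it under every state it visits."""
        for state in state_sequence:
            self.index[state].append(plan_id)

    def hypotheses(self, observed_state):
        """Candidate plans consistent with the currently observed world state."""
        return self.index.get(observed_state, [])
```

Because indexing is by state rather than by action, a novel action sequence that passes through a familiar state still retrieves useful hypotheses, which is the recognizer's route to coping with situations absent from the library.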

Collaboration


Michael T. Cox's top co-authors:

Ashwin Ram, Georgia Institute of Technology
Boris Kerkez, Wright State University
Manuela M. Veloso, Carnegie Mellon University
Tim Oates, University of Massachusetts Amherst
Gifty Edwin, Wright State University