Michael W. Floyd
Carleton University
Publications
Featured research published by Michael W. Floyd.
international conference on tools with artificial intelligence | 2011
Michael W. Floyd; Babak Esfandiari
Most realistic environments are complex, partially observable and impose real-time constraints on agents operating within them. This paper describes a framework that allows agents to learn by observation in such environments. When learning by observation, agents observe an expert performing a task and learn to perform the same task based on those observations. Our framework aims to allow agents to learn in a variety of domains (physical or virtual) regardless of the behaviour or goals of the observed expert. To achieve this we ensure that there is a clear separation between the central reasoning system and any domain-specific information. We present case studies in the domains of obstacle avoidance, robotic arm control, simulated soccer and Tetris.
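A minimal illustrative sketch of the separation the abstract describes: a domain-independent case-based core that only sees feature vectors, with all domain-specific sensing and acting behind a small adapter interface. The names (`DomainAdapter`, `ObservationalAgent`) and the nearest-neighbour reuse step are assumptions for illustration, not the paper's exact framework.

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Case:
    features: List[float]   # domain-neutral encoding of an observed state
    action: str             # the expert's action in that state


class DomainAdapter:
    """Everything domain-specific (soccer, Tetris, robot arm) lives here."""

    def encode(self, raw_observation) -> List[float]:
        raise NotImplementedError

    def execute(self, action: str) -> None:
        raise NotImplementedError


class ObservationalAgent:
    """Domain-independent core: record expert cases, replay by nearest neighbour."""

    def __init__(self, adapter: DomainAdapter):
        self.adapter = adapter
        self.case_base: List[Case] = []

    def observe(self, raw_observation, expert_action: str) -> None:
        self.case_base.append(Case(self.adapter.encode(raw_observation), expert_action))

    def act(self, raw_observation) -> str:
        query = self.adapter.encode(raw_observation)
        best = min(self.case_base, key=lambda c: self._distance(c.features, query))
        self.adapter.execute(best.action)
        return best.action

    @staticmethod
    def _distance(a: Sequence[float], b: Sequence[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```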
international conference on case-based reasoning | 2009
Michael W. Floyd; Babak Esfandiari
When learning by observing an expert, cases can be automatically generated in an inexpensive manner. However, since this is a passive method of learning, the observer has no control over which problems are solved, and this can result in case bases that do not contain a representative distribution of the problem space. To overcome this we present a method to incorporate active learning with learning by observation. Problems that are not covered by the current case base are automatically detected, during runtime or by examining secondary case bases, and presented to an expert to be solved. However, we show that these problems cannot be presented to the expert individually but need to be part of a sequence of problems. Creating such a sequence of cases is non-trivial, and an approach to creating these sequences is described. Experimental results, in the domain of simulated soccer, show our approach to be useful not only for increasing the problem coverage of the case base but also for creating cases with rare solutions.
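A hedged sketch of the coverage check described above: a problem whose nearest stored case is too dissimilar is flagged and collected, in the order it occurred, into a sequence of problems for the expert. The similarity function, threshold, and queueing policy are illustrative assumptions rather than the paper's exact method.

```python
from typing import List


def nearest_similarity(problem: List[float], case_base: List[List[float]]) -> float:
    """Similarity in [0, 1] to the closest stored problem (1.0 = identical)."""
    def sim(a, b):
        dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + dist)
    return max((sim(problem, c) for c in case_base), default=0.0)


def build_expert_queue(observed_problems: List[List[float]],
                       case_base: List[List[float]],
                       coverage_threshold: float = 0.6) -> List[List[float]]:
    """Collect uncovered problems in the order they occurred, so the expert
    can solve them as a coherent sequence rather than in isolation."""
    return [p for p in observed_problems
            if nearest_similarity(p, case_base) < coverage_threshold]
```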
international conference on case-based reasoning | 2014
Swaroop S. Vattam; David W. Aha; Michael W. Floyd
We present SET-PR, a novel case-based plan recognition algorithm that is tolerant to missing and misclassified actions in its input action sequences. SET-PR uses a novel representation called action sequence graphs to represent stored plans in its plan library and a similarity metric that uses a combination of graph degree sequences and object similarity to retrieve relevant plans from its library. We evaluated SET-PR by measuring plan recognition convergence and precision with increasing levels of missing and misclassified actions in its input. In our experiments, SET-PR tolerated 20%-30% of input errors without compromising plan recognition performance.
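An illustrative sketch of the two ingredients the abstract names: an action sequence graph (action occurrences and their objects as nodes, with edges for temporal order and argument use) and a similarity score over graph degree sequences. The graph construction and scoring here are assumptions for illustration, not SET-PR's published metric.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple


def action_sequence_graph(actions: List[Tuple[str, List[str]]]) -> Dict[str, Set[str]]:
    """actions: [(action_name, [object, ...]), ...] -> undirected adjacency sets."""
    graph: Dict[str, Set[str]] = defaultdict(set)
    prev = None
    for i, (name, objects) in enumerate(actions):
        node = f"{name}#{i}"
        if prev is not None:
            graph[prev].add(node)          # temporal-order edge
            graph[node].add(prev)
        for obj in objects:                # argument edges
            graph[node].add(obj)
            graph[obj].add(node)
        prev = node
    return graph


def degree_sequence_similarity(g1: Dict[str, Set[str]], g2: Dict[str, Set[str]]) -> float:
    """Compare sorted degree sequences; 1.0 means identical sequences."""
    d1 = sorted((len(v) for v in g1.values()), reverse=True)
    d2 = sorted((len(v) for v in g2.values()), reverse=True)
    length = max(len(d1), len(d2)) or 1
    d1 += [0] * (length - len(d1))
    d2 += [0] * (length - len(d2))
    diff = sum(abs(a - b) for a, b in zip(d1, d2))
    return 1.0 / (1.0 + diff)
```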
international conference on case-based reasoning | 2014
Michael W. Floyd; Michael Drinkwater; David W. Aha
Robots can be important additions to human teams if they improve team performance by providing new skills or improving existing skills. However, to get the full benefits of a robot, the team must trust it and use it appropriately. We present an agent algorithm that allows a robot to estimate its trustworthiness and adapt its behavior in an attempt to increase trust. It uses case-based reasoning to store previous behavior adaptations and uses this information to perform future adaptations. We compare case-based behavior adaptation to behavior adaptation that does not learn, and show that it significantly reduces the number of behaviors that need to be evaluated before a trustworthy behavior is found. Our evaluation is in a simulated robotics environment and involves a movement scenario and a patrolling/threat detection scenario.
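A minimal sketch of the idea of storing previous behavior adaptations and reusing them: each case pairs the trust estimate that triggered an adaptation with the behavior parameters that were tried and whether they restored trust. Field names and the retrieval rule are assumptions, not the published algorithm.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class AdaptationCase:
    trust_estimate: float            # how untrusted the robot was when it adapted
    behavior: Dict[str, float]       # e.g. {"speed": 0.4, "caution": 0.9}
    succeeded: bool                  # did this behavior restore trust?


class CaseBasedAdapter:
    def __init__(self):
        self.cases: List[AdaptationCase] = []

    def suggest(self, trust_estimate: float) -> Optional[Dict[str, float]]:
        """Reuse the successful adaptation made under the most similar trust level."""
        successes = [c for c in self.cases if c.succeeded]
        if not successes:
            return None                       # fall back to trial-and-error search
        best = min(successes, key=lambda c: abs(c.trust_estimate - trust_estimate))
        return best.behavior

    def record(self, trust_estimate: float, behavior: Dict[str, float], succeeded: bool) -> None:
        self.cases.append(AdaptationCase(trust_estimate, behavior, succeeded))
```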
international conference on computational science and its applications | 2014
Michael W. Floyd; Michael Drinkwater; David W. Aha
Robots are added to human teams to increase the team’s skills or capabilities. To gain the acceptance of its human teammates, it may be important for the robot to behave in a manner that the teammates consider trustworthy. We present an approach that allows a robot’s behavior to be adapted so that it behaves in a trustworthy manner. The adaptation is guided by an inverse trust metric that the robot uses to estimate the trust a human teammate has in it. We evaluate our method in a simulated robotics domain and demonstrate how the agent can adapt to a teammate’s preferences.
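A hedged sketch of an inverse trust estimate: the robot scores its own recent performance from signals a teammate can observe (completed commands, interruptions, overrides) and adapts when that estimate trends downward. The specific signals, weights, and threshold are illustrative assumptions, not the paper's metric.

```python
from typing import List


def inverse_trust(completed: int, interrupted: int, overridden: int) -> float:
    """Rough trust estimate in [-1, 1]: positive if recent interactions went well."""
    total = completed + interrupted + overridden
    if total == 0:
        return 0.0
    return (completed - interrupted - overridden) / total


def should_adapt(trust_history: List[float], window: int = 5, floor: float = -0.2) -> bool:
    """Adapt when the recent average estimate drops below a tolerance floor."""
    recent = trust_history[-window:]
    return bool(recent) and sum(recent) / len(recent) < floor
```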
international conference on case-based reasoning | 2015
Michael W. Floyd; Michael Drinkwater; David W. Aha
It is important for robots to be trusted by their human teammates so that they are used to their full potential. This paper focuses on robots that can estimate their own trustworthiness based on their performance and adapt their behavior to engender trust. Ideally, a robot can receive feedback about its performance from teammates. However, that feedback can be sporadic or non-existent (e.g., if teammates are busy with their own duties), or come in a variety of forms (e.g., different teammates using different vocabularies). We describe a case-based algorithm that allows a robot to learn a model of feedback and use that model to adapt its behavior. We evaluate our system in a simulated robotics domain by showing that a robot can learn a model of operator feedback and use that model to improve behavior adaptation.
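A sketch of learning a feedback model case by case: each time the operator gives feedback, in whatever phrasing, it is stored with the behavior that was active and the interpreted meaning, so later feedback can be interpreted by analogy with past cases. The word-overlap scoring below is a placeholder assumption, not the paper's learned model.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FeedbackCase:
    phrase: str                  # raw operator feedback, e.g. "way too fast"
    behavior: Dict[str, float]   # behavior parameters in effect at the time
    valence: float               # interpreted meaning: +1 approval, -1 disapproval


class FeedbackModel:
    def __init__(self):
        self.cases: List[FeedbackCase] = []

    def learn(self, phrase: str, behavior: Dict[str, float], valence: float) -> None:
        self.cases.append(FeedbackCase(phrase.lower(), behavior, valence))

    def interpret(self, phrase: str) -> float:
        """Score new feedback by word overlap with previously interpreted cases."""
        words = set(phrase.lower().split())
        scores = []
        for case in self.cases:
            overlap = len(words & set(case.phrase.split()))
            if overlap:
                scores.append((overlap, case.valence))
        if not scores:
            return 0.0                       # unknown feedback: treat as neutral
        total = sum(o for o, _ in scores)
        return sum(o * v for o, v in scores) / total
```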
international conference on case-based reasoning | 2016
Michael W. Floyd; David W. Aha
An important consideration in human-robot teams is ensuring that the robot is trusted by its teammates. Without adequate trust, the robot may be underutilized or disused, potentially exposing human teammates to dangerous situations. We have previously investigated an agent that can assess its own trustworthiness and adapt its behavior accordingly. In this paper we extend our work by adding a transparency layer that allows the agent to explain why it adapted its behavior. The agent uses explanations based on explicit feedback received from an operator. This allows it to provide simple, concise, and understandable explanations. We evaluate our system on scenarios from a simulated robotics domain by demonstrating that the agent can provide explanations that closely align with an operator’s feedback.
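A minimal sketch of the transparency layer's core idea: when the agent adapts, it keeps the operator feedback that motivated the change and can replay it as a short explanation. The record structure and wording are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AdaptationRecord:
    feedback: str          # explicit operator feedback, e.g. "patrol closer to the gate"
    parameter: str         # the behavior parameter that was changed
    old_value: float
    new_value: float


def explain(history: List[AdaptationRecord]) -> List[str]:
    """Produce one concise explanation per behavior adaptation."""
    return [f"I changed {r.parameter} from {r.old_value} to {r.new_value} "
            f"because you said: '{r.feedback}'."
            for r in history]
```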
international conference on case-based reasoning | 2015
Hayley Borck; Justin Karneeb; Michael W. Floyd; Ron Alford; David W. Aha
We present the Policy and Goal Recognizer (PaGR), a case-based system for multiagent keyhole recognition. PaGR is a knowledge recognition component within a decision-making agent that controls simulated unmanned air vehicles in Beyond Visual Range combat. PaGR stores in a case the goal, observations, and policy of a hostile aircraft, and uses cases to recognize the policies and goals of newly-observed hostile aircraft. In our empirical study of PaGR’s performance, we report evidence that knowledge of an adversary’s goal improves policy recognition. We also show that PaGR can recognize when its assumptions about the hostile agent’s goal are incorrect, and can often correct these assumptions. We show that this ability improves PaGR’s policy recognition performance in comparison to a baseline algorithm.
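A sketch of the case structure the abstract describes (the goal, observations, and policy of a hostile aircraft) and a simple retrieval step over observation traces, optionally filtered by an assumed goal. The trace encoding and distance function are illustrative assumptions, not PaGR's actual representation.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PolicyGoalCase:
    goal: str                      # e.g. "intercept", "escort"
    observations: List[float]      # encoded trace of observed aircraft behavior
    policy: str                    # label of the policy that produced the trace


def recognize(trace: List[float], case_base: List[PolicyGoalCase],
              assumed_goal: Optional[str] = None) -> PolicyGoalCase:
    """Return the closest case; optionally restrict to cases matching an assumed goal."""
    candidates = [c for c in case_base if assumed_goal is None or c.goal == assumed_goal]
    candidates = candidates or case_base      # fall back if the assumption filters everything out

    def dist(c: PolicyGoalCase) -> float:
        n = min(len(c.observations), len(trace))
        return (sum(abs(a - b) for a, b in zip(c.observations[:n], trace[:n]))
                + abs(len(c.observations) - len(trace)))

    return min(candidates, key=dist)
```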
international joint conference on artificial intelligence | 2017
Michael W. Floyd; Justin Karneeb; Philip Moore; David W. Aha
We describe the Tactical Battle Manager (TBM), an intelligent agent that uses several integrated artificial intelligence techniques to control an autonomous unmanned aerial vehicle in simulated beyond-visual-range (BVR) air combat scenarios. The TBM incorporates goal reasoning, automated planning, opponent behavior recognition, state prediction, and discrepancy detection to operate in a real-time, dynamic, uncertain, and adversarial environment. We describe evidence from our empirical study that the TBM significantly outperforms an expert-scripted agent in BVR scenarios. We also report the results of an ablation study which indicates that all components of our agent architecture are needed to maximize mission performance.
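A very rough sketch of how the listed components could fit into a single decision cycle: recognize opponent behavior, predict the outcome of the current plan step, and let a detected discrepancy trigger goal reasoning and replanning. All component interfaces below are placeholders, not the TBM's actual architecture.

```python
class TacticalLoopSketch:
    """One decision cycle: recognize, predict, act, and replan on discrepancy."""

    def __init__(self, goal_reasoner, planner, recognizer, predictor, tolerance: float = 0.1):
        self.goal_reasoner = goal_reasoner
        self.planner = planner
        self.recognizer = recognizer
        self.predictor = predictor
        self.tolerance = tolerance

    def step(self, previous_expected, observed_state, goal, plan):
        # Discrepancy detection: did the world behave as the last step predicted?
        if previous_expected is not None and \
                self.predictor.discrepancy(previous_expected, observed_state) > self.tolerance:
            behavior = self.recognizer.classify(observed_state)   # opponent behavior recognition
            goal = self.goal_reasoner.select_goal(observed_state, behavior)
            plan = self.planner.plan(observed_state, goal)
        action = plan.next_action(observed_state)
        expected = self.predictor.predict(observed_state, action)  # state prediction
        return action, expected, goal, plan
```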
international conference on case-based reasoning | 2017
Michael W. Floyd; Justin Karneeb; David W. Aha
For an agent to act intelligently in a multi-agent environment it must model the capabilities of other agents. In adversarial environments, like the beyond-visual-range air combat domain we study in this paper, it may be possible to get information about teammates but difficult to obtain accurate models of opponents. We address this issue by designing an agent to learn models of aircraft and missile behavior, and use those models to classify the opponents’ aircraft types and weapons capabilities. These classifications are used as input to a case-based reasoning (CBR) system that retrieves possible opponent team configurations (i.e., the aircraft type and weapons payload per opponent). We describe evidence from our empirical study that the CBR system recognizes opponent team behavior more accurately than using the learned models in isolation. Additionally, our CBR system demonstrated resilience to limited classification opportunities, noisy air combat scenarios, and high model error.
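A sketch of the second stage the abstract describes: per-opponent classifier outputs (beliefs over aircraft type and weapon capability) are combined into a query, and the stored team configuration that best agrees with those beliefs is retrieved. The case fields and matching rule are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class TeamConfigCase:
    # One (aircraft_type, weapon_type) pair per opponent aircraft.
    configuration: List[Tuple[str, str]]


def retrieve_configuration(
        classifications: List[Dict[Tuple[str, str], float]],
        case_base: List[TeamConfigCase]) -> TeamConfigCase:
    """classifications[i] maps (aircraft_type, weapon_type) -> probability for opponent i."""
    def score(case: TeamConfigCase) -> float:
        if len(case.configuration) != len(classifications):
            return -1.0
        # Sum the classifiers' belief in each aircraft/weapon pair the case proposes.
        return sum(probs.get(pair, 0.0)
                   for pair, probs in zip(case.configuration, classifications))

    return max(case_base, key=score)
```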