Leonard A. Breslow
United States Naval Research Laboratory
Publications
Featured research published by Leonard A. Breslow.
Knowledge Engineering Review | 1997
Leonard A. Breslow; David W. Aha
Induced decision trees are an extensively researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
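As a rough illustration of one family of approaches the survey organizes, the sketch below implements reduced-error pruning, a standard post-pruning method; the tree representation and toy data are invented for illustration and are not drawn from the survey.

```python
# Minimal sketch of reduced-error pruning: collapse a subtree to a leaf
# whenever doing so does not hurt accuracy on a held-out pruning set.
class Node:
    def __init__(self, feature=None, children=None, label=None):
        self.feature = feature          # feature tested at this node
        self.children = children or {}  # feature value -> subtree
        self.label = label              # majority class at this node

def classify(node, example):
    if not node.children:
        return node.label
    child = node.children.get(example.get(node.feature))
    return classify(child, example) if child else node.label

def accuracy(node, data):
    return sum(classify(node, x) == y for x, y in data) / len(data)

def prune(node, data):
    """Bottom-up reduced-error pruning against a held-out set."""
    if not node.children or not data:
        return node
    for value, child in list(node.children.items()):
        subset = [(x, y) for x, y in data if x.get(node.feature) == value]
        node.children[value] = prune(child, subset)
    subtree_acc = accuracy(node, data)
    saved, node.children = node.children, {}   # tentatively collapse
    if accuracy(node, data) >= subtree_acc:
        return node                            # simpler, no accuracy loss
    node.children = saved                      # otherwise restore subtree
    return node

# Toy tree: both branches predict the same class, so pruning is free.
root = Node(feature="color", label="pos",
            children={"red": Node(label="pos"), "blue": Node(label="pos")})
pruning_set = [({"color": "red"}, "pos"), ({"color": "blue"}, "pos")]
root = prune(root, pruning_set)
print("pruned to a leaf:", not root.children)  # True
```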
Applied Intelligence | 2001
David W. Aha; Leonard A. Breslow; Héctor Muñoz-Avila
Conversational case-based reasoning (CCBR) was the first widespread commercially successful form of case-based reasoning. Historically, commercial CCBR tools conducted constrained human-user dialogues and targeted customer support tasks. Due to their simple implementation of CBR technology, these tools were almost ignored by the research community (until recently), even though their use introduced many interesting applied research issues. We detail our progress on addressing three of these issues: simplifying case authoring, dialogue inferencing, and interactive planning. We describe evaluations of our approaches on these issues in the context of NaCoDAE and HICAP, our CCBR tools. In summary, we highlight important CCBR problems, evaluate approaches for solving them, and suggest alternatives to be considered for future research.
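To make the CCBR setting concrete, here is a minimal sketch of the question-asking retrieval loop such tools conduct; the case format, scoring rule, and stopping criterion are simplified assumptions, not NaCoDAE's actual implementation.

```python
# Minimal sketch of a conversational CBR loop: score cases against the
# answers gathered so far, ask the most informative unanswered question,
# and stop when a case's questions are fully matched.
from collections import Counter

cases = [  # each case: question->answer pairs plus a solution
    {"qa": {"printer on?": "yes", "cable attached?": "no"},
     "solution": "Attach the printer cable."},
    {"qa": {"printer on?": "no"},
     "solution": "Turn the printer on."},
    {"qa": {"printer on?": "yes", "cable attached?": "yes",
            "driver installed?": "no"},
     "solution": "Install the printer driver."},
]

def score(case, answers):
    """Matched answers count for, mismatched against (a common CCBR rule)."""
    return sum(1 if case["qa"][q] == a else -1
               for q, a in answers.items() if q in case["qa"])

def next_question(answers):
    """Ask the unanswered question shared by the most top-ranked cases."""
    ranked = sorted(cases, key=lambda c: score(c, answers), reverse=True)
    pending = Counter(q for c in ranked[:2] for q in c["qa"]
                      if q not in answers)
    return pending.most_common(1)[0][0] if pending else None

answers = {}
while (q := next_question(answers)):
    answers[q] = input(q + " ")                   # user supplies an answer
    best = max(cases, key=lambda c: score(c, answers))
    if score(best, answers) >= len(best["qa"]):   # all questions matched
        print("Suggested solution:", best["solution"])
        break
```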
International Conference on Case-Based Reasoning | 1997
David W. Aha; Leonard A. Breslow
Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance.
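As a concrete illustration of guideline-based library revision, the sketch below audits a toy case library against two common design guidelines; the guideline set and library format are illustrative assumptions, not Clire's actual revision algorithm.

```python
# Minimal sketch of auditing a conversational case library against two
# design guidelines: cases should not repeat questions, and no two cases
# should be indistinguishable by their question sets.
library = [
    {"name": "case-1", "questions": ["printer on?", "cable attached?"],
     "action": "attach cable"},
    {"name": "case-2", "questions": ["printer on?", "cable attached?"],
     "action": "replace cable"},
    {"name": "case-3", "questions": ["printer on?", "printer on?"],
     "action": "power cycle"},
]

def audit(library):
    problems, seen = [], {}
    for case in library:
        qs = case["questions"]
        if len(set(qs)) != len(qs):           # guideline: no duplicate questions
            problems.append(f"{case['name']}: repeats a question")
        key = frozenset(qs)
        if key in seen:                       # guideline: cases distinguishable
            problems.append(f"{case['name']}: indistinguishable from {seen[key]}")
        else:
            seen[key] = case["name"]
    return problems

for p in audit(library):
    print("guideline violation:", p)
```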
Lecture Notes in Computer Science | 1998
David W. Aha; Tucker Maney; Leonard A. Breslow
Dialogue inferencing is the knowledge-intensive process of inferring aspects of a user's problem from its partial description. Conversational case-based reasoning (CCBR) systems, which interactively and incrementally elicit a user's problem description, suffer from poor retrieval efficiency (i.e., they prompt the user with questions that the user has already implicitly answered) unless they perform dialogue inferencing. The standard method for dialogue inferencing in CCBR systems requires library designers to supply explicit inferencing rules. This approach is problematic (e.g., maintenance is difficult). We introduce an alternative approach in which the CCBR system guides the library designer in building a domain model. This model and the partial problem description are then given to a query retrieval system (PARKA-DB) to infer any implied answers during a conversation. In an initial empirical evaluation in the NaCoDAE CCBR tool, our approach improved retrieval efficiency without sacrificing retrieval precision.
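The following sketch illustrates the core idea of inferring implied answers from a partial description; it uses flat implication rules for brevity, whereas the paper's approach queries a structured domain model through PARKA-DB.

```python
# Minimal sketch of dialogue inferencing: forward-chain over a small
# domain model so the system never asks a question the user has already
# implicitly answered. Rules and question names are hypothetical.
rules = [
    # (antecedent question, answer) -> (implied question, implied answer)
    (("error message?", "out of paper"), ("printer on?", "yes")),
    (("printer on?", "yes"),             ("plugged in?", "yes")),
]

def infer_implied(answers):
    """Repeatedly apply rules until no new answers can be derived."""
    derived = dict(answers)
    changed = True
    while changed:
        changed = False
        for (q, a), (iq, ia) in rules:
            if derived.get(q) == a and iq not in derived:
                derived[iq] = ia
                changed = True
    return derived

print(infer_implied({"error message?": "out of paper"}))
# -> also answers "printer on?" and "plugged in?" without asking
```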
Human Factors | 2014
Daniel Gartenberg; Leonard A. Breslow; J. Malcolm McCurry; J. Greg Trafton
Objective: We describe a novel concept, situation awareness recovery (SAR), and we identify perceptual and cognitive processes that characterize SAR. Background: Situation awareness (SA) is typically described in terms of perceiving relevant elements of the environment, comprehending how those elements are integrated into a meaningful whole, and projecting that meaning into the future. Yet SA fluctuates during the time course of a task, making it important to understand the process by which SA is recovered after it is degraded. Method: We investigated SAR using different types of interruptions to degrade SA. In Experiment 1, participants watched short videos of an operator performing a supervisory control task, and then the participants were either interrupted or not interrupted, after which SA was assessed using a questionnaire. In Experiment 2, participants performed a supervisory control task in which they guided vehicles to their respective targets and either experienced an interruption, during which they performed a visual search task in a different panel, or were not interrupted. Results: The SAR processes we identified included shorter fixation durations, an increased number of objects scanned, longer resumption lags, and a greater likelihood of refixating on objects that were previously looked at. Conclusions: We interpret these findings in terms of the memory-for-goals model, which suggests that SAR consists of increased scanning in order to compensate for decay, and previously viewed cues act as associative primes that reactivate memory traces of goals and plans.
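For concreteness, the sketch below computes the kinds of eye-movement measures used here to characterize SAR from a fixation log; the record format and values are fabricated for illustration.

```python
# Minimal sketch of SAR-style eye-movement measures: mean fixation
# duration, distinct objects scanned, and refixations on previously
# viewed objects. The fixation records are invented.
fixations = [  # (object fixated, duration in ms), in temporal order
    ("uav-1", 310), ("target-a", 285), ("uav-2", 240),
    ("uav-1", 180), ("map", 150), ("uav-1", 160),
]

durations = [d for _, d in fixations]
objects = [o for o, _ in fixations]

mean_duration = sum(durations) / len(durations)
distinct_objects = len(set(objects))
refixations = sum(1 for i, o in enumerate(objects) if o in objects[:i])

print(f"mean fixation duration: {mean_duration:.0f} ms")
print(f"objects scanned: {distinct_objects}")
print(f"refixations on previously viewed objects: {refixations}")
```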
Journal of Experimental Psychology: Applied | 2009
Leonard A. Breslow; J. Gregory Trafton; Raj M. Ratwani
Previous research has shown that multicolored scales are superior to ordered brightness scales for supporting identification tasks on complex visualizations (categorization, absolute numeric value judgments, etc.), whereas ordered brightness scales are superior for relative comparison tasks (greater/less). We examined the processes by which such tasks are performed. By studying eye movements and by comparing performance on scales of different sizes, we argued that (a) people perform identification tasks by conducting a serial visual search of the legend, whose speed is sensitive to the number of scale colors and the discriminability of the colors; and (b) people perform relative comparison tasks using different processes for multicolored versus brightness scales. With multicolored scales, they perform a parallel search of the legend, whose speed is relatively insensitive to the size of the scale, whereas with brightness scales, people usually directly compare the target colors in the visualization, while making little reference to the legend. Performance of comparisons was relatively robust against increases in scale size, whereas performance of identifications deteriorated markedly, especially with brightness scales, once scale sizes reached 10 colors or more.
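The two search processes proposed here can be caricatured computationally as follows; the timing constants are arbitrary illustrations, not parameters fitted to the experiments.

```python
# Minimal sketch contrasting serial legend search (identification tasks;
# cost grows with scale size) with parallel legend search (comparison
# tasks with multicolored scales; largely insensitive to scale size).
def serial_search_rt(scale_size, per_item_ms=180, base_ms=400):
    """Inspect legend entries one at a time; on average half the
    entries are checked before the match is found."""
    return base_ms + per_item_ms * (scale_size + 1) / 2

def parallel_search_rt(scale_size, base_ms=650):
    """Legend entries are assessed in parallel, so scale size
    contributes almost nothing to response time."""
    return base_ms

for n in (4, 10, 16):
    print(f"{n:2d} colors: serial {serial_search_rt(n):5.0f} ms, "
          f"parallel {parallel_search_rt(n):5.0f} ms")
```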
Human Factors | 2009
Leonard A. Breslow; Raj M. Ratwani; J. Gregory Trafton
Objective: Computational models of identification and relative comparison tasks performed on color-coded data visualizations were presented and evaluated against two experiments. In this context, the possibility of a dual-use color scale, useful for both tasks, was explored, and the use of the legend was a major focus. Background: Multicolored scales are superior to ordered brightness scales for identification tasks, such as determining the absolute numeric value of a represented item, whereas ordered brightness scales are superior for relative comparison tasks, such as determining which of two represented items has a greater value. Method: Computational models were constructed for these tasks, and their predictions were compared with the results of two experiments. Results: The models fit the experimental results well. A multicolored, brightness-ordered dual-use scale supported high accuracy on both tasks and fast responses on a comparison task but relatively slower responses on the identification task. Conclusion: Identification tasks are solved by a serial visual search of the legend, whose speed and accuracy are a function of the discriminability of the color scales. Comparison tasks with multicolored scales are performed by a parallel search of the legend; with brightness scales, comparison tasks are generally solved by a direct comparison between colors on the visualization, without reference to the legend. Finally, it is possible to provide users a dual-use color scale effective on both tasks. Application: Trade-offs that must typically be made in the design of color-coded visualizations between speed and accuracy or between identification and comparison tasks may be mitigated.
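As an illustration of the dual-use idea, the sketch below generates a scale whose colors differ in hue (aiding identification) while increasing monotonically in lightness (aiding comparison); the specific hues and lightness ramp are invented, not the scale evaluated in the paper.

```python
# Minimal sketch of a dual-use color scale: distinct hues with a
# strictly increasing lightness ramp.
import colorsys

def dual_use_scale(n):
    """Return n RGB colors, distinct in hue, ordered in lightness."""
    scale = []
    for i in range(n):
        hue = i / n                           # spread hues around the wheel
        lightness = 0.25 + 0.5 * i / (n - 1)  # dark -> light, in order
        r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.9)
        scale.append((round(r * 255), round(g * 255), round(b * 255)))
    return scale

for rgb in dual_use_scale(6):
    print("#%02x%02x%02x" % rgb)
```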
Lecture Notes in Computer Science | 2000
Rosina O. Weber; David W. Aha; Héctor Muñoz-Avila; Leonard A. Breslow
Lessons learned processes, and software systems that support them, have been developed by many organizations (e.g., all U.S. military branches, NASA, several Department of Energy organizations, the Construction Industry Institute). Their purpose is to promote the dissemination of knowledge gained from the experiences of an organization's employees. Unfortunately, lessons learned systems are usually ineffective because they invariably introduce new processes when, instead, they should be embedded into the processes that they are meant to improve. We developed an embedded case-based approach for lesson dissemination and reuse that brings lessons to the attention of users rather than requiring them to fetch lessons from a standalone software tool. We demonstrate this active lessons delivery architecture in the context of HICAP, a decision support tool for plan authoring. We also show the potential of active lessons delivery to increase plan quality for a new travel domain.
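The following sketch shows active lesson delivery in miniature: lessons are matched against the plan step being authored and pushed to the user when their conditions apply; the lesson format and matching rule are illustrative assumptions, not HICAP's implementation.

```python
# Minimal sketch of active lesson delivery: lessons are surfaced during
# plan authoring rather than fetched from a standalone repository.
lessons = [
    {"applies_to": "book flight", "conditions": {"season": "winter"},
     "suggestion": "Allow an extra connection day for weather delays."},
    {"applies_to": "reserve hotel", "conditions": {},
     "suggestion": "Confirm the negotiated rate before booking."},
]

def lessons_for(task, context):
    """Return lessons whose task and conditions match the current step."""
    return [l["suggestion"] for l in lessons
            if l["applies_to"] == task
            and all(context.get(k) == v for k, v in l["conditions"].items())]

# As the user authors each plan step, a lesson check fires automatically:
plan = ["book flight", "reserve hotel"]
context = {"season": "winter"}
for step in plan:
    for tip in lessons_for(step, context):
        print(f"[lesson for '{step}'] {tip}")
```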
IEEE Transactions on Human-Machine Systems | 2014
Leonard A. Breslow; Daniel Gartenberg; J. Malcolm McCurry; J. Gregory Trafton
Crandall et al. and Cummings and Mitchell introduced fan-out as a measure of the maximum number of robots a single human operator can supervise in a given single-human-multiple-robot system. Fan-out is based on the time constraints imposed by limitations of the robots and of the supervisor, e.g., limitations in attention. Adapting their work, we introduced a dynamic model of operator overload that predicts failures in supervisory control in real time, based on fluctuations in time constraints and in the supervisor's allocation of attention, as assessed by eye fixations. Operator overload was assessed by damage incurred by unmanned aerial vehicles when they traversed hazard areas. The model generalized well to variants of the baseline task. We then incorporated the model into the system, where it predicted in real time when an operator would fail to prevent vehicle damage and alerted the operator to the threat at those times. These model-based adaptive cues reduced the damage rate by one-half relative to a control condition with no cues.
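A minimal caricature of such a real-time overload predictor appears below; the queueing assumptions, data fields, and service time are invented for illustration and do not reproduce the paper's model.

```python
# Minimal sketch of real-time overload prediction: compare each
# vehicle's time-to-hazard against the attention the operator can
# supply (one vehicle at a time, attention inferred from fixations).
def overload(vehicles, fixated_vehicle, service_time_s=8.0):
    """Return vehicles predicted to reach a hazard before the operator,
    currently attending elsewhere, can service them."""
    at_risk, queue_delay = [], 0.0
    for v in sorted(vehicles, key=lambda v: v["secs_to_hazard"]):
        if v["id"] == fixated_vehicle:
            continue                      # already being attended to
        queue_delay += service_time_s     # operator handles one at a time
        if v["secs_to_hazard"] < queue_delay:
            at_risk.append(v["id"])
    return at_risk

vehicles = [
    {"id": "uav-1", "secs_to_hazard": 5.0},
    {"id": "uav-2", "secs_to_hazard": 30.0},
    {"id": "uav-3", "secs_to_hazard": 12.0},
]
for vid in overload(vehicles, fixated_vehicle="uav-2"):
    print(f"ALERT: {vid} predicted to enter hazard before it can be serviced")
```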
International Conference on Case-Based Reasoning | 2003
J. William Murdock; David W. Aha; Leonard A. Breslow
Identifying potential terrorist threats is a crucial task, especially in our post-9/11 world. This task is performed by intelligence analysts, who search for threats in the context of an overwhelming amount of data. We describe AHEAD (Analogical Hypothesis Elaborator for Activity Detection), a knowledge-rich post-processor that analyzes automatically generated hypotheses using an interpretive case-based reasoning methodology to help analysts understand and evaluate the hypotheses. AHEAD first attempts to retrieve a functional model of a process, represented in the Task-Method-Knowledge framework (Stroulia & Goel, 1995; Murdock & Goel, 2001), to identify the context of a given hypothesized activity. If retrieval succeeds, AHEAD then determines how the hypothesis instantiates the process. Finally, AHEAD generates arguments that explain how the evidence justifies and/or contradicts the hypothesis according to this instantiated process. Currently, we have implemented AHEAD's case (i.e., model) retrieval step and its user interface for displaying and browsing arguments in a human-readable form. In this paper, we describe AHEAD and detail its first evaluation. We report positive results including improvements in speed, accuracy, and confidence for users analyzing hypotheses about detected threats.
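The retrieve-then-argue flow can be sketched as follows; the process models, evidence features, and overlap scoring are hypothetical stand-ins for the Task-Method-Knowledge representation and AHEAD's actual retrieval step.

```python
# Minimal sketch of retrieve-then-argue: fetch the stored process model
# that best matches a hypothesized activity, then sort the evidence
# into support for, or gaps in, what that model expects.
process_models = [
    {"name": "smuggling operation",
     "expected": {"night movement", "cash purchase", "rented warehouse"}},
    {"name": "legitimate import business",
     "expected": {"customs filings", "rented warehouse", "cash purchase"}},
]

def retrieve(evidence):
    """Pick the model with the greatest overlap with observed evidence."""
    return max(process_models, key=lambda m: len(m["expected"] & evidence))

def argue(model, evidence):
    """Split the model's expectations into supported and missing steps."""
    return model["expected"] & evidence, model["expected"] - evidence

evidence = {"night movement", "rented warehouse", "unexplained income"}
model = retrieve(evidence)
supports, gaps = argue(model, evidence)
print(f"hypothesis context: {model['name']}")
print("evidence supporting:", ", ".join(sorted(supports)))
print("expected but missing:", ", ".join(sorted(gaps)))
```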