Publication


Featured research published by Walter Warwick.


Journal of Cognitive Engineering and Decision Making | 2009

Convergence and Constraints Revealed in a Qualitative Model Comparison

Christian Lebiere; Cleotilde Gonzalez; Walter Warwick

We compared and contrasted independently developed computational models of human performance in a common dynamic decision-making task. The task, called dynamic stocks and flows, is simple and tractable enough for laboratory experiments yet exhibits many characteristics of macrocognition. A macrocognitive model was developed using a computational instantiation of recognition-primed decision making. A microcognitive model was developed using the Adaptive Control of Thought – Rational (ACT-R) cognitive architecture. Both models followed an instance-based learning paradigm and displayed striking similarities, including their constraints, limitations, and the key breakthrough that enabled satisfactory (though still short of human-like) performance, suggesting the emergence of a general design pattern. On the basis of this comparison, we argue that although some substantive differences remain, microcognitive and macrocognitive approaches provide complementary rather than contradictory accounts of human behavior.
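For context, the control loop of the task is simple: the stock changes each trial by the environment inflow minus the outflow the decision maker chooses, and an instance-based agent reuses actions that previously left the stock near the goal. The Python sketch below is purely illustrative, with simplified dynamics and invented parameter values; it is not a reconstruction of either model compared in the paper.

import random

def run_dsf(trials=100, goal=6.0, stock=4.0, env_inflow=1.0):
    """Toy dynamic stocks and flows loop with an instance-based agent.

    Each trial the agent chooses an outflow, the stock is updated by the
    environment inflow minus that outflow, and the (situation, action,
    outcome) instance is stored; later trials reuse the stored action whose
    outcome was closest to the goal in similar situations.
    """
    instances = []  # (stock_before, action, abs_error_after)
    for _ in range(trials):
        before = stock
        similar = [inst for inst in instances if abs(inst[0] - before) < 1.0]
        if similar:
            action = min(similar, key=lambda inst: inst[2])[1]  # reuse best past action
        else:
            action = random.uniform(0.0, 2.0 * env_inflow)      # no experience yet: explore
        stock = before + env_inflow - action                    # simplified task dynamics
        instances.append((before, action, abs(stock - goal)))
    return stock

print(run_dsf())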


Artificial General Intelligence | 2010

Editorial: Cognitive Architectures, Model Comparison and AGI

Christian Lebiere; Cleotilde Gonzalez; Walter Warwick

Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the broader challenge of using cognitive architectures to achieve Artificial General Intelligence.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010

Advances in Modeling Situation Awareness, Decision Making, and Performance in Complex Operational Environments

Cheryl A. Bolstad; Haydee M. Cuevas; Erik S. Connors; Cleotilde Gonzalez; Peter W. Foltz; Nathan Ka Ching Lau; Walter Warwick

As organizations continue to evolve and integrate even more advanced information technology capabilities, traditional cognitive models of human performance, both at the individual and team levels, must similarly mature in order to flexibly adapt to the challenges faced by teams performing in today's complex operational environments. The overall goal of this panel session will be to highlight advances made in modeling situation awareness, decision making, and performance in a variety of domains and applications. The panelists draw from their varied experiences in academia and industry to offer their commentary on a diverse range of approaches to modeling these complex cognitive processes. The panel also seeks to identify critical areas warranting further investigation.


International Conference on Applied Human Factors and Ergonomics | 2018

No Representation Without Integration! Better Cognitive Modeling Through Interoperability

Walter Warwick; Christian Lebiere; Stuart Rodgers

Historically, cognitive modeling has been an exercise in theory confirmation. “Cognitive architectures” were advanced as computational instantiations of theories that could be used to model various aspects of cognition and then be put to empirical test by comparing the simulation-based predictions of the model against the actual performance of human subjects. More recently, cognitive architectures have been recognized as potentially valuable tools in the development of software agents—intelligent routines that can either mimic or support human performance in complex domains. While the introduction of cognitive architectures to what has been regarded as the exclusive province of artificial intelligence is a welcome turn, the history of cognitive modeling casts a long shadow. In particular, there is a tendency to apply cognitive architectures as monolithic, one-off solutions. This runs counter to many of the best practices of modern software engineering, which puts a premium on developing modular and reusable solutions. This paper describes the development of a novel software infrastructure that supports interoperability among cognitive architectures.


International Conference on Applied Human Factors and Ergonomics | 2018

An Integrated Model of Human Cyber Behavior

Walter Warwick; Norbou Buchler; Laura Marusich

Agent-based models are commonplace in the simulation-based analysis of cyber security. But as useful as it is to model, for example, adversarial tactics in a simulated cyber attack or realistic traffic in a study of network vulnerability, it is increasingly clear that human error is one of the greatest threats to cyber security. From this perspective, the salient features of behavior are those of an agent making decisions about how to use a system, rather than an agent acting as an adversary or as a “chat bot” which functions merely as a statistical message generator. In this paper, we describe work to model a human dimension of the cyber operator, a user subject to different motivations that lead directly to differences in cyber behavior which, ultimately, lead to differences in the risk of suffering a “drive-by” malware infection.


Archive | 2017

Integrating Heterogeneous Modeling Frameworks Using the DREAMIT Workspace

Walter Warwick; Matthew Walsh; Stu Rodgers; Christian Lebiere

The history of agent development is a litany of expensive one-off solutions that are opaque to the uninitiated, difficult to maintain and impossible to re-use in novel contexts. This outcome is the unfortunate result of a tendency to apply monolithic “architectures” to agent development, which require specialists to build the models and extensive knowledge engineering and hand tuning to realize adequate performance. To address these shortcomings, we are developing methods to align agent development with best practices in software engineering. In this paper we describe an approach that promotes modularity and learning in the development and validation of intelligent agents. Specifically, our approach enables the modeler to decompose intelligent behavior as required by the problem (rather than the modeling environment), implement component behaviors using the tool best suited to those requirements and close the data loop between agent and environment early in the development process rather than as a post hoc validation step.
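As a purely illustrative sketch of the modularity idea, and not of the DREAMIT workspace itself, the Python fragment below composes two component behaviors behind a single interface and logs observation/action data on every step of the agent/environment loop; the interface and all component names are invented for the example.

from typing import Protocol

class BehaviorComponent(Protocol):
    """Common contract so components built in different modeling tools can be
    composed behind one interface (illustrative only)."""
    def act(self, observation: dict) -> dict: ...

class RuleBasedNavigator:
    """Stands in for a component authored in one framework."""
    def act(self, observation: dict) -> dict:
        return {"move": "left" if observation["x"] > 0 else "right"}

class LookupDecider:
    """Stands in for a component authored in a different framework."""
    def act(self, observation: dict) -> dict:
        return {"decision": "engage" if observation["threat"] else "hold"}

def run_agent(components, observation, steps=3):
    """Compose components and log observation/action pairs on every step,
    closing the agent/environment data loop during development."""
    log = []
    for _ in range(steps):
        actions = {}
        for component in components:
            actions.update(component.act(observation))
        log.append((dict(observation), actions))
        observation = {"x": observation["x"] - 1, "threat": not observation["threat"]}
    return log

for obs, act in run_agent([RuleBasedNavigator(), LookupDecider()], {"x": 2, "threat": True}):
    print(obs, act)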


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

General Methods for Communicating the Structure and Content of a Cognitive Model

Walter Warwick; Christian Lebiere; Randolph M. Jones; David Reitter; Stuart Rodgers; Scott A. Douglas

The modeling and simulation of human performance forces the analyst to confront a range of well-known but difficult challenges. One challenge the analyst does not seem to face is a shortage of human performance modeling tools. But because there is no uniform framework for expressing the content and structure of a human performance model, it is difficult to understand what is at stake in the implementation of a given model and all but impossible to compare and contrast different models despite the proliferation of quantitative modeling tools. The inability to communicate model structure and content is not just a practical shortcoming: it is a major impediment to assessing the validity, plausibility, and extensibility of human performance models. The latter aspect is particularly important as it prevents the incremental construction of large human performance models following standard software engineering practices. The goal of this panel discussion is to review past and ongoing efforts to develop general languages that specify cognitive models at a functional level of description. We do not expect a standard to emerge from this discussion, but rather we hope to canvass both the theoretical and practical issues that confront any attempt to develop a uniform language that describes different modeling frameworks.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Complex Systems and Human Performance Modeling

Walter Warwick; Laura Marusich; Norbou Buchler

The development of a human performance model is an exercise in complexity. Despite this, techniques that are commonplace in the study of complex dynamical systems have yet to find their way into the human performance modeler's toolbox. In this paper, we describe our efforts to develop new generative and analytical methods within a task network modeling environment. Specifically, we present task network modeling techniques for generating the inter-event time series typical of a complex system. We focus on communication patterns. In addition, we describe the associated analytical techniques needed to verify the time series. Again, while these analytical techniques will be familiar to the complexity scientist, they have significant and largely unrecognized methodological implications for the human performance modeler.
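The abstract does not say which generative process or verification statistics the authors use; as one plausible illustration, the sketch below draws heavy-tailed (Pareto) inter-event times, a distribution commonly used to mimic bursty communication patterns, and verifies the series with a maximum-likelihood (Hill) estimate of the tail exponent.

import math
import random

def generate_inter_event_times(n=10_000, alpha=1.5):
    """Draw heavy-tailed (Pareto) inter-event times, a common stand-in for
    the bursty communication patterns seen in complex systems."""
    return [random.paretovariate(alpha) for _ in range(n)]

def hill_tail_exponent(times, x_min=1.0):
    """Maximum-likelihood (Hill) estimate of the power-law tail exponent,
    one standard way to check that a generated series has the intended tail."""
    tail = [t for t in times if t >= x_min]
    return len(tail) / sum(math.log(t / x_min) for t in tail)

times = generate_inter_event_times()
print(round(hill_tail_exponent(times), 2))  # should be close to the alpha used above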


Archive | 2011

Virtual and Constructive Simulations with the GRBIL Modeling Tool

Michael Matessa; Walter Warwick

The Graph-Based Interface Language (GRBIL) tool combines aspects of virtual and constructive simulations. GRBIL can be used to set up a virtual simulation where people can interact with a simulation of an operator interface and environment. Human-in-the-loop activity can be recorded when a person performs a procedure with the simulated interface. This activity can then be automatically compiled into an operator model that can be used in constructive simulations where the operator model interacts with the simulated interface. The operator model can then make human performance predictions.
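As a hedged illustration of the record-then-compile workflow, rather than GRBIL's actual representation, the sketch below turns a recorded sequence of timestamped interface actions into a replayable procedure and derives a simple completion-time prediction; the event format and helper names are invented for the example.

from dataclasses import dataclass

@dataclass
class Event:
    time: float   # seconds since the start of the recorded session
    action: str   # e.g. "press_button:start_pump"

def compile_operator_model(recording):
    """Turn a recorded human-in-the-loop session into a replayable procedure:
    a list of (action, delay-before-action) steps."""
    steps, prev = [], 0.0
    for ev in sorted(recording, key=lambda e: e.time):
        steps.append((ev.action, ev.time - prev))
        prev = ev.time
    return steps

def predict_completion_time(steps):
    """Constructive-simulation stand-in: predicted task time is the sum of the
    inter-action delays."""
    return sum(delay for _, delay in steps)

recording = [Event(0.8, "open_valve"), Event(2.1, "start_pump"), Event(5.4, "confirm")]
model = compile_operator_model(recording)
print(predict_completion_time(model))  # ~5.4 seconds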


Journal of Cognitive Engineering and Decision Making | 2009

Editors' Introduction to the Special Issue on Developing and Understanding Computational Models of Macrocognition

Walter Warwick; Laurel Allender; John Yen

Newell and Simon (1976) once famously characterized computer science as an empirical inquiry—not merely an engineering discipline or a branch of applied mathematics but a science concerned with the discovery of the "essential nature" of symbol systems. Even more famously, they presented the physical symbol system and heuristic search hypotheses as candidate laws of the qualitative structure of intelligent systems. Their claims were not just about computers but about the nature of intelligence in general—namely, that the realization of a physical symbol system provides both necessary and sufficient conditions for intelligence and that the hallmark of such intelligence is the efficient search and testing of solutions within a problem space. It would be hard to overstate the impact of these hypotheses within the fields of artificial intelligence and cognitive science, as researchers since then have either elaborated these hypotheses or reacted to them.

Unfortunately, the impact of Newell and Simon's (1976) testable claims about the nature of intelligence has overshadowed their equally significant claims about the empirical nature of computer science and the implications for cognitive modeling. On one hand, cognitive modelers have taken to heart Newell and Simon's idea that a computer program is itself an experiment. This has led naturally to the view of computational "cognitive architectures" as theories, the emphasis on quantitative prediction, and the comparison of those predictions with human performance data drawn from well-controlled laboratory experiments. For many, cognitive science is at its most scientific when given expression in such computational terms.

On the other hand, cognitive modelers have had comparatively little to say about computer simulation itself as an object of empirical study—that is, understanding how computer simulations function as experiments—a necessary step for justifying the theoretical claims drawn from computational cognitive models. In the physical sciences, volume upon volume has been written not just about experimental design but about experimental devices, their design, their sensitivity, the scope and limits of their operations, and their suitability in exploring particular

Collaboration


Dive into Walter Warwick's collaborations.

Top Co-Authors

Christian Lebiere, Carnegie Mellon University
Amy Santamaria, Alion Science and Technology
David Reitter, Pennsylvania State University
Erik S. Connors, Pennsylvania State University
Haydee M. Cuevas, University of Central Florida
John Yen, Pennsylvania State University
Michael Matessa, Alion Science and Technology
Peter W. Foltz, University of Colorado Boulder