
Publication


Featured research published by Steven R. Haynes.


Conference on Computer Supported Cooperative Work | 2004

Situating evaluation in scenarios of use

Steven R. Haynes; Sandeep Purao; Amie L. Skattebo

We report on the use of scenario-based methods for evaluating collaborative systems. We describe the method, the case study where it was applied, and provide results of its efficacy in the field. The results suggest that scenario-based evaluation is effective in helping to focus evaluation efforts and in identifying the range of technical, human, organizational and other contextual factors that impact system success. The method also helps identify specific actions, for example, prescriptions for design to enhance system effectiveness. However, we found the method somewhat less useful for identifying the measurable benefits gained from a CSCW implementation, which was one of our primary goals. We discuss challenges faced applying the technique, suggest recommendations for future research, and point to implications for practice.


Journal of Homeland Security and Emergency Management | 2008

Emergency Management Planning as Collaborative Community Work

Wendy A. Schafer; John M. Carroll; Steven R. Haynes; Stephen Abrams

Emergencies often have causes and effects that are global. However, emergencies are also inherently local: they occur in a particular place and point in time. While it is critical for governments and society to better organize emergency management top-down, it is also important to become more aware of local community-level values, planning, involvement, knowledge, and skill. Local communities plan collaboratively for potential emergencies of varying scales.

Our discipline of Human-Computer Interaction studies the interaction between people and computers. Researchers in this field consider how information technology affects emergency management. They aim to improve emergency management through the design of useful and novel interfaces to technology. The purpose of our work was to take a broader perspective on emergency management and investigate the models and patterns of emergency-related work practices. In particular, we examined emergency management from a local community perspective. This focus on local communities partly stems from our prior research on community groups and their use of information technology. It is also motivated by the realization that emergencies are local events, which happen in communities.

This paper reports on a study of one community's emergency planning activities. Five aspects of community preparedness are discussed: collaborative efforts, local area details, local culture, geographic information, and emergency plans, and a case study provides concrete examples of each. Local community preparedness is complex and gives rise to many collaboration issues. Revealing this complexity, the paper offers some implications for community emergency management technology.


Conference on Computer Supported Cooperative Work | 2009

Scenario-Based Methods for Evaluating Collaborative Systems

Steven R. Haynes; Sandeep Purao; Amie L. Skattebo

Evaluating collaborative systems remains a significant challenge. Most evaluation methods approach the problem from one of two extremes: focused evaluation of specific system features, or broad ethnographic investigations of system use in context. In this paper, we develop and demonstrate a middle ground for evaluation: explicit reflections on scenarios of system use coupled with analysis of the consequences of these use scenarios, represented as claims. Extending prior work in scenario-based design and claims analysis, we develop a framework for a multi-perspective, multi-level evaluation of collaborative systems called SWIMs: scenario walkthrough and inspection methods. This approach is centered on the capture, aggregation, and analysis of users’ reflections on system support for specific scenarios. We argue that this approach not only provides the contextual sensitivity and use centricity of ethnographic techniques, but also sufficient structure for method replication, which is common to more feature-based evaluation techniques. We demonstrate with an extensive field study how SWIMs can be used for summative assessment of system performance and organizational contributions, and formative assessment to guide system and feature re-design. Results from the field study provide preliminary indications of the method’s effectiveness and suggest directions for future research.


Systems and Information Engineering Design Symposium | 2005

Increasing efficiency of the development of user models

Geoffrey P. Morgan; Steven R. Haynes; Frank E. Ritter; Mark A. Cohen

This paper introduces Herbal, a high-level behavior representation language for creating AI agents and cognitive models. It describes the lessons from other high-level modeling languages that informed the design of Herbal and that will inform other high-level behavior representation languages. We describe a model built in Herbal to illustrate its use and application. The paper concludes that languages like Herbal can help explain the design intent of intelligent agents and cognitive models, and make them easier to create, modify, and understand. These results appear to be particularly true where the model reuses a lot of its own structures.


AI Magazine | 2010

Applying Software Engineering to Agent Development

Mark A. Cohen; Frank E. Ritter; Steven R. Haynes

Developing intelligent agents and cognitive models is a complex software engineering activity. This article shows how all intelligent agent creation tools can be improved by taking advantage of established software engineering principles such as high-level languages, maintenance-oriented development environments, and software reuse. We describe how these principles have been realized in the Herbal integrated development environment, a collection of tools that allows agent developers to exploit modern software engineering principles.


Human Factors in Computing Systems | 2009

Design research as explanation: perceptions in the field

Steven R. Haynes; John M. Carroll; Thomas George Kannampallil; Lu Xiao; Paula M. Bach

We report results from interviews with HCI design researchers on their perceptions of how their research relates to the more traditional scientific goal of providing explanations. Theories of explanation are prominent in the physical and natural sciences, psychology, the social sciences, and engineering. Little work, though, has so far addressed the special case of how results from reflective design of interactive systems can help provide explanations. We found conceptions of explanation in design research to be broader and more inclusive than those commonly found in the philosophy of science. We synthesized concepts from the interviews into a framework which may help researchers understand how their contributions relate to both classical and emergent conceptions of explanation.


Designing Interactive Systems | 2006

Collaborative architecture design and evaluation

Steven R. Haynes; Amie L. Skattebo; Jonathan A. Singel; Mark A. Cohen; Jodi L. Himelright

In this paper we describe a collaborative environment created to support distributed evaluation of a complex system architecture. The approach couples an interactive architecture browser with collaborative walkthroughs of an evolving architectural representation. The collaborative architecture browser was created to facilitate involvement of project stakeholders from geographically dispersed, heterogeneous organizations. The paper provides a rationale for the approach, describes the system created to facilitate distributed-collaborative architecture evaluation, and reports on evaluation results from an ongoing, very-large scale application integration project with the United States Marine Corps. The paper contributes to research on early architecture requirements engineering, architecture evaluation, and software tools to support distributed-collaborative design.


Hawaii International Conference on System Sciences | 2007

Leveraging and Limiting Practical Drift in Emergency Response Planning

Steven R. Haynes; Wendy A. Schafer; John M. Carroll

A knowledge gap exists between what emergency responders know from their direct experience and what emergency planners know from analysis and reflection. The theory of practical drift suggests that shared understanding between planners and responders may break down as local response practice adapts and evolves with respect to static planning knowledge. In this article we discuss how practical drift impacts emergency preparedness and, using Schön's theory of reflective practice, describe how design of collaborative technology might help mitigate this knowledge disparity. We draw on two field studies, one national and one at the local level, to illustrate dimensions of the problem space.


Journal of the Association for Information Science and Technology | 2005

Optimizing anti-terrorism resource allocation

Steven R. Haynes; Thomas George Kannampallil; Lawrence L. Larson; Nitesh Garg

Since spring of 2002 we have been working on a methodology, decision model, and cognitive support system to aid with effective allocation of anti-terrorism (AT) resources at Marine Corps installations. The work has so far been focused on the military domain, but the model and the software tools developed to implement it are generalizable to a range of commercial and public-sector settings including industrial parks, corporate campuses, and civic facilities. The approach suggests that anti-terrorism decision makers determine mitigation project allocations using measures of facility priority and mitigation project utility as inputs to the allocation algorithm. The three-part hybrid resource allocation model presented here uses multi-criteria decision-making techniques to assess facility (e.g., building, hangar) priorities, a utility function to calculate anti-terrorism project mitigation values (e.g., protective glazing, wall coatings, and stand-off barriers) and optimization techniques to determine resource allocations across multiple, competing AT mitigation projects. The model has been realized in a cognitive support system developed as a set of loosely coupled Web services. The approach, model, and cognitive support system have been evaluated using the cognitive walkthrough method with prospective system users in the field. In this paper we describe the domain, the problem space, the decision model, the cognitive support system and summary results of early model and system evaluations.
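The three-part model described above can be sketched in simplified form: a weighted-criteria facility priority, a per-project mitigation value, and an allocation step across competing projects under a budget. All names, weights, criteria, and the greedy value-per-cost heuristic below are illustrative assumptions, not the authors' actual algorithm or data.

```python
# Illustrative sketch of a three-part allocation model:
# (1) multi-criteria facility priority, (2) project mitigation value,
# (3) budget-constrained allocation. Everything here is assumed for
# illustration -- not the model from the paper.

def facility_priority(scores, weights):
    """Multi-criteria priority: weighted sum of normalized criterion scores."""
    return sum(scores[c] * w for c, w in weights.items())

def project_value(priority, mitigation_utility):
    """Value of funding a mitigation project at a given facility."""
    return priority * mitigation_utility

def allocate(projects, budget):
    """Greedy stand-in for the optimization step: fund the highest
    value-per-cost projects first until the budget runs out."""
    ranked = sorted(projects, key=lambda p: p["value"] / p["cost"], reverse=True)
    funded = []
    for p in ranked:
        if p["cost"] <= budget:
            funded.append(p["name"])
            budget -= p["cost"]
    return funded

# Hypothetical facility and projects (criteria and costs are invented).
weights = {"occupancy": 0.5, "mission_criticality": 0.3, "symbolic_value": 0.2}
hangar = facility_priority(
    {"occupancy": 0.9, "mission_criticality": 0.8, "symbolic_value": 0.4}, weights
)

projects = [
    {"name": "protective glazing", "cost": 40, "value": project_value(hangar, 0.7)},
    {"name": "stand-off barriers", "cost": 25, "value": project_value(hangar, 0.6)},
    {"name": "wall coatings",      "cost": 50, "value": project_value(hangar, 0.3)},
]
result = allocate(projects, budget=70)
print(result)  # ['stand-off barriers', 'protective glazing']
```

A real implementation would replace the greedy step with a proper optimization (e.g., integer programming over the competing projects), but the sketch shows how priorities and utilities combine into a single allocation input.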


Human Factors in Computing Systems | 2009

It's what it's in: evaluating the usability of large-scale integrated systems

Steven R. Haynes

Today's systems are often composed of many heterogeneous, distributed components including computing and communications infrastructure, other hardware devices, and system and application software. Evaluating the usability of these systems is difficult, especially in the early stages of development when their use cannot be observed in context. While many different evaluation methods have been proposed for evaluating stand-alone technologies, evaluating very large-scale integrated systems requires techniques appropriate both for individual components and for the whole of the human-computing context being designed. Results from the case study reported here suggest that the usability of any individual application is highly determined by its integration with other applications in the distributed system. Modern evaluation methods need to account for this integration in both their perspective and the measures they use.

Collaboration


Steven R. Haynes's most frequent collaborators and their affiliations.

Top Co-Authors

Frank E. Ritter (Pennsylvania State University)
John M. Carroll (Pennsylvania State University)
Mark A. Cohen (Lock Haven University of Pennsylvania)
Thomas George Kannampallil (University of Illinois at Chicago)
Amie L. Skattebo (Pennsylvania State University)
Isaac G. Councill (Pennsylvania State University)
Jonathan A. Singel (Pennsylvania State University)
Lawrence L. Larson (Pennsylvania State University)
Mary Beth Rosson (Pennsylvania State University)
Nitesh Garg (Pennsylvania State University)