H. Rex Hartson
Virginia Tech
Publication
Featured research published by H. Rex Hartson.
International Journal of Human-Computer Interaction | 2001
H. Rex Hartson; Terence S. Andre; Robert C. Williges
The current variety of alternative approaches to usability evaluation methods (UEMs) designed to assess and improve usability in software systems is offset by a general lack of understanding of the capabilities and limitations of each. Practitioners need to know which methods are more effective and in what ways and for what purposes. However, UEMs cannot be evaluated and compared reliably because of the lack of standard criteria for comparison. In this article, we present a practical discussion of factors, comparison criteria, and UEM performance measures useful in studies comparing UEMs. In demonstrating the importance of developing appropriate UEM evaluation criteria, we offer operational definitions and possible measures of UEM performance. We highlight specific challenges that researchers and practitioners face in comparing UEMs and provide a point of departure for further discussion and refinement of the principles and techniques used to approach UEM evaluation and comparison.
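To make the kind of performance measures discussed concrete, here is a minimal sketch of thoroughness- and validity-style metrics for a UEM, assuming the set of "real" problems is known; the function names and the combined effectiveness figure are illustrative, not definitions taken verbatim from the article.

```python
# Illustrative sketch of UEM performance measures in the spirit of
# this article. Assumes the set of "real" usability problems is known
# (e.g., from a union of methods or a lab standard); in practice that
# set is itself an estimate.

def thoroughness(found: set[str], real: set[str]) -> float:
    """Proportion of real problems the UEM found."""
    return len(found & real) / len(real) if real else 0.0

def validity(found: set[str], real: set[str]) -> float:
    """Proportion of the UEM's reported problems that are real."""
    return len(found & real) / len(found) if found else 0.0

def effectiveness(found: set[str], real: set[str]) -> float:
    """One combined figure of merit: thoroughness times validity."""
    return thoroughness(found, real) * validity(found, real)

real = {"p1", "p2", "p3", "p4"}
found = {"p1", "p2", "p5"}          # two hits, one false positive
print(thoroughness(found, real))    # 0.5
print(validity(found, real))        # ~0.667
```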
Behaviour & Information Technology | 2003
H. Rex Hartson
In reaction to Norman's (1999) essay on misuse of the term affordance in human-computer interaction literature, this article is a concept paper affirming the importance of this powerful concept, reinforcing Norman's distinctions of terminology, and expanding on the usefulness of the concepts in terms of their application to interaction design and evaluation. We define and use four complementary types of affordance in the context of interaction design and evaluation: cognitive affordance, physical affordance, sensory affordance, and functional affordance. The terms cognitive affordance (Norman's perceived affordance) and physical affordance (Norman's real affordance) refer to parallel and equally important usability concepts for interaction design, to which sensory affordance plays a supporting role. We argue that the concept of physical affordance carries a mandatory component of utility or purposeful action (functional affordance). Finally, we provide guidelines to help designers think about how these four kinds of affordance work together naturally in contextualized HCI design or evaluation.
Human Factors in Computing Systems | 1996
H. Rex Hartson; José C. Castillo; John T. Kelso; Wayne C. Neale
Traditional user interface evaluation usually is conducted in a laboratory where users are observed directly by evaluators. However, the remote and distributed location of users on the network precludes the opportunity for direct observation in usability testing. Further, the network itself and the remote work setting have become intrinsic parts of usage patterns, difficult to reproduce in a laboratory setting, and developers often have limited access to representative users for usability testing in the laboratory. In all of these cases, the cost of transporting users or developers to remote locations can be prohibitive. These barriers have led us to consider methods for remote usability evaluation wherein the evaluator, performing observation and analysis, is separated in space and/or time from the user. The network itself serves as a bridge to take interface evaluation to a broad range of networked users, in their natural work settings. Several types of remote evaluation are defined and described in terms of their advantages and disadvantages to usability testing. The initial results of two case studies show potential for remote evaluation. Remote evaluation using video teleconferencing uses the network as a mechanism to transport video data in real time, so that the observer can evaluate user interfaces in remote locations as they are being used. Semi-instrumented remote evaluation is based on critical incident gathering by the user within the normal work context. Additionally, both methods can take advantage of automating data collection through questionnaires and instrumented applications.
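As a rough illustration of the semi-instrumented approach, the sketch below shows user-side capture of a critical incident report; every name and field in it is hypothetical, since the abstract does not specify an implementation.

```python
# Hypothetical sketch of user-side critical incident capture for
# remote usability evaluation; all names and fields are illustrative.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class CriticalIncident:
    task: str            # what the user was trying to do
    description: str     # what happened, in the user's own words
    severity: int        # user-rated severity, e.g. 1 (minor) to 5 (blocking)
    timestamp: float     # when the incident was reported

def report_incident(task: str, description: str, severity: int,
                    log_path: str = "incidents.jsonl") -> None:
    """Append one self-reported incident to a local log that can later be
    sent to evaluators, keeping the report within the normal work context."""
    incident = CriticalIncident(task, description, severity, time.time())
    with open(log_path, "a") as log:
        log.write(json.dumps(asdict(incident)) + "\n")

report_incident("save document", "Save dialog hid the filename field", 3)
```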
ACM Transactions on Information Systems | 1990
H. Rex Hartson; Antonio C. Siochi; Deborah Hix
Many existing interface representation techniques, especially those associated with UIMS, are constructional and focused on interface implementation, and therefore do not adequately support a user-centered focus. But it is in the behavioral domain of the user that interface designers and evaluators do their work. We are seeking to complement constructional methods by providing a tool-supported technique capable of specifying the behavioral aspects of an interactive system: the tasks and the actions a user performs to accomplish those tasks. In particular, this paper is a practical introduction to use of the User Action Notation (UAN), a task- and user-oriented notation for behavioral representation of asynchronous, direct manipulation interface designs. Interfaces are specified in UAN as a quasi-hierarchy of asynchronous tasks. At the lower levels, user actions are associated with feedback and system state changes. The notation makes use of visually onomatopoeic symbols and is simple enough to read with little instruction. UAN is being used by growing numbers of interface developers and researchers. In addition to its design role, current research is investigating how UAN can support production and maintenance of code and documentation.
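The sketch below shows, in ordinary code rather than UAN's symbolic notation, the structure such a description captures: user actions paired with feedback and state changes, nested into a task quasi-hierarchy. The task and its rows are invented for illustration.

```python
# Sketch of the structure a UAN task description captures: user actions
# paired with interface feedback and state changes, nested into a task
# hierarchy. The example is invented; real UAN uses a compact symbolic
# notation rather than prose strings.
from dataclasses import dataclass, field

@dataclass
class ActionRow:
    user_action: str        # e.g. "move cursor to file icon, press button"
    feedback: str           # e.g. "icon highlights"
    state_change: str       # e.g. "selection = file"

@dataclass
class Task:
    name: str
    rows: list[ActionRow] = field(default_factory=list)
    subtasks: list["Task"] = field(default_factory=list)

select_file = Task("select file", rows=[
    ActionRow("move cursor to file icon and press mouse button",
              "file icon highlights", "selected = file"),
    ActionRow("release mouse button", "highlight persists", "no change"),
])
delete_file = Task("delete file", subtasks=[select_file])
```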
Human-Computer Interaction | 1992
H. Rex Hartson; Philip D. Gray
The need for communication among a multiplicity of cooperating roles in user interface development translates into the need for a common set of interface design representation techniques. The important difference between design of the interaction part of the interface and design of the interface software calls for representation techniques with a behavioral view: a view that focuses on user interaction rather than on the software. The User Action Notation (UAN) is a user- and task-oriented notation that describes physical (and other) behavior of the user and interface as they perform a task together. The primary abstraction of the UAN is a user task. The work reported here addresses the need to identify temporal relationships within user task descriptions and to express explicitly and precisely how designers view temporal relationships among those tasks. Drawing on simple temporal concepts such as events in time and preceding and overlapping of time intervals, we identify basic temporal relationships among tasks: sequence, waiting, repeated disjunction, order independence, interruptibility, one-way interleavability, mutual interleavability, and concurrency. The UAN temporal relations, through the notion of modal logic, offer an explicit and precise representation of the specific kinds of temporal behavior that can occur in asynchronous user interaction without the need to detail all cases that might result.
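For a quick overview, the sketch below simply enumerates the temporal relationships named above; the class and member names are ours, not UAN notation.

```python
# Illustrative enumeration of the UAN temporal relationships listed
# above; naming and structure are ours, for overview only.
from enum import Enum, auto

class TemporalRelation(Enum):
    SEQUENCE = auto()                  # A then B
    WAITING = auto()                   # A, then a wait, then B
    REPEATED_DISJUNCTION = auto()      # repeatedly, A or B
    ORDER_INDEPENDENCE = auto()        # A and B each complete, in either order
    INTERRUPTIBILITY = auto()          # B may interrupt A
    ONE_WAY_INTERLEAVABILITY = auto()  # A may interleave into B, not vice versa
    MUTUAL_INTERLEAVABILITY = auto()   # A and B may interleave freely
    CONCURRENCY = auto()               # A and B may truly overlap in time

# A composite task could then be described as pairs of subtasks plus the
# relation governing them, e.g.:
composite = [("drag icon", "update status bar", TemporalRelation.CONCURRENCY)]
```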
Journal of Systems and Software | 1998
H. Rex Hartson
Methodology, theory, and practice in the field of Human–Computer Interaction (HCI) all share the goal of producing interactive software that can be used efficiently, effectively, safely, and with satisfaction. HCI is cross-disciplinary in its conduct and multidisciplinary in its roots. The central concept of HCI is usability: ease of use plus usefulness. Achieving good usability requires attention to both product and development process, particularly for the user interaction design, which should serve as requirements for the user interface software component. This paper reviews some of the theory and modeling supporting the practice of HCI, development life cycles and activities, and much of the practice that constitutes “usability engineering”. Future application areas of interest in HCI include new interaction styles, virtual environments, the World Wide Web, information visualization, and wearable computing.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2001
Terence S. Andre; H. Rex Hartson; Steven M. Belz; Faith McCreary
Although various methods exist for performing usability evaluation, they lack a systematic framework for guiding and structuring the assessment and reporting activities. Consequently, analysis and reporting of usability data are ad hoc and do not live up to their potential in cost-effectiveness, and usability engineering support tools are not well integrated. We developed the User Action Framework, a structured knowledge base of usability concepts and issues, as a framework on which to build a broad suite of usability engineering support tools. The User Action Framework helps to guide the development of each tool and to integrate the set of tools in the practitioner's working environment. An important characteristic of the User Action Framework is its own reliability in terms of consistent use by practitioners. Consistent understanding and reporting of the underlying causes of usability problems are requirements for cost-effective analysis and redesign. Thus, high reliability in terms of agreement by users on what the User Action Framework means and how it is used is essential for its role as a common foundation for the tools. Here we describe how we achieved high reliability in the User Action Framework, and we support the claim with strongly positive results of a summative reliability study conducted to measure agreement among 10 usability experts in classifying 15 different usability problems. Reliability data from the User Action Framework are also compared to data collected from nine of the same usability experts using a classic heuristic evaluation technique.
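As a sketch of what an agreement-based reliability measure looks like, the code below computes average pairwise agreement among raters classifying the same problems; the data and category labels are invented, and the study's actual statistics may well differ.

```python
# Illustrative reliability computation: average pairwise agreement among
# raters who each classify the same usability problems. The ratings and
# category labels are invented; the study's actual measures may differ.
from itertools import combinations

def pairwise_agreement(ratings: dict[str, list[str]]) -> float:
    """Mean proportion of problems on which each pair of raters
    assigned the same category."""
    agree = 0
    total = 0
    for a, b in combinations(ratings, 2):
        for cat_a, cat_b in zip(ratings[a], ratings[b]):
            agree += cat_a == cat_b
            total += 1
    return agree / total if total else 0.0

# Three raters classifying four problems into example categories:
ratings = {
    "rater1": ["planning", "translation", "assessment", "planning"],
    "rater2": ["planning", "translation", "translation", "planning"],
    "rater3": ["planning", "physical",    "assessment", "planning"],
}
print(pairwise_agreement(ratings))  # ~0.67
```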
Advanced Visual Interfaces | 1998
H. Rex Hartson; José C. Castillo
Although existing lab-based formative evaluation is frequently and effectively applied to improving usability of software user interfaces, it has limitations that have led to the concept of remote usability evaluation. Perhaps the most significant impetus for remote usability evaluation methods is the need for a project team to continue formative evaluation downstream, after deployment. The usual kinds of alpha and beta testing do not qualify as formative usability evaluation because they do not yield detailed data observed during usage and associated closely with specific task performance. Critical incident identification is arguably the single most important source of this kind of data. Consequently, we developed and evaluated a cost-effective remote usability evaluation method, based on real users self-reporting critical incidents encountered in real tasks performed in their normal working environments. Results show that users with only brief training can identify, report, and rate the severity level of their own critical incidents.
Human Factors in Computing Systems | 1998
José C. Castillo; H. Rex Hartson; Deborah Hix
In this paper, we briefly introduce the user-reported critical incident method (originally called semi-instrumented critical incident gathering [3]) for remote usability evaluation, and describe results and lessons learned in its development and use. Our findings indicate that users can, in fact, identify and report their own critical incidents.
International Journal on Digital Libraries | 2004
H. Rex Hartson; Priya Shivakumar; Manuel A. Pérez-Quiñones
This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. This problem list was then used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix: problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.
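The cost/importance technique reduces to simple arithmetic, sketched below with invented problems and estimates: each problem gets an importance-to-fix and a cost-to-fix score, and the importance-to-cost ratio orders the fix list.

```python
# Illustrative cost/importance prioritization: rank usability problems
# by the ratio of importance to fix over cost to fix. The problems and
# numeric estimates are invented for demonstration.

problems = [
    {"problem": "search results lack sort options", "importance": 8, "cost": 2},
    {"problem": "unclear download link labels",     "importance": 6, "cost": 1},
    {"problem": "inconsistent navigation layout",   "importance": 9, "cost": 6},
]

for p in problems:
    p["priority"] = p["importance"] / p["cost"]

# Highest importance-per-unit-cost first: a priority ranking for resolution.
for p in sorted(problems, key=lambda p: p["priority"], reverse=True):
    print(f'{p["priority"]:.2f}  {p["problem"]}')
```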