Publications

Featured research published by Jan Stage.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2004

New Techniques for Usability Evaluation of Mobile Systems

Jesper Kjeldskov; Jan Stage

Usability evaluation of systems for mobile computers and devices is an emerging area of research. This paper presents and evaluates six techniques for evaluating the usability of mobile computer systems in laboratory settings. The purpose of these techniques is to facilitate systematic data collection in a controlled environment and to support the identification of usability problems that are experienced in mobile use. The proposed techniques involve various aspects of physical motion combined with either the need to navigate in physical space or division of attention. The six techniques are evaluated through two usability experiments in which walking in a pedestrian street was used as a reference. Each of the proposed techniques had some similarities to testing in the pedestrian street, but none of them turned out to be completely comparable to that form of field evaluation. Seating the test subjects at a table supported identification of significantly more usability problems than any of the other proposed techniques. However, a large number of the additional problems identified using this technique were categorized as cosmetic. When the amount of physical activity was increased, the test subjects also experienced a significantly increased subjective workload.


Management Information Systems Quarterly | 1996

Controlling prototype development through risk analysis

Richard Baskerville; Jan Stage

This article presents a new approach to the management of evolutionary prototyping projects. The prototyping approach to systems development emphasizes learning and facilitates meaningful communication between systems developers and users. These benefits are important for rapid creation of flexible, usable information resources that are well-tuned to present and future business needs. The main unsolved problem in prototyping is the difficulty in controlling such projects. This problem severely limits the range of practical projects in which prototyping can be used. The new approach suggested in this article uses an explicit risk mitigation model and management process that energizes and enhances the value of prototyping in technology delivery. An action research effort validates this risk analysis approach as one that focuses management attention on consequences and priorities inherent in a prototyping situation. This approach enables appropriate risk resolution strategies to be placed in effect before the prototyping process breaks down. It facilitates consensus building through collaborative decision making and is consistent with a high degree of user involvement.


Nordic Conference on Human-Computer Interaction | 2006

It's worth the hassle!: the added value of evaluating the usability of mobile systems in the field

Christian Nielsen; Michael Toft Overgaard; Michael Bach Pedersen; Jan Stage; Sigge Stenild

The distinction between field and laboratory is classical in research methodology. In human-computer interaction, and in usability evaluation in particular, it has been a controversial topic for several years. The advent of mobile devices has revived this topic. Empirical studies that compare evaluations in the two settings are beginning to appear, but they provide very different results. This paper presents results from an experimental comparison of a field-based and a lab-based usability evaluation of a mobile system. The two evaluations were conducted in exactly the same way. The conclusion is that it is definitely worth the hassle to conduct usability evaluations in the field. In the field-based evaluation we identified significantly more usability problems and this setting revealed problems with interaction style and cognitive load that were not identified in the laboratory.


Human Factors in Computing Systems | 2007

What happened to remote usability testing?: an empirical study of three methods

Morten Sieker Andreasen; Henrik Villemann Nielsen; Simon Ormholt Schrøder; Jan Stage

The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote asynchronous conditions, where the test monitor and the test subjects are separated both spatially and temporally. The results show that the remote synchronous method is virtually equivalent to the conventional method. Thereby, it has the potential to conveniently involve broader user groups in usability testing and support new development approaches. The asynchronous methods are considerably more time-consuming for the test subjects and identify fewer usability problems, yet they may still be worthwhile.


Nordic Conference on Human-Computer Interaction | 2008

Obstacles to usability evaluation in practice: a survey of software development organizations

Jakob Otkjær Bak; Kim Nguyen; Peter Risgaard; Jan Stage

This paper reports from a combined questionnaire survey and interview study of obstacles to deploying usability evaluation in software development organizations. It was conducted in a limited geographical area. The purpose of the questionnaire survey was to determine whether software development organizations in that area were evaluating the usability of their software and to identify key obstacles. It revealed that 29 of 39 software development organizations conducted some form of usability evaluation. The purpose of the interview study was to gain more insight into the obstacles that were expressed. It involved 10 of the 39 software development organizations. Our results show that the understanding of usability evaluation is a major obstacle. Furthermore, the two most significant obstacles were resource demands and the mindset of developers. These were not only obstacles to wider deployment of usability evaluation, but also a concern for the software development organizations that had already deployed usability evaluation in their development process.


Information Technology & People | 1990

The principle of limited reduction in software design

Lars Mathiassen; Jan Stage

Compares experimental (e.g. prototyping) and analytical (e.g. specifying) approaches in systems design. Derives 'The Principle of Limited Reduction'. Defines this as: “Relying on an analytical mode of operation to reduce complexity introduces new sources of uncertainty requiring experimental countermeasures; relying on an experimental mode of operation to reduce complexity introduces new sources of uncertainty requiring analytical countermeasures”. Concludes that a mixed approach is best, but warns that this is as yet (1992) hypothetical.


Human Factors in Computing Systems | 2009

Let your users do the testing: a comparison of three remote asynchronous usability testing methods

Anders Bruun; Peter Gull; Lene Hofmeister; Jan Stage

Remote asynchronous usability testing is characterized by both a spatial and temporal separation of users and evaluators. This has the potential both to reduce practical problems with securing user attendance and to allow direct involvement of users in usability testing. In this paper, we report from an empirical study where we systematically compared three methods for remote asynchronous usability testing: user-reported critical incidents, forum-based online reporting and discussion, and diary-based longitudinal user reporting. In addition, conventional laboratory-based think-aloud testing was included as a benchmark for the remote methods. The results show that each remote asynchronous method supports identification of a considerable number of usability problems. Although this is only about half of the problems identified with the conventional method, it requires significantly less time. This makes remote asynchronous methods an appealing possibility for usability testing in many software projects.


International Journal of Medical Informatics | 2010

A Longitudinal Study of Usability in Health Care - Does Time Heal?

Jesper Kjeldskov; Mikael B. Skov; Jan Stage

We report from a longitudinal laboratory-based usability evaluation of a health care information system. The purpose of the study was to inquire into the nature of usability problems experienced by novice and expert users, and to see to what extent usability problems of a health care information system may or may not disappear over time as the nurses become more familiar with it, that is, whether time heals poor design. To study this, we conducted a longitudinal study comprising two key evaluations. A usability evaluation was conducted with novice users when an electronic patient record system was being deployed in a large hospital. After the nurses had used the system in their daily work for 15 months, we repeated the evaluation. Our results show that time does not heal. Although some problems were not experienced as severe, they still remained after more than a year of extensive use. On the basis of our findings, we discuss implications for evaluating usability in health care.


Accommodating Emergent Work Practices | 2001

Accommodating Emergent Work Practices: Ethnographic Choice of Method Fragments

Richard Baskerville; Jan Stage

Development methodology is a key issue for research in information system development. It is often assumed that methodologies and practice are closely related, but there are few attempts to justify this assumption. Much of the literature on development methodologies is normative and conceptual; empirical work into the efficacy of these methods is lacking. In fact, empirical evidence indicates that we should instead try to understand the system development process as being emergent, so even if methodologies appear to the observer as structure, they are only transient regularities in work practices that are constantly shifting form. Even if it is claimed that a project employs a certain methodology, it is usually not used as prescribed.


International Journal of Human-Computer Interaction | 2006

The Interplay Between Usability Evaluation and User Interaction Design

Kasper Hornbæk; Jan Stage

Usability evaluations inform user interaction design in a relevant manner, and successful user interaction design can be attained through usability evaluation. These are obvious conjectures about a mature usability engineering discipline. Unfortunately, research and practice suggest that, in reality, the interplay between usability evaluation and user interaction design is significantly more complex and too often far from optimal. This article provides a simple model of the interplay between usability evaluation and user interaction design that captures their main relationships. Based on the model, the key challenges in improving the interplay between evaluation and design are outlined. The intention is to create a background against which the remainder of this special issue, containing five research articles presenting empirical data on the interplay between design and evaluation and a commentary, can be contrasted.
