Kasper Hornbæk
University of Copenhagen
Publications
Featured research published by Kasper Hornbæk.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2006
Kasper Hornbæk
How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguishes several problems with the measures, including whether they actually measure usability, whether they cover usability broadly, how they are reasoned about, and whether they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research.
Human Factors in Computing Systems | 2000
Erik Frøkjær; Morten Hertzum; Kasper Hornbæk
Usability comprises the aspects of effectiveness, efficiency, and satisfaction. The correlations between these aspects are not well understood for complex tasks. We present data from an experiment where 87 subjects solved 20 information retrieval tasks concerning programming problems. The correlation between efficiency, as indicated by task completion time, and effectiveness, as indicated by quality of solution, was negligible. Generally, the correlations among the usability aspects depend in a complex way on the application domain, the users' experience, and the use context. Going through three years of CHI Proceedings, we find that 11 out of 19 experimental studies involving complex tasks account for only one or two aspects of usability. When these studies make claims concerning overall usability, they rely on risky assumptions about correlations between usability aspects. Unless domain-specific studies suggest otherwise, effectiveness, efficiency, and satisfaction should be considered independent aspects of usability and should all be included in usability testing.
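As a concrete illustration of checking correlations between usability aspects, the following minimal sketch (with made-up per-subject numbers, not the study's data) computes pairwise Pearson correlations between task completion time, solution quality, and a satisfaction rating.

# Illustrative sketch with hypothetical numbers, not the study's measurements:
# pairwise Pearson correlations between three usability aspects.
from itertools import combinations
from scipy.stats import pearsonr

measures = {
    "completion_time_s": [310, 290, 455, 380, 270, 500, 330, 415],  # efficiency
    "solution_quality": [4.0, 3.5, 4.5, 2.5, 3.0, 4.0, 3.5, 2.0],   # effectiveness
    "satisfaction_1to7": [5, 6, 4, 3, 6, 4, 5, 3],                  # satisfaction
}

for (name_a, a), (name_b, b) in combinations(measures.items(), 2):
    r, p = pearsonr(a, b)
    print(f"{name_a} vs {name_b}: r = {r:.2f} (p = {p:.3f})")

A near-zero r between completion time and solution quality would mirror the negligible correlation reported above, and supports measuring all three aspects rather than inferring one from another.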
Human Factors in Computing Systems | 2011
Javier A. Bargas-Avila; Kasper Hornbæk
This paper reviews how empirical research on User Experience (UX) is conducted. It integrates products, dimensions of experience, and methodologies across a systematically selected sample of 51 publications from 2005-2009, reporting a total of 66 empirical studies. Results show a shift in the products and use contexts that are studied: from work towards leisure, from controlled tasks towards open use situations, and from desktop computing towards consumer products and art. Context of use and anticipated use, often named key factors of UX, are rarely researched. Emotions, enjoyment, and aesthetics are the most frequently assessed dimensions. The methodologies used are mostly qualitative and known from traditional usability studies, though constructive methods with unclear validity are being developed and used. Many studies use self-developed questionnaires without providing items or statistical validations. We discuss underexplored research questions and potential improvements of UX research.
Human Factors in Computing Systems | 2012
Majken Kirkegaard Rasmussen; Esben Warming Pedersen; Marianne Graves Petersen; Kasper Hornbæk
Shape change is increasingly used in physical user interfaces, both as input and output. Yet, the progress made and the key research questions for shape-changing interfaces are rarely analyzed systematically. We review a sample of existing work on shape-changing interfaces to address these shortcomings. We identify eight types of shape that are transformed in various ways to serve both functional and hedonic design purposes. Interaction with shape-changing interfaces is simple and rarely merges input and output. Three questions are discussed based on the review: (a) which design purposes may shape-changing interfaces be used for, (b) which parts of the design space are not well understood, and (c) why studying user experience with shape-changing interfaces is important.
Human Factors in Computing Systems | 2001
Kasper Hornbæk; Erik Frøkjær
Reading of electronic documents is becoming increasingly important as more information is disseminated electronically. We present an experiment that compares the usability of a linear, a fisheye, and an overview+detail interface for electronic documents. Using these interfaces, 20 subjects wrote essays and answered questions about scientific documents. Essays written using the overview+detail interface received higher grades, while subjects using the fisheye interface read documents faster. However, subjects used more time to answer questions with the overview+detail interface. All but one subject preferred the overview+detail interface. The most common interface in practical use, the linear interface, is found to be inferior to the fisheye and overview+detail interfaces regarding most aspects of usability. We recommend using overview+detail interfaces for electronic documents, while fisheye interfaces should mainly be considered for time-critical tasks.
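As background on what a fisheye interface does, such views typically score parts of a document with a Furnas-style degree-of-interest (DOI) function and collapse low-scoring parts; the sketch below is only an illustrative assumption about such scoring, not the interface evaluated in the study.

# Illustrative Furnas-style degree-of-interest (DOI) scoring for document
# sections; an assumption for exposition, not the interface from the study.
def doi(a_priori_importance: float, distance_to_focus: int) -> float:
    """DOI = a-priori importance of a section minus its distance from the focus."""
    return a_priori_importance - distance_to_focus

# Hypothetical sections: (name, a-priori importance, distance from the focus).
sections = [
    ("Abstract", 3.0, 4),
    ("Method", 2.0, 1),
    ("Results", 2.5, 0),  # the reader's current focus
    ("Appendix", 0.5, 3),
]

THRESHOLD = 1.0  # show a section in full only if its DOI reaches the threshold
for name, importance, distance in sections:
    score = doi(importance, distance)
    state = "shown in full" if score >= THRESHOLD else "collapsed"
    print(f"{name:8s} DOI = {score:+.1f} -> {state}")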
Designing Interactive Systems | 2006
Mie Nørgaard; Kasper Hornbæk
Think-aloud testing is a widely employed usability evaluation method, yet its use in practice is rarely studied. We report an explorative study of 14 think-aloud sessions, the audio recordings of which were examined in detail. The study shows that immediate analysis of observations made in the think-aloud sessions is done only sporadically, if at all. When testing, evaluators seem to seek confirmation of problems that they are already aware of. During testing, evaluators often ask users about their expectations and about hypothetical situations, rather than about experienced problems. In addition, evaluators learn much about the usability of the tested system but little about its utility. The study shows how practical realities rarely discussed in the literature on usability evaluation influence sessions. We discuss implications for usability researchers and professionals, including techniques for fast-paced analysis and tools for capturing observations during sessions.
User Interface Software and Technology | 2004
Jun Fujima; Aran Lunzer; Kasper Hornbæk; Yuzuru Tanaka
Many applications provide a form-like interface for requesting information: the user fills in some fields, submits the form, and the application presents corresponding results. Such a procedure becomes burdensome if (1) the user must submit many different requests, for example in pursuing a trial-and-error search, (2) results from one application are to be used as inputs for another, requiring the user to transfer them by hand, or (3) the user wants to compare results, but only the results from one request can be seen at a time. We describe how users can reduce this burden by creating custom interfaces using three mechanisms: clipping of input and result elements from existing applications to form cells on a spreadsheet; connecting these cells using formulas, thus enabling result transfer between applications; and cloning cells so that multiple requests can be handled side by side. We demonstrate a prototype of these mechanisms, initially specialised for handling Web applications, and show how it lets users build new interfaces to suit their individual needs.
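To illustrate the cell-and-formula idea, here is a minimal sketch (the class name and the stand-in search function are assumptions for illustration, not the prototype described above): input cells hold clipped request parameters, formula cells recompute their result from those inputs, and a second cell pair handles another request side by side.

# Minimal sketch of spreadsheet-style cells connected by formulas.
# The class name and the stand-in search function are illustrative
# assumptions, not the prototype described in the paper.
class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.value = value        # clipped input value or computed result
        self.formula = formula    # callable combining the input cells' values
        self.inputs = list(inputs)

    def evaluate(self):
        # A formula cell recomputes its result from its input cells.
        if self.formula is not None:
            self.value = self.formula(*(cell.evaluate() for cell in self.inputs))
        return self.value

def fake_search(query):
    """Stand-in for submitting a clipped web form and reading back its result."""
    return f"results for '{query}'"

# Clip an input field into a cell and connect it to a result cell via a formula.
query_cell = Cell(value="sorting in C++")
result_cell = Cell(formula=fake_search, inputs=[query_cell])
print(result_cell.evaluate())   # -> results for 'sorting in C++'

# Clone the pair so a second request can be compared side by side.
query_clone = Cell(value="sorting in Java")
result_clone = Cell(formula=fake_search, inputs=[query_clone])
print(result_clone.evaluate())  # -> results for 'sorting in Java'

Chaining formulas across cells is what would let the result of one application feed the input of another without copying it by hand.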
Human Factors in Computing Systems | 2005
Kasper Hornbæk; Erik Frøkjær
Usability problems predicted by evaluation techniques are useful input to systems development; it is uncertain whether redesign proposals aimed at alleviating those problems are likewise useful. We present a study of how developers of a large web application assess usability problems and redesign proposals as input to their systems development. Problems and redesign proposals were generated by 43 evaluators using an inspection technique and think-aloud testing. Developers assessed redesign proposals to have higher utility in their work than usability problems. In interviews they explained how redesign proposals gave them new ideas for tackling well-known problems. Redesign proposals were also seen as constructive and concrete input. Few usability problems were new to developers, but the problems supported prioritizing ongoing development of the application and taking design decisions. No developers, however, wanted to receive only problems or redesigns. We suggest developing and using redesign proposals as an integral part of usability evaluation.
Human Factors in Computing Systems | 2015
Yvonne Jansen; Pierre Dragicevic; Petra Isenberg; Jason Alexander; Abhijit Karnik; Johan Kildal; Sriram Subramanian; Kasper Hornbæk
Physical representations of data have existed for thousands of years. Yet only now are advances in digital fabrication, actuated tangible interfaces, and shape-changing displays spurring an emerging area of research that we call Data Physicalization. It aims to help people explore, understand, and communicate data using computer-supported physical data representations. We call these representations physicalizations, analogously to visualizations -- their purely visual counterpart. In this article, we go beyond the focused research questions addressed so far by delineating the research area, synthesizing its open challenges, and laying out a research agenda.
Behaviour & Information Technology | 2010
Kasper Hornbæk
Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win–lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design, and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development, and analyses of how UEMs may be combined.