Erik Frøkjær
University of Copenhagen
Publications
Featured research published by Erik Frøkjær.
Human Factors in Computing Systems | 2000
Erik Frøkjær; Morten Hertzum; Kasper Hornbæk
Usability comprises the aspects effectiveness, efficiency, and satisfaction. The correlations between these aspects are not well understood for complex tasks. We present data from an experiment where 87 subjects solved 20 information retrieval tasks concerning programming problems. The correlation between efficiency, as indicated by task completion time, and effectiveness, as indicated by quality of solution, was negligible. Generally, the correlations among the usability aspects depend in a complex way on the application domain, the users' experience, and the use context. Going through three years of CHI Proceedings, we find that 11 out of 19 experimental studies involving complex tasks account for only one or two aspects of usability. When these studies make claims concerning overall usability, they rely on risky assumptions about correlations between usability aspects. Unless domain-specific studies suggest otherwise, effectiveness, efficiency, and satisfaction should be considered independent aspects of usability and all be included in usability testing.
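As a rough illustration of the analysis behind this point, the sketch below computes pairwise rank correlations among the three usability aspects. The data, scales, and use of SciPy are assumptions for illustration only; they are not taken from the paper.

```python
# Sketch: pairwise rank correlations among usability aspects.
# All data below is invented; the study measured task completion
# time (efficiency), graded solution quality (effectiveness),
# and user satisfaction for 87 subjects.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 87                                    # matches the study's sample size
time_sec = rng.normal(300, 60, n)         # efficiency: completion time
quality = rng.integers(1, 6, n)           # effectiveness: grade 1-5
satisfaction = rng.integers(1, 8, n)      # satisfaction: 7-point rating

pairs = [("time vs quality", time_sec, quality),
         ("time vs satisfaction", time_sec, satisfaction),
         ("quality vs satisfaction", quality, satisfaction)]
for name, x, y in pairs:
    rho, p = spearmanr(x, y)
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")
```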
Human Factors in Computing Systems | 2001
Kasper Hornbæk; Erik Frøkjær
Reading of electronic documents is becoming increasingly important as more information is disseminated electronically. We present an experiment that compares the usability of a linear, a fisheye, and an overview+detail interface for electronic documents. Using these interfaces, 20 subjects wrote essays and answered questions about scientific documents. Essays written using the overview+detail interface received higher grades, while subjects using the fisheye interface read documents faster. However, subjects used more time to answer questions with the overview+detail interface. All but one subject preferred the overview+detail interface. The most common interface in practical use, the linear interface, was found to be inferior to the fisheye and overview+detail interfaces regarding most aspects of usability. We recommend using overview+detail interfaces for electronic documents, while fisheye interfaces should mainly be considered for time-critical tasks.
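Fisheye interfaces of the kind compared here are commonly driven by a degree-of-interest (DOI) function in the style of Furnas, where a section's a-priori importance is traded off against its distance from the reader's current focus. The sketch below is a minimal illustration with invented importance weights and threshold; it is not the paper's actual interface logic.

```python
# Sketch: a Furnas-style degree-of-interest function for a fisheye
# document view. Sections whose DOI falls below a threshold would
# be collapsed or rendered smaller. Weights are illustrative only.
def doi(api: int, distance: int) -> int:
    """DOI = a-priori importance minus distance from the focus."""
    return api - distance

# A document as (section, a-priori importance); the reader's focus
# is on section index 2 ("Method").
sections = [("Abstract", 3), ("Background", 1), ("Method", 2),
            ("Results", 3), ("Notes", 1)]
focus = 2
for i, (name, api) in enumerate(sections):
    score = doi(api, abs(i - focus))
    state = "shown" if score >= 1 else "collapsed"
    print(f"{name:<10} DOI={score:>2}  {state}")
```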
ACM Transactions on Computer-Human Interaction | 1996
Morten Hertzum; Erik Frøkjær
A user interface study concerning the usage effectiveness of selected retrieval modes was conducted using an experimental text retrieval system, TeSS, giving access to online documentation of certain programming tools. Four modes of TeSS were compared: (1) browsing, (2) conventional boolean retrieval, (3) boolean retrieval based on Venn diagrams, and (4) these three combined. Further, the modes of TeSS were compared to the use of printed manuals. The subjects observed were 87 students of computing, solving retrieval tasks concerning programming tools new to them. In the experiment, the use of printed manuals is faster and provides answers of higher quality than any of the electronic modes. Therefore, claims about the effectiveness of computer-based text retrieval have to be qualified in situations where printed manuals are manageable to the user. Among the modes of TeSS, browsing is the fastest and the one causing the fewest operational errors. On the same two variables, time and operational errors, the Venn diagram mode performs better than conventional boolean retrieval. The combined mode scores worst on the objective performance measures; nonetheless nearly all subjects prefer this mode. Concerning the interaction process, the subjects tend to manage the complexities of the information retrieval tasks by issuing series of simple commands and exploiting the interactive capabilities of TeSS. To characterize the dynamics of the interaction process, two concepts are introduced: threads and sequences of tactics. Threads in a query sequence describe the continuity during retrieval. Sequences of tactics concern the combined mode and describe how different retrieval modes succeed each other as the retrieval process evolves.
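The thread concept can be made concrete by segmenting a query log wherever consecutive queries stop sharing terms. The sketch below is one plausible operationalization under that assumption; it is not the paper's exact definition.

```python
# Sketch: segmenting a query log into "threads" by term overlap.
# A new thread starts whenever a query shares no terms with its
# immediate predecessor.
def threads(queries: list[set[str]]) -> list[list[int]]:
    """Group consecutive query indices into threads."""
    result: list[list[int]] = []
    for i, q in enumerate(queries):
        if result and q & queries[i - 1]:
            result[-1].append(i)   # continues the current thread
        else:
            result.append([i])     # no shared terms: new thread
    return result

log = [{"file", "open"}, {"file", "close"}, {"print"}, {"print", "format"}]
print(threads(log))  # [[0, 1], [2, 3]]
```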
Human Factors in Computing Systems | 2005
Kasper Hornbæk; Erik Frøkjær
Usability problems predicted by evaluation techniques are useful input to systems development; it is uncertain whether redesign proposals aimed at alleviating those problems are likewise useful. We present a study of how developers of a large web application assess usability problems and redesign proposals as input to their systems development. Problems and redesign proposals were generated by 43 evaluators using an inspection technique and think-aloud testing. Developers assessed redesign proposals to have higher utility in their work than usability problems. In interviews they explained how redesign proposals gave them new ideas for tackling well-known problems. Redesign proposals were also seen as constructive and concrete input. Few usability problems were new to developers, but the problems supported prioritizing ongoing development of the application and taking design decisions. No developers, however, wanted to receive only problems or only redesigns. We suggest developing and using redesign proposals as an integral part of usability evaluation.
International Journal of Human-computer Interaction | 2011
Alan Woolrych; Kasper Hornbæk; Erik Frøkjær; Gilbert Cockton
To better support usability practice, most usability research focuses on evaluation methods. New ideas in usability research are mostly proposed as new evaluation methods. Many publications describe experiments that compare methods. Comparisons may indicate that some methods have important deficiencies, and thus often advise usability practitioners to prefer a specific method in a particular situation. An expectation persists in human–computer interaction (HCI) that results about evaluation methods should be the standard “unit of contribution” rather than favoring larger units (e.g., usability work as a whole) or smaller ones (e.g., the impact of specific aspects of a method). This article argues that these foci on comparisons and method innovations ignore the reality that usability evaluation methods are loose, incomplete collections of resources, which successful practitioners configure, adapt, and complement to match specific project circumstances. Through a review of existing research on methods and resources, resources associated with specific evaluation methods are identified, as well as resources that can complement existing methods or be used separately. Next, a generic classification scheme for evaluation resources is developed, and the scheme is extended with project-specific resources that impact the effective use of methods. With these reviews and analyses in place, implications for research, teaching, and practice are derived. Throughout, the article draws on culinary analogies. A recipe is nothing without its ingredients, and just as the quality of what is cooked reflects the quality of its ingredients, so too does the quality of usability work reflect the quality of resources as configured and combined. A method, like a recipe, is at best a guide to action for those adopting approaches to usability that are new to them. As with culinary dishes, HCI needs to focus more on what gets cooked, and how it gets cooked, and not just on how recipes suggest that it could be cooked.
Interacting with Computers | 2008
Kasper Hornbæk; Erik Frøkjær
Matching of usability problem descriptions consists of determining which problem descriptions are similar and which are not. In most comparisons of evaluation methods, matching helps determine the overlap among methods and among evaluators. However, matching has received scant attention in usability research and may be fundamentally unreliable. We compare how 52 novice evaluators match the same set of problem descriptions from three think-aloud studies. For matching the problem descriptions the evaluators use either (a) the similarity of solutions to the problems, (b) a prioritization effort for the owner of the application tested, (c) a model proposed by Lavery and colleagues [Lavery, D., Cockton, G., Atkinson, M.P., 1997. Comparison of evaluation methods using structured usability problem reports. Behaviour and Information Technology, 16 (4/5), 246-266], or (d) the User Action Framework [Andre, T.S., Hartson, H.R., Belz, S.M., McCreary, F.A., 2001. The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies, 54 (1), 107-136]. The resulting matches are different, both with respect to the number of problems grouped or identified as unique, and with respect to the content of the problem descriptions that were matched. Evaluators report different concerns and foci of attention when using the techniques. We illustrate how these differences among techniques might adversely influence the reliability of findings in usability research, and discuss some remedies.
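One common way to quantify how differently two evaluators match the same set of problem descriptions is pairwise agreement over all problem pairs (the idea behind the Rand index): for each pair, do both matchings agree on whether the two descriptions denote the same problem? The sketch below illustrates this with invented data; it is not a technique from the paper.

```python
# Sketch: pairwise agreement between two matchings of the same
# usability problem descriptions. Each matching is a partition of
# problem IDs into groups judged to be the same underlying problem.
from itertools import combinations

def pairwise_agreement(a: list[set[int]], b: list[set[int]]) -> float:
    def same_group(partition, x, y):
        return any(x in g and y in g for g in partition)
    items = sorted(set().union(*a))
    pairs = list(combinations(items, 2))
    agree = sum(same_group(a, x, y) == same_group(b, x, y)
                for x, y in pairs)
    return agree / len(pairs)

eval1 = [{1, 2, 3}, {4}, {5, 6}]  # evaluator 1: 1-3 are one problem
eval2 = [{1, 2}, {3, 4}, {5, 6}]  # evaluator 2 groups them differently
print(f"agreement: {pairwise_agreement(eval1, eval2):.2f}")  # 0.80
```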
Interacting with Computers | 2008
Tobias Uldall-Espersen; Erik Frøkjær; Kasper Hornbæk
Analyzing usability improvement processes as they take place in real-life organizations is necessary to understand the practice of usability work. This paper describes a case study where the usability of an information system is improved and a relationship between the improvements and the evaluation efforts is established. Results show that evaluation techniques complemented each other by suggesting different kinds of usability improvement. Among the techniques applied, a combination of questionnaires and Metaphors of Human Thinking (MOT) showed the largest mean impact, and MOT produced the largest number of impacts. Logging of real-life use of the system over 6 months indicated six aspects of improved usability, where significant differences among evaluation techniques were found. Concerning five of the six aspects, Think Aloud evaluations and the above-mentioned combination of questionnaires and MOT performed equally well, and better than MOT alone. Based on the evaluations, 40 redesign proposals were developed and 30 of these were implemented. Four of the implemented redesigns were considered especially important. These evolved with inspiration from multiple evaluations and were informed by stakeholders with different kinds of expertise. Our results suggest that practitioners should not rely on isolated evaluations. Instead, complementary techniques should be combined, and people with different expertise should be involved.
Human Factors in Computing Systems | 2008
Kasper Hornbæk; Erik Frøkjær
The utility and impact of a usability evaluation depend on how well its results align with the business goals of the system under evaluation. However, how to achieve such alignment is not well understood. We propose a simple technique that requires active consideration of a system's business goals in planning and reporting evaluations. The technique is tested in an experiment with 44 novice evaluators using think-aloud testing. The evaluators considering business goals report fewer usability problems compared to evaluators who did not use the technique. The company commissioning the evaluation, however, assesses those problems 30-42% higher on four dimensions of utility. We discuss how the findings may generalize to usability professionals, and how the technique may be used in realistic usability evaluations. More generally, we discuss how our results illustrate one of a variety of ways in which business goals and other facets of a system's context may enter into usability evaluations.
International Conference on Human-Computer Interaction | 2007
Tobias Uldall-Espersen; Erik Frøkjær
Usability is a key issue when developing software, but how to integrate usability work and software development continues to be a problem that stakeholders must face. This study aims at developing a more coherent and realistic understanding of the problem based on 14 interviews in three case studies. The results indicate that usability during software development has to be considered with both a user interface focus and an organizational focus. Techniques to support the uncovering of organizational usability are especially lacking in both human-computer interaction and software engineering. Further, the continued engagement of stakeholders, who carry the vision about the purpose of change, stands out as a critical factor for the realization of project goals.
Journal of Medical Internet Research | 2017
Lise Lauritsen; Louise Bjørkholt Andersen; Emilia Olsson; Stine Rauff Søndergaard; Lasse Benn Nørregaard; Philip Kaare Løventoft; Signe Dunker Svendsen; Erik Frøkjær; Hans Mørch Jensen; Ida Hageman; Lars Vedel Kessing; Klaus Martiny
Background: Patients suffering from depression have a high risk of relapse and readmission in the weeks following discharge from inpatient wards. Electronic self-monitoring systems that offer patient-communication features are now available to offer daily support to patients, but the usability, acceptability, and adherence to these systems have only been sparsely investigated.

Objective: We aim to test the usability, acceptability, adherence, and clinical outcome of a newly developed computer-based electronic self-assessment system (the Daybuilder system) in patients suffering from depression, in the period from discharge until commencing outpatient treatment in the Intensive Outpatient Unit for Affective Disorders.

Methods: Patients suffering from unipolar major depression who were referred from inpatient wards to an intensive outpatient unit were included in this study before their discharge and were followed for four weeks. User satisfaction was assessed using semiqualitative questionnaires and the System Usability Scale (SUS). Patients were interviewed at baseline and at endpoint with the Hamilton depression rating scale (HAM-D17), the Major Depression Inventory (MDI), and the 5-item World Health Organization Well-Being Index (WHO-5). In this four-week period, patients used the Daybuilder system to self-monitor mood, sleep, activity, and medication adherence on a daily basis. The system displayed a graphical representation of the data that was simultaneously visible to patients and clinicians. Patients were phoned weekly to discuss their data entries. The primary outcomes were usability, acceptability, and adherence to the system. The secondary outcomes were changes in the electronically self-assessed mood, sleep, and activity scores, and in scores on the HAM-D17, MDI, and WHO-5 scales.

Results: In total, 76% of enrolled patients (34/45) completed the four-week study. Five patients were readmitted due to relapse. The 34 patients who completed the study entered data for mood on 93.8% of the days (872/930), sleep on 89.8% of the days (835/930), activity on 85.6% of the days (796/930), and medication on 88.0% of the days (818/930). The mean SUS score was 86.2 (standard deviation [SD] 9.7), and 79% of the patients (27/34) found that the system lived up to their expectations. A significant improvement in depression severity was found on the HAM-D17, from 18.0 (SD 6.5) to 13.3 (SD 7.3; P<.01), and on the MDI, from 27.1 (SD 13.1) to 22.1 (SD 12.7; P=.006), and a significant improvement in quality of life was found on the WHO-5, from 31.3 (SD 22.9) to 43.4 (SD 22.1; P<.001), but not in self-assessed mood (P=.08). Mood and sleep parameters were highly variable from day to day. Sleep offset was significantly delayed from baseline, averaging 48 minutes (standard error 12 minutes; P<.001). Furthermore, when estimating the delay of sleep onset during the study period (with sleep quality included in the model), a significant negative effect on mood was found (P=.03).

Conclusions: The Daybuilder system performed well technically, and patients were satisfied with the system and had high adherence to self-assessments. The dropout rate and the gradual delay in sleep emphasize the need for continued clinical support for these patients, especially when considering sleep guidance.
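The SUS scores reported above follow the standard scoring rule for the ten-item scale: odd-numbered (positively worded) items contribute their response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch of that rule (the example responses are invented):

```python
# Sketch: standard System Usability Scale (SUS) scoring.
# Responses are 1-5 for the ten items; odd items are positively
# worded, even items negatively worded.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based index:
                for i, r in enumerate(responses))   # even = odd item
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```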