Rolf Molich
Technical University of Denmark
Publications
Featured research published by Rolf Molich.
Human Factors in Computing Systems | 1990
Jakob Nielsen; Rolf Molich
Heuristic evaluation is an informal method of usability analysis where a number of evaluators are presented with an interface design and asked to comment on it. Four experiments showed that individual evaluators were mostly quite bad at doing such heuristic evaluations and that they only found between 20 and 51% of the usability problems in the interfaces they evaluated. On the other hand, we could aggregate the evaluations from several evaluators to a single evaluation and such aggregates do rather well, even when they consist of only three to five people.
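The aggregation argument is essentially arithmetic: if each evaluator independently finds a given problem with some probability, a small panel finds far more than any individual. The sketch below illustrates that reasoning; it is not code from the paper, and the detection rates and the independence assumption are hypothetical, chosen to match the 20-51% range quoted above.

```python
# Illustration only (not from the paper): expected share of usability problems
# found when the reports of k independent evaluators are aggregated, assuming
# each evaluator detects any given problem with probability lam.

def aggregate_detection_rate(lam: float, k: int) -> float:
    """Probability that at least one of k evaluators finds a given problem."""
    return 1.0 - (1.0 - lam) ** k

if __name__ == "__main__":
    for lam in (0.20, 0.35, 0.51):  # illustrative individual detection rates
        summary = ", ".join(
            f"{k} evaluators -> {aggregate_detection_rate(lam, k):.0%}" for k in (1, 3, 5)
        )
        print(f"individual rate {lam:.0%}: {summary}")
```

Under these assumptions, an individual rate of 35% grows to roughly 73% with three evaluators and 88% with five, which is the sense in which small aggregates "do rather well."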
Communications of the ACM | 1990
Rolf Molich; Jakob Nielsen
A survey of seventy-seven highly motivated industrial designers and programmers indicates that the identification of specific, potential problems in a human-computer dialogue design is difficult.
Behaviour & Information Technology | 2004
Rolf Molich; Meghan R. Ede; Klaus Kaasgaard; Barbara Karyukin
This paper reports on a study assessing the consistency of usability testing across organisations. Nine independent organisations evaluated the usability of the same website, Microsoft Hotmail. The results document wide differences in the selection and application of methodology, the resources applied, and the problems reported. The organisations reported 310 different usability problems. Only two problems were reported by six or more organisations, while 232 problems (75%) were uniquely reported, that is, each was reported by only one organisation. Some of the unique findings were classified as serious. Even the tasks used by most or all teams produced very different results: around 70% of the findings for each of these tasks were unique. Our main conclusion is that the simple assumption that we are all doing the same thing and getting the same results in a usability test is plainly wrong.
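The overlap figures quoted above come from tallying, for each distinct problem, how many organisations reported it. A minimal sketch of that kind of tally follows; the team names and problem identifiers are made up, and the real study matched free-text problem descriptions rather than shared identifiers.

```python
# Illustrative tally of problem overlap across evaluation teams. Team names and
# problem identifiers are hypothetical; the real study matched free-text
# problem descriptions reported by nine organisations.
from collections import Counter

reports = {
    "team_a": {"P1", "P2", "P3"},
    "team_b": {"P2", "P4"},
    "team_c": {"P5"},
}

# Count how many teams reported each distinct problem.
counts = Counter(problem for problems in reports.values() for problem in problems)

unique = [p for p, n in counts.items() if n == 1]   # reported by exactly one team
shared = [p for p, n in counts.items() if n >= 2]   # reported by two or more teams

print(f"{len(counts)} distinct problems: {len(unique)} uniquely reported "
      f"({len(unique) / len(counts):.0%}), {len(shared)} reported by multiple teams")
```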
Human Factors in Computing Systems | 1999
Rolf Molich; Ann Damgaard Thomsen; Barbara Karyukina; Lars Schmidt; Meghan R. Ede; Wilma van Oel; Meeta Arcuri
Seven professional usability labs and one university student team have carried out independent, parallel usability tests of the same state-of-the-art, live, commercial web site. The web site used for the usability tests is www.hotmail.com, a major provider of free web-based e-mail. The panel will discuss similarities and differences in process, results and reporting.
Behaviour & Information Technology | 2008
Rolf Molich; Joseph S. Dumas
This paper reports on the approach and main results of CUE-4, the fourth in a series of Comparative Usability Evaluation studies. A total of 17 experienced professional teams independently evaluated the usability of the website for the Hotel Pennsylvania in New York. Nine teams used usability testing while eight teams used expert reviews. The CUE-4 results document wide differences in the resources applied and the issues reported. The teams reported 340 different usability issues. Only nine of these issues were reported by more than half of the teams, while 205 issues (60%) were uniquely reported, that is, each was reported by only one team. A total of 61 of the 205 uniquely reported issues were classified as serious or critical problems. The study also shows that there was no practical difference between the results obtained from usability testing and expert reviews for the issues identified. It was not possible to prove the existence of either missed problems or false alarms in expert reviews. The paper further discusses quality measures for usability evaluation productivity.
Behaviour & Information Technology | 2014
Morten Hertzum; Rolf Molich; Niels Ebbe Jacobsen
Usability evaluation is essential to user-centred design; yet, evaluators who analyse the same usability test sessions have been found to identify substantially different sets of usability problems. We revisit this evaluator effect by having 19 experienced usability professionals analyse video-recorded test sessions with five users. Nine participants analysed moderated sessions; ten analysed unmoderated sessions. For the moderated sessions, participants reported an average of 33% of the problems reported collectively by these nine participants and 50% of the subset of problems reported as critical or serious by at least one participant. For the unmoderated sessions, the corresponding percentages were 32% and 40%. Thus, the evaluator effect was similar for moderated and unmoderated sessions, and it was substantial for the full set of problems and still present for the most severe problems. In addition, participants disagreed in their severity ratings: as much as 24% (moderated) and 30% (unmoderated) of the problems reported by multiple participants were rated as critical by one participant and minor by another. The majority of the participants perceived an evaluator effect when merging their individual findings into group evaluations. We discuss reasons for the evaluator effect and recommend ways of managing it.
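The detection percentages above are averages of each evaluator's share of the pooled problem set. The sketch below shows one way such a figure can be computed; the evaluator data are hypothetical and the calculation is an illustration, not the authors' analysis code.

```python
# Illustrative computation of the evaluator effect: each evaluator's detection
# rate relative to the pooled problem set, averaged over evaluators.
# Evaluator names and problem sets are hypothetical.

evaluations = {
    "evaluator_1": {"P1", "P2", "P3"},
    "evaluator_2": {"P2", "P4"},
    "evaluator_3": {"P1", "P2", "P5", "P6"},
}

pooled = set().union(*evaluations.values())          # all problems reported by anyone
rates = [len(found) / len(pooled) for found in evaluations.values()]
average_rate = sum(rates) / len(rates)

print(f"{len(pooled)} pooled problems; "
      f"average detection rate {average_rate:.0%} across {len(rates)} evaluators")
```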
Human Factors in Computing Systems | 2001
Rolf Molich; Brenda Laurel; Carolyn Snyder; Whitney Quesenbery; Chauncey E. Wilson
Users are human. As HCI professionals, we must ensure that our fellow humans perceive their encounters with usability and design professionals as pleasant, without sacrificing the accuracy of our results. Professional organizations such as the APA and the ACM publish guidelines on how HCI professionals should behave. However, there are few real-life examples of how to translate this guidance into everyday behavior. This panel will discuss specific examples of HCI dilemmas that the panelists have faced in their daily work.
Human Factors in Computing Systems | 2003
Rolf Molich; Robin Jeffries
In this workshop we will try to obtain a better understanding of the strengths and weaknesses of the expert review and heuristic inspection methods. We will do this by comparing results of independent expert reviews, heuristic inspections and usability tests of the same state-of-the-art website carried out by participating expert usability professionals.
Human Factors in Computing Systems | 2004
Rolf Molich; Susan M. Dray; David A. Siegel
In this SIG experienced usability testers will share tips and tricks for practical international usability testing.
Human Factors in Computing Systems | 2008
Rolf Molich; Chauncey E. Wilson
In this SIG experienced usability test practitioners and HCI researchers will discuss common errors in usability test facilitation. Usability test facilitation is the actual encounter between a test participant and a facilitator, from the moment the participant arrives at the test location until the moment they leave it. The purpose of this SIG is to identify common approaches to usability test facilitation that work and that do not work, and to come up with realistic suggestions for how to prevent typical problems.