Anders Bruun
Aalborg University
Publications
Featured research published by Anders Bruun.
Human Factors in Computing Systems | 2009
Anders Bruun; Peter Gull; Lene Hofmeister; Jan Stage
Remote asynchronous usability testing is characterized by both a spatial and temporal separation of users and evaluators. This has the potential both to reduce practical problems with securing user attendance and to allow direct involvement of users in usability testing. In this paper, we report from an empirical study where we systematically compared three methods for remote asynchronous usability testing: user-reported critical incidents, forum-based online reporting and discussion, and diary-based longitudinal user reporting. In addition, conventional laboratory-based think-aloud testing was included as a benchmark for the remote methods. The results show that each remote asynchronous method supports identification of a considerable number of usability problems. Although this is only about half of the problems identified with the conventional method, it requires significantly less time. This makes remote asynchronous methods an appealing possibility for usability testing in many software projects.
Behaviour & Information Technology | 2014
Anders Bruun; Jan Stage
Usability evaluations provide software development teams with insights on the degree to which a software application enables a user to achieve his/her goals, how fast these goals can be achieved, how easy it is to learn and how satisfactory it is in use. Although usability evaluations are crucial in the process of developing software systems with a high level of usability, their use is still limited in the context of small software development companies. Several approaches have been proposed to support software development practitioners (SWPs) in conducting usability evaluations, and this paper presents two in-depth empirical studies of supporting SWPs by training them to become barefoot usability evaluators. Findings show that after 30 hours of training the SWPs obtained considerable abilities in identifying usability problems and that this approach revealed a high level of downstream utility. Results also show that the SWPs created relaxed conditions for the test users when acting as test monitors but experienced problems with making users think aloud. Considering the quality of problem descriptions, we found that the SWPs were better at providing clear and precise problem descriptions than at describing impact, cause and user actions, and at providing data support for observations.
Nordic Conference on Human-Computer Interaction | 2010
Anders Bruun
Software companies focusing on Usability Engineering face two major challenges: first, a sheer lack of usability specialists, leading to missing competences in the industry; second, small companies suffer from the constraint of low budgets and are thus unable to fund usability specialists or comprehensive consultancy. Training non-usability personnel in critical usability engineering methods has the potential to ease these challenges. It is, however, unknown how much and what kind of research has been devoted to novice training in UE methods. This paper presents a comprehensive literature study of research conducted in this area, in which 129 papers are analyzed in terms of research focus, empirical basis, types of training participants and training costs. Findings show a need for further empirical research on the long-term effects of training, training costs and training in user-based evaluation methods.
Journal of Systems and Software | 2015
Anders Bruun; Jan Stage
New approaches to reduce obstacles. Obstacles: resource constraints, limited understanding and resistance. Barefoot evaluations: reduction of limited understanding and resistance. Crowdsourcing evaluations: reduction of resource requirements. Usability evaluations provide software development teams with insights on the degree to which software applications enable users to achieve their goals, how fast these goals can be achieved, how easy an application is to learn and how satisfactory it is in use. Although such evaluations are crucial in the process of developing software systems with a high level of usability, their use is still limited in small and medium-sized software development companies. Many of these companies are, for example, unable to allocate the resources needed to conduct a full-fledged usability evaluation in accordance with a conventional approach. This paper presents and assesses two new approaches to overcome usability evaluation obstacles: a barefoot approach, where software development practitioners are trained to drive usability evaluations; and a crowdsourcing approach, where end users are given minimalist training to enable them to drive usability evaluations. We have evaluated how these approaches can reduce obstacles related to limited understanding, resistance and resource constraints. We found that these methods are complementary and highly relevant for software companies experiencing these obstacles. The barefoot approach is particularly suitable for reducing obstacles related to limited understanding and resistance, while the crowdsourcing approach is cost-effective.
Human Factors in Computing Systems | 2012
Anders Bruun; Jan Stage
Remote asynchronous usability testing involves users directly in reporting usability problems. Most studies of this approach employ predefined tasks to ensure that users experience specific aspects of the system, whereas other studies use no task assignments. Yet the effect of using predefined tasks is still to be uncovered. There is also limited research on instructions for users in identifying usability problems. This paper reports from a comparative study of the effect of task assignments and instruction types on the problems identified in remote asynchronous usability testing of a website for information retrieval, involving 53 prospective users. The results show that users solving predefined tasks identified significantly more usability problems with a significantly higher level of agreement than those working on their own authentic tasks. Moreover, users that were instructed by means of examples of usability problems identified significantly more usability problems than those who received a conceptual definition of usability problems.
International Conference on Human-Computer Interaction | 2015
Anders Bruun; Simon Ahm
User experience (UX) is typically measured retrospectively through subjective questionnaire ratings, yet we know little of how well these retrospective ratings reflect concurrent experiences of an entire event. UX entails a broad range of dimensions of which human emotion is considered to be crucial. This paper presents an empirical study of the discrepancy between concurrent and retrospective ratings of emotions. We induced two experimental conditions of varying pleasantness. Findings show the existence of a significant discrepancy between retrospective and concurrent ratings of emotions. In the most unpleasant condition we found retrospective ratings to be significantly overestimated compared to concurrent ratings. In the most pleasant condition we found retrospective ratings to correlate with the highest and final peaks of emotional arousal. This indicates that we cannot always rely on typical retrospective UX assessments to reflect concurrent experiences. Consequently, we discuss alternative methods of assessing UX, which have considerable implications for practice.
International Conference on Human-Computer Interaction | 2015
Anders Bruun; Jan Stage
Think-aloud is a de facto standard in user-based usability evaluation for verbalizing what a user is experiencing. Despite its qualities, it has been argued that thinking aloud affects the task solving process. This paper reports from an empirical study of the effect of three think-aloud protocols on the identified usability problems. The three protocols were traditional, active listening and coaching. The study involved 43 test subjects distributed across the three think-aloud conditions and a silent control condition in a between-subject design. The results show that the three think-aloud protocols facilitated identification of twice the number of usability problems compared to the silent condition, while the problems identified by the three think-aloud protocols were comparable. Our results do not support the common emphasis on the coaching protocol, while we have seen that the traditional protocol performs surprisingly well.
International Journal of Human-Computer Interaction | 2016
Anders Bruun; Kenneth Eberhardt Jensen; Dianna Hjorth Kristensen; Jesper Kjeldskov
In the past decade, there has been increasing interest in studying tabletop technologies in HCI. Using Gartner's Hype Cycle as an analytical framework, this article presents developments in tabletop research within the last decade. The objective is to determine the level of maturity of tabletop technologies with respect to the research foci and the extent to which tabletops have shown their worth in real-world settings. We identify less studied topics in the current body of literature with the primary aim of evoking further discussion of the current and future research challenges. We analyzed 542 research publications and categorized these according to eight types of research foci. Findings show that only 3% of all studies are conducted in natural settings, i.e. there is a clear tendency to emphasize laboratory evaluations of tabletop technology. Also, very few studies (1%) demonstrate relative benefits of tabletops over other technologies in collaborative settings. We argue for a need to increase emphasis on understanding real-world use and impact rather than developing new tabletop technologies.
International Conference on Human-Computer Interaction | 2013
Fulvio Lizano; Maria Marta Sandoval; Anders Bruun; Jan Stage
Several emerging countries are experiencing increasing software development activity. To provide useful feedback on possible courses of action for increasing the application of usability evaluation in such countries, this paper explores the status of usability evaluation in a digitally emerging country. Our aim is to identify common characteristics or behavioral patterns that can be compared with digitally advanced countries. We used an online survey answered by 26 software development organizations, which gave a snapshot of the application of usability evaluation in these organizations. We found many similarities with advanced countries, several completely new obstacles more closely connected with software development matters, and a relatively positive development regarding the lack of a “usability culture”. These findings suggest good conditions for improving the conduct of usability evaluations in digitally emerging countries.
Human Factors in Computing Systems | 2016
Anders Bruun; Effie Lai-Chong Law; Matthias Heintz; Lana H.A. Alkly
Frustration is used as a criterion for identifying usability problems (UPs) and for rating their severity in a few of the existing severity scales, but it is not operationalized. No research has systematically examined how frustration varies with the severity of UPs. We aimed to address these issues with a hybrid approach, using the Self-Assessment Manikin, comments elicited with Cued-Recall Debrief, galvanic skin responses (GSR) and gaze data. Two empirical studies involving a search task on a website known to have UPs were conducted to substantiate the findings and improve the methodological framework, which could facilitate usability evaluation practice. Results showed no correlation between GSR peaks and severity ratings, but GSR peaks were correlated with frustration scores, a metric we developed. The Peak-End rule was partially verified. The evaluator effect was a limitation, as it confounded the severity ratings of UPs. Future work aims to control this effect and to develop a multifaceted severity scale.