Iain Connell
University College London
Publications
Featured research published by Iain Connell.
ACM/IEEE Joint Conference on Digital Libraries | 2004
Ann Blandford; Suzette Keith; Iain Connell; Helen Edwards
There are two main kinds of approach to considering usability of any system: empirical and analytical. Empirical techniques involve testing systems with users, whereas analytical techniques involve usability personnel assessing systems using established theories and methods. We report here on a set of studies in which four different techniques were applied to various digital libraries, focusing on the strengths, limitations and scope of each approach. Two of the techniques, heuristic evaluation and cognitive walkthrough, were applied in text-book fashion, because there was no obvious way to contextualize them to the digital libraries (DL) domain. For the third, claims analysis, it was possible to develop a set of reusable scenarios and personas that relate the approach specifically to DL development. The fourth technique, CASSM, relates explicitly to the DL domain by combining empirical data with an analytical approach. We have found that heuristic evaluation and cognitive walkthrough only address superficial aspects of interface design (but are good for that), whereas claims analysis and CASSM can help identify deeper conceptual difficulties (but demand greater skill of the analyst). However, none fit seamlessly with existing digital library development practices, highlighting an important area for further work to support improved usability.
Human-Computer Interaction | 2008
Ann Blandford; Joanne K. Hyde; Thomas R. G. Green; Iain Connell
Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is by detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories such as user experience did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three “home-grown” methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
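The scope comparison the abstract describes lends itself to a simple method-by-category coverage matrix. The sketch below is illustrative only: the method and category names are taken from the abstract, but the tick marks are invented, not the study's actual findings.

```python
# Hypothetical sketch: record which issue categories each analytical UEM
# surfaced, then print a coverage matrix. Tick marks are illustrative,
# not the paper's data.

CATEGORIES = ["system design", "user misconceptions", "conceptual fit",
              "physical", "contextual"]

# Illustrative coverage data (methods named in the abstract).
coverage = {
    "Heuristic Evaluation": {"system design", "user misconceptions",
                             "physical", "contextual"},
    "Evaluating Multimodal Usability": {"physical"},
    "CASSM": {"conceptual fit"},
    "Programmable User Modeling": {"user misconceptions"},
}

def print_matrix(coverage, categories):
    width = max(len(m) for m in coverage)
    print(" " * width + "  " + "  ".join(c[:12].ljust(12) for c in categories))
    for method, cats in coverage.items():
        row = "  ".join(("x" if c in cats else ".").ljust(12) for c in categories)
        print(method.ljust(width) + "  " + row)

print_matrix(coverage, CATEGORIES)
```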
In: Faulkner, X., Finlay, J. and Detienne, F. (eds.) Proceedings of the 16th British Human-Computer Interaction Group Annual Conference / European Usability Professionals Association Conference, pp. 139-156. Springer-Verlag London Ltd (2002) | 2002
Ann Blandford; B. L. William Wong; Iain Connell; Thomas R. G. Green
A novel usability evaluation technique, Ontological Sketch Modelling (OSM), was applied to the analysis of systems used within a complex work setting, namely emergency medical dispatch. OSM focuses on the structure of the domain in question and the devices which are applied to that domain, in order to reason about the quality of fit between the two. This analysis shows how OSM can be used to identify misfits between domain (here incidents, ambulance calls and real-time call processing by ambulance service staff) and device (the computer aided dispatch system) in real work settings. We show how OSM can aid additional reasoning about the way in which a new or proposed computer system can both support and enhance existing work structures. The analysis presented here also yields important insights into both the still-developing OSM and the structure of emergency medical dispatch systems.
International Conference on Human-Computer Interaction | 2004
Iain Connell; Thomas R. G. Green; Ann Blandford
Ontological Sketch Modelling (OSM) is a novel approach to usability evaluation that concentrates on both the user’s conceptual models of the domain and ‘working practices’, and the conceptual models built into a device or a work-system. Analysing the degree of fit between these models can reveal potential problems in learning and use that are not revealed by existing HCI approaches. We show how OSM can identify such potential misfits between user and system. We also describe how an OSM analysis can be edited, conventionalized and viewed in tabular form, thereby allowing automatic highlighting of user-system misfits. Illustrative examples are a typical drawing application and a digital music library.
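The tabular view of an OSM analysis can be pictured as a two-column comparison of user and device concepts, with the mismatched rows flagged. Below is a minimal sketch of that idea; the concept names are invented for a drawing-application example and are not taken from the paper's actual models.

```python
# Minimal sketch of tabulating an OSM-style analysis: list the concepts in
# the user's model and the device's model, then flag misfits (concepts
# present on one side but absent on the other). Example data is invented.

user_concepts = {"shape", "group", "alignment", "page"}
device_concepts = {"shape", "layer", "page", "handle"}

shared = sorted(user_concepts & device_concepts)
user_only = sorted(user_concepts - device_concepts)    # potential learning gaps
device_only = sorted(device_concepts - user_concepts)  # device-imposed concepts

print(f"{'concept':<12}{'user':<8}{'device':<8}note")
for c in shared:
    print(f"{c:<12}{'yes':<8}{'yes':<8}fit")
for c in user_only:
    print(f"{c:<12}{'yes':<8}{'no':<8}MISFIT: unsupported user concept")
for c in device_only:
    print(f"{c:<12}{'no':<8}{'yes':<8}MISFIT: device-only concept")
```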
Behaviour & Information Technology | 2004
Iain Connell; Ann Blandford; Thomas R. G. Green
We focus on the ability of two analytical usability evaluation methods (UEMs), namely CASSM (Concept-based Analysis for Surface and Structural Misfits) and Cognitive Walkthrough, to identify usability issues underlying the use made of two London Underground ticket vending machines. By setting both sets of issues against the observed interactions with the machines, we assess the similarities and differences between the issues depicted by the two methods. In so doing we de-emphasise the mainly quantitative approach which is typical of the comparative UEM literature. However, by accounting for the likely consequences of the issues in behavioural terms, we reduced the proportion of issues which were anticipated but not observed (the false positives), compared with that achieved by other UEM studies. We assess these results in terms of the limitations of problem count as a measure of UEM effectiveness. We also discuss the likely trade-offs between field studies and laboratory testing.
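The false-positive accounting the abstract mentions reduces to simple set bookkeeping over predicted and observed issues. The sketch below shows that bookkeeping; the issue labels are invented placeholders, not the study's data.

```python
# Hedged sketch of a predicted-vs-observed comparison: given the issues a
# UEM anticipated and those actually observed at the machines, compute
# hits, false positives and misses. Labels and counts are illustrative.

predicted = {"coin slot ambiguity", "fare type confusion",
             "cancel button hidden", "ticket type ordering"}
observed = {"coin slot ambiguity", "fare type confusion",
            "queue pressure errors"}

hits = predicted & observed             # anticipated and observed
false_positives = predicted - observed  # anticipated but never observed
misses = observed - predicted           # observed but not anticipated

print(f"hits: {len(hits)}, false positives: {len(false_positives)}, "
      f"misses: {len(misses)}")
print(f"false positive proportion: {len(false_positives) / len(predicted):.2f}")
```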
EHCI-DSVIS'04 Proceedings of the 2004 International Conference on Engineering Human Computer Interaction and Interactive Systems | 2004
Ann Blandford; Thomas R. G. Green; Iain Connell
Many of the difficulties users experience when working with interactive systems arise from misfits between the users' conceptualisation of the domain and device with which they are working and the conceptualisation implemented within those systems. We report an analytical technique called CASSM (Concept-based Analysis for Surface and Structural Misfits) in which such misfits can be formally represented to assist in understanding, describing and reasoning about them. CASSM draws on the framework of Cognitive Dimensions (CDs) in which many types of misfit were classified and presented descriptively, with illustrative examples. CASSM allows precise definitions of many of the CDs, expressed in terms of entities, attributes, actions and relationships. These definitions have been implemented in Cassata, a tool for automated analysis of misfits, which we introduce and describe in some detail.
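To make the entities-attributes-relationships idea concrete, here is a minimal sketch of the kind of check a Cassata-style tool might run, taking one cognitive-dimension-style misfit, a 'hidden dependency', to mean a relationship the system acts on but never displays. All names and the example model are invented for illustration; this is not Cassata's actual file format or API.

```python
# Sketch: entities carry attributes plus relationships to other entities;
# flag any relationship that is acted on but not visible at the interface.
# Example model (a spreadsheet-like system) is invented.

from dataclasses import dataclass, field

@dataclass
class Relationship:
    target: str    # name of the entity depended upon
    visible: bool  # is the dependency shown at the interface?

@dataclass
class Entity:
    name: str
    attributes: list[str] = field(default_factory=list)
    depends_on: list[Relationship] = field(default_factory=list)

model = [
    Entity("cell", attributes=["value", "formula"],
           depends_on=[Relationship("cell", visible=False)]),
    Entity("chart", attributes=["range"],
           depends_on=[Relationship("cell", visible=True)]),
]

for entity in model:
    for rel in entity.depends_on:
        if not rel.visible:
            print(f"hidden dependency: '{entity.name}' depends on "
                  f"'{rel.target}' but the link is not shown")
```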
Ergonomics | 1998
Iain Connell
An error analysis was performed on the three ticket vending machines installed at London Underground and overground train stations. A brief analytic inspection, resulting in a set of predicted errors, was followed by lengthy empirical observations of successes, failures and errors occurring during machine use. There were two observational phases, 5 years apart. Comparisons were made between the patterns of error-making on the three machines, using error categories derived from the initial analysis. It was found that these comparisons were sufficient to account for most of the between-machine and between-phase differences, although some unattributed errors remained. It was also found that much of the observed pattern of error-making had been predicted by the initial inspection, and it is suggested that, for relatively simple interfaces such as these, the method (Dialogue Error Analysis) is sufficient to identify and prioritize most problems that will occur in use. An attempt was also made to relate the observ...
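The between-machine and between-phase comparisons the abstract describes amount to tallying coded errors along two dimensions. The sketch below shows that tallying; the machines, phases, categories and counts are invented stand-ins, not the study's observations.

```python
# Illustrative sketch: tally observed errors per category, per machine and
# per phase, so between-machine and between-phase differences can be
# inspected. All data is invented.

from collections import Counter

# (machine, phase, error_category) tuples standing in for coded observations.
observations = [
    ("A", 1, "wrong fare selected"), ("A", 1, "coins rejected"),
    ("A", 2, "wrong fare selected"),
    ("B", 1, "sequence restarted"), ("B", 2, "sequence restarted"),
    ("C", 1, "coins rejected"), ("C", 2, "wrong fare selected"),
]

by_machine = Counter((m, cat) for m, _, cat in observations)
by_phase = Counter((p, cat) for _, p, cat in observations)

print("errors by machine and category:")
for (machine, cat), n in sorted(by_machine.items()):
    print(f"  machine {machine}: {cat} x{n}")
print("errors by phase and category:")
for (phase, cat), n in sorted(by_phase.items()):
    print(f"  phase {phase}: {cat} x{n}")
```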
International Conference on Human-Computer Interaction | 1999
Iain Connell; Nicholas Hammond
International Conference on Human-Computer Interaction | 1999
Iain Connell; Nick Hammond
Archive | 2004
Ann Blandford; Iain Connell; Thomas R. G. Green