Melody Y. Ivory
University of Washington
Publications
Featured research published by Melody Y. Ivory.
ACM Computing Surveys | 2001
Melody Y. Ivory; Marti A. Hearst
Usability evaluation is an increasingly important part of the user interface design process. However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.
Human Factors in Computing Systems | 2001
Melody Y. Ivory; Rashmi R. Sinha; Marti A. Hearst
A quantitative analysis of a large collection of expert-rated web sites reveals that page-level metrics can accurately predict if a site will be highly rated. The analysis also provides empirical evidence that important metrics, including page composition, page formatting, and overall page characteristics, differ among web site categories such as education, community, living, and finance. These results provide an empirical foundation for web site design guidelines and also suggest which metrics can be most important for evaluation via user studies.
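The abstract does not list the specific metrics or models used. As a rough, hypothetical illustration of the general approach (quantifiable page-level measures that a rating predictor could consume), the sketch below computes a few simple page-composition counts from raw HTML; the metric names are illustrative only.

```python
# Minimal sketch (not the paper's actual metrics or model): compute a few
# page-level measures of the kind that could be fed to a rating predictor.
from html.parser import HTMLParser

class PageMetrics(HTMLParser):
    """Collects simple page-composition counts while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.word_count = 0
        self.link_count = 0
        self.image_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.link_count += 1
        elif tag == "img":
            self.image_count += 1

    def handle_data(self, data):
        self.word_count += len(data.split())

def page_metrics(html_text):
    parser = PageMetrics()
    parser.feed(html_text)
    return {
        "word_count": parser.word_count,
        "link_count": parser.link_count,
        "image_count": parser.image_count,
    }

if __name__ == "__main__":
    sample = "<p>Welcome to the demo page.</p><a href='#'>More</a><img src='x.png'>"
    print(page_metrics(sample))
```

In the study itself, measures such as these, computed over a large corpus of expert-rated pages, served as input features for predicting whether a site would be highly rated.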
Human Factors in Computing Systems | 2002
Melody Y. Ivory; Marti A. Hearst
We are creating an interactive tool to help non-professional web site builders create high-quality designs. We have previously reported that quantitative measures of web page structure can predict whether a site will be highly or poorly rated by experts, with accuracies ranging from 67% to 80%. In this paper we extend that work in several ways. First, we compute a much larger set of measures (157 versus 11) over a much larger collection of pages (5,300 versus 1,900), achieving much higher overall accuracy (94% on average) when contrasting good, average, and poor pages. Second, we introduce new classes of measures that can make assessments at the site level and according to page type (home page, content page, etc.). Finally, we create statistical profiles of good sites and apply them to an existing design, showing how that design can be changed to better match high-quality designs.
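The abstract does not specify how the statistical profiles are built or compared. One simple way to illustrate the idea (not the paper's actual profiling method) is a z-score comparison of a candidate page's metric values against the mean and standard deviation of the same metrics over highly rated pages; the metric names and numbers below are invented for the example.

```python
# Illustrative sketch only: build a statistical profile (mean, std. dev.)
# from metrics of highly rated pages, then flag metrics of a candidate
# page that deviate strongly from that profile.
from statistics import mean, stdev

def build_profile(rated_pages):
    """rated_pages: list of dicts mapping metric name -> value."""
    profile = {}
    for metric in rated_pages[0]:
        values = [page[metric] for page in rated_pages]
        profile[metric] = (mean(values), stdev(values))
    return profile

def deviations(page, profile, threshold=2.0):
    """Return metrics whose z-score magnitude exceeds the threshold."""
    flagged = {}
    for metric, (mu, sigma) in profile.items():
        z = (page[metric] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged[metric] = round(z, 2)
    return flagged

good_pages = [
    {"word_count": 320, "link_count": 24},
    {"word_count": 280, "link_count": 30},
    {"word_count": 350, "link_count": 27},
]
candidate = {"word_count": 40, "link_count": 95}
print(deviations(candidate, build_profile(good_pages)))
```

Metrics flagged this way point a designer toward the aspects of an existing design that differ most from high-quality designs.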
Conference on Computers and Accessibility | 2005
Richard E. Ladner; Melody Y. Ivory; Rajesh P. N. Rao; Sheryl Burgstahler; Dan Comden; Sangyun Hahn; Matthew J. Renzelmann; Satria Krisnandi; Mahalakshmi Ramasamy; Andrew D. Martin; Amelia Lacenski; Stuart Olsen; Dmitri Groce
Access to graphical images (bar charts, diagrams, line graphs, etc.) in tactile form (a representation whose content can be accessed by touch) is inadequate for students who are blind and take mathematics, science, and engineering courses. We describe our analysis of the current work practices of tactile graphics specialists who create tactile forms of graphical images, and we propose automated means to improve the efficiency of those practices. We describe the implementation of various components of this new automated process, which includes image classification, segmentation, simplification, and layout. We summarize our development of the tactile graphics assistant, which will enable tactile graphics specialists to be more efficient in creating tactile graphics both in batches and individually. We describe our unique team of researchers, practitioners, and student consultants who are blind, all of whom are needed to successfully develop this new way of translating graphical images into tactile form.
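The pipeline components named here (classification, segmentation, simplification, layout) are not detailed in this abstract. As a toy illustration of one such step, the sketch below binarizes an image, a crude form of simplification that keeps only strong shapes for a downstream tactile-rendering step; it is not the paper's system and uses Pillow plus a synthetic image as assumptions.

```python
# Toy sketch of an image "simplification" step: threshold to black-and-white
# so only strong shapes remain for tactile rendering. Requires Pillow.
from PIL import Image

def binarize(image, threshold=128):
    """Return a 1-bit version of the image: dark regions become black."""
    gray = image.convert("L")                     # grayscale
    return gray.point(lambda px: 255 if px > threshold else 0, mode="1")

if __name__ == "__main__":
    # A tiny synthetic image stands in for a scanned chart or diagram.
    demo = Image.new("L", (8, 8), color=200)      # light background
    demo.paste(40, (2, 2, 6, 6))                  # dark square in the middle
    print(list(binarize(demo).getdata()))         # zeros where the square was
```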
Human Factors in Computing Systems | 2000
Melody Y. Ivory
Web site usability becomes ever more critical as the number of sites grows exponentially and the number of users increases dramatically. We describe a new automated methodology and tool, Web TANGO, being developed to allow designers to explore alternative designs of information-centric web sites prior to implementation.
Archive | 2003
Melody Y. Ivory
Usability testing with human participants is a fundamental evaluation method [Nielsen, 1993; Shneiderman, 1998]. It provides an evaluator with direct information about how people use web sites and the problems that they encounter with the sites being tested. During usability testing, participants use the site or a prototype to complete a predetermined set of tasks, while the tester or software records the results of the participants’ work. The evaluator then uses these results to determine how well the site supports users’ task completion and to derive other measures, such as the number of errors encountered and task completion time.
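The chapter mentions deriving measures such as error counts and task completion time from recorded results. The sketch below shows one hypothetical way to compute those measures from a logged event stream; the event format and task names are invented for illustration.

```python
# Minimal sketch, with an invented log format, of deriving usability-test
# measures (per-task completion time and error count) from recorded events.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since session start
    kind: str          # "task_start", "task_end", or "error"
    task: str

def summarize(events):
    """Return per-task completion time (seconds) and error counts."""
    starts, summary = {}, {}
    for e in events:
        if e.kind == "task_start":
            starts[e.task] = e.timestamp
        elif e.kind == "error":
            summary.setdefault(e.task, {"time": None, "errors": 0})["errors"] += 1
        elif e.kind == "task_end":
            info = summary.setdefault(e.task, {"time": None, "errors": 0})
            info["time"] = e.timestamp - starts[e.task]
    return summary

log = [
    Event(0.0, "task_start", "find contact page"),
    Event(41.2, "error", "find contact page"),
    Event(88.5, "task_end", "find contact page"),
]
print(summarize(log))
```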
Archive | 2003
Melody Y. Ivory
Automated web site evaluation methods have many potential benefits, including reducing the costs of non-automated methods, aiding in comparisons between alternative designs, and improving consistency in evaluation results. However, we do not think that the methods have as much influence on web site designs as they could have. For instance, our survey of 169 web practitioners revealed that only fifteen percent of them report that they always use such tools in their work practices (see Chapters 1 and 10).
Archive | 2003
Melody Y. Ivory
This chapter provides the foundation for our discussion in Part II of automated web site evaluation methods. We discuss frameworks for describing evaluation methods in general, and automated evaluation methods in particular. We point out the limitations of existing frameworks and describe an expanded taxonomy for classifying evaluation automation. We summarize the application of this taxonomy to 84 methods, and in the process, we characterize the state-of-the-art in automated web site evaluation. We include approaches that were developed after our original survey of 58 methods [Ivory and Hearst, 2001; Ivory, 2001], but suggest that the reader refer to these prior surveys for a more comprehensive discussion of automated and non-automated methods for evaluating both graphical and web interfaces. Chapters 3–7 provide methodology details, including summative assessments of the techniques.
Archive | 2003
Melody Y. Ivory
Numerous design guidelines and study results provide guidance on how to design web sites so that they are usable and accessible. However, practitioners experience difficulty in applying guidelines, at least in the format in which they are typically presented; they are often discussed vaguely and sometimes conflict with one another. For instance, there is a wide gap between the recommendation "make the site consistent" and its application. Automated evaluation tools, guideline review tools in particular, were developed to assist practitioners with conforming to design guidelines. However, we have demonstrated throughout Part III that the tools have not evolved to the point where they adequately simplify this process. How might we help practitioners to better adhere to guidelines in the interim?
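To make the notion of a guideline review tool concrete, the sketch below automates one narrow, checkable guideline (images should carry alt text); it is an illustrative example only, not the behavior of any particular tool discussed in the book.

```python
# Illustrative sketch of the kind of check a guideline-review tool automates:
# flag <img> tags without alt text, a concrete and checkable guideline,
# unlike the vague recommendation "make the site consistent".
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.violations.append(dict(attrs).get("src", "<no src>"))

checker = AltTextCheck()
checker.feed("<img src='logo.png'><img src='chart.png' alt='Sales chart'>")
print("Images missing alt text:", checker.violations)
```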
Archive | 2003
Melody Y. Ivory
Our analysis of automated evaluation tools in Chapter 11 revealed that the tools do not cover the full range of user abilities. There was also very little consistency in their assessments for the same web page. But how do the tools perform in practice for the user abilities and other aspects that they do support?