Publication


Featured research published by Anna M. Wichansky.


eye tracking research & applications | 2002

Eye tracking in web search tasks: design implications

Joseph H. Goldberg; Mark J. Stimson; Marion Lewenstein; Neil G. Scott; Anna M. Wichansky

An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or user actions preceding page visits, (2) whether users were biased toward traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens and 7-360+ seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as screen sequence increased. Navigation among portlets, when at least two columns existed, was biased toward horizontal search (across columns) rather than vertical search (within a column). Within a portlet, the header bar was not reliably visited before the portlet's body, evidence that header bars are not reliably used for navigation cues. Initial design recommendations emphasized placing critical portlets at the left and top of the web portal area and noted that related portlets need not appear in the same column. Further experimental replications are recommended to generalize these results to other applications.
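The horizontal-versus-vertical bias reported above can be measured directly from a fixation sequence. The following is a minimal sketch, assuming fixations are given as (x, y) screen coordinates in viewing order; the function name and the simple 45-degree classification rule are illustrative assumptions, not the method published in the paper.

```python
from typing import List, Tuple

def saccade_direction_bias(fixations: List[Tuple[float, float]]) -> float:
    """Return the fraction of saccades that are predominantly horizontal.

    A saccade is the jump between two consecutive fixations; it is
    classified as horizontal when its x-displacement exceeds its
    y-displacement (a simple 45-degree split, an assumption here).
    """
    horizontal = 0
    total = 0
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        if dx == 0 and dy == 0:
            continue  # refixation in place; not a saccade
        total += 1
        if dx > dy:
            horizontal += 1
    return horizontal / total if total else 0.0

# Example: a scanpath that mostly moves across columns
scanpath = [(100, 200), (480, 210), (860, 190), (870, 420), (120, 430)]
print(f"horizontal saccade share: {saccade_direction_bias(scanpath):.2f}")
```

A share well above 0.5 across participants would echo the across-column bias the study reports.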


Archive | 2003

Eye Tracking in Usability Evaluation: A Practitioner's Guide

Joseph H. Goldberg; Anna M. Wichansky

Publisher Summary: This chapter provides a practical guide for either the software usability engineer who is considering the benefits of eye tracking or the eye tracking specialist who is considering software usability evaluation as an application. Usability evaluation is defined rather loosely by industry as any of several applied techniques in which users interact with a product, system, or service and some behavioral data are collected. Usability goals are often stipulated as criteria, and an attempt is made to use test participants similar to the target-market users. The chapter discusses methodological issues first in usability evaluation and then in the eye-tracking realm; an integrated knowledge of both areas benefits the experimenter who conducts eye tracking as part of a usability evaluation. Within each area, major issues are presented through a rhetorical question-and-answer style. Presenting usability evaluation first places the practical use of eye-tracking methodology in a proper and realistic perspective.
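In practice, the deliverable that links eye tracking to usability evaluation is usually a set of per-area-of-interest (AOI) fixation metrics. The sketch below shows one way such an aggregation might look, assuming fixations have already been detected; the record layout and rectangle-based AOIs are assumptions for illustration and are not drawn from the chapter.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # screen x in pixels
    y: float          # screen y in pixels
    duration_ms: float

@dataclass
class AOI:
    name: str
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, f: Fixation) -> bool:
        return self.left <= f.x <= self.right and self.top <= f.y <= self.bottom

def aoi_metrics(fixations, aois):
    """Per-AOI fixation count and total dwell time (ms)."""
    stats = {a.name: {"fixations": 0, "dwell_ms": 0.0} for a in aois}
    for f in fixations:
        for a in aois:
            if a.contains(f):
                stats[a.name]["fixations"] += 1
                stats[a.name]["dwell_ms"] += f.duration_ms
                break  # assumes non-overlapping AOIs
    return stats

fixes = [Fixation(50, 40, 220), Fixation(70, 60, 180), Fixation(400, 300, 350)]
areas = [AOI("header", 0, 0, 800, 100), AOI("body", 0, 100, 800, 600)]
print(aoi_metrics(fixes, areas))
```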


human factors in computing systems | 1998

Designing user interfaces for television

Dale A. Herigstad; Anna M. Wichansky

In this paper, we describe a tutorial to enable CHI participants to design more effective user interfaces (UIs) for interactive television (ITV) and World Wide Web (WWW) applications used on televisions (TVs).


human factors in computing systems | 1992

HCI standards on trial: you be the jury

Jaclyn R. Schrier; Evelyn L. Williams; Kevin S. MacDonell; Larry A. Peterson; Paulien F. Strijland; Anna M. Wichansky; James R. Williams

The European Committee for Standardization (CEN) directive 90/270/EEC of May 29, 1990, requires all employers within the EEC to purchase and use only those software products that comply with a series of user interface standards including ISO 9241, Ergonomic requirements for office work with visual display terminals (VDTs). The CEN directive takes effect on January 1, 1993, and allows European employers five years to make sure that all computer products in use comply with the appropriate standards. Although many previous user-oriented standards have concerned hardware aspects of computer systems, the standards in question legislate software interface design requirements. All software products marketed within the EEC will need to comply with these standards, regardless of where the software products were developed. This panel will discuss these standards and how they will influence the work of the CHI community. The session will focus on a standard devoted to menus (ISO 9241-14), a component common to most user interfaces. This particular document falls under the CEN directive, has already passed the first vote of the ISO member nations, and is considered well on its way to becoming an official CEN requirement.


international conference on design, user experience, and usability | 2011

ISO 25062 Usability Test Planning for a Large Enterprise Applications Suite

Sean Rice; Jatin Thaker; Anna M. Wichansky

In setting out to perform summative usability testing on a new suite of more than 100 enterprise software applications for 400 different user roles, we faced a number of challenges in terms of staffing, scheduling, and resources. ISO 25062 provided a valuable organizing framework to plan, scope, and implement our testing effort. In this paper, we discuss the considerations and steps that we took in test planning and management, including our prioritization strategy and creation of an automated data collection system to minimize impact on staffing resources and the usability engineering workload.
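ISO 25062, the Common Industry Format, reports summative results in terms of effectiveness, efficiency, and satisfaction. As a hedged illustration of the kind of roll-up an automated data collection system might produce, here is a minimal sketch; the record fields and function names are assumptions for this example, not the authors' actual system.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskSession:
    user_role: str
    task_id: str
    completed: bool
    time_sec: float
    satisfaction: float  # e.g., post-task rating on a 1-7 scale

def cif_summary(sessions):
    """Summarize sessions per task in the three ISO 25062 categories:
    effectiveness (completion rate), efficiency (mean time on task),
    and satisfaction (mean post-task rating)."""
    by_task = {}
    for s in sessions:
        by_task.setdefault(s.task_id, []).append(s)
    return {
        task: {
            "completion_rate": mean(1.0 if s.completed else 0.0 for s in group),
            "mean_time_sec": mean(s.time_sec for s in group),
            "mean_satisfaction": mean(s.satisfaction for s in group),
            "n": len(group),
        }
        for task, group in by_task.items()
    }

data = [
    TaskSession("payables_clerk", "enter_invoice", True, 212.0, 5.5),
    TaskSession("payables_clerk", "enter_invoice", False, 540.0, 3.0),
    TaskSession("payables_clerk", "enter_invoice", True, 198.0, 6.0),
]
for task, metrics in cif_summary(data).items():
    print(task, metrics)
```

Automating a roll-up of this shape is one way to keep per-task reporting tractable across 100+ applications without adding usability engineering headcount.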


international conference on universal access in human computer interaction | 2009

Customer Boards as Vehicles of Change in Enterprise Software User Experience

Anna M. Wichansky

Traditional user-centered design processes do not leverage long-term customer-vendor relationships as a means of improving product usability. While designing a next-generation applications software suite, Oracle reached out to its most involved customers for creative solutions to common user-experience issues. The mission of the Oracle Usability Advisory Board was to take enterprise software to a whole new level of usability. The board consisted of executives and senior managers, primarily in information technology positions, in different types of organizations. The board identified three major areas where it wanted to improve usability: consistency and design, integration and performance, and Web 2.0. Through various working groups, the board has developed tools for obtaining customer feedback on product usability, online seminars on technical topics, and outreach mechanisms to other customers. The board has effectively become Oracle's partner in ensuring product understanding and use, thus setting the stage for improved usability in the next-generation product.


international conference on hci in business | 2015

Creating Greater Synergy Between HCI Academia and Practice

Fiona Fui-Hoon Nah; Dennis F. Galletta; Melinda M. Knight; James R. Lewis; John Pruitt; Gavriel Salvendy; Hong Sheng; Anna M. Wichansky

This paper presents perspectives from both academia and practice on how both groups can collaborate and work together to create synergy in the development and advancement of human-computer interaction (HCI). Issues and challenges are highlighted, success cases are offered as examples, and suggestions are provided to further such collaborations.


human factors in computing systems | 2010

SIG: branding the changing enterprise - impact of mergers & acquisitions on user experience organizations

Janaki Kumar; Daniel Rosenberg; Michael Arent; Anna M. Wichansky; Madhuri Kolhatkar; Roman Longoria; Bob Hendrich; Arnold M. Lund

Mergers and acquisitions are becoming increasingly common in the enterprise software world: SAP acquired Business Objects, Oracle acquired PeopleSoft, and CA acquired Cassatt in recent years. While this is a business expansion strategy for the acquiring company, it presents a challenge for UX professionals in both the acquiring and acquired companies, who are responsible for branding the look and feel of the newly combined business entity. This SIG examines the design, technical, and cultural challenges facing UX practitioners from the perspectives of both the acquiring and the acquired company. We will explore possible best-practice solutions that can help other UX professionals facing similar challenges.


Interactions | 2007

Working with standards organizations

Anna M. Wichansky

I had the good fortune to be part of a significant joint effort between industry and government to create standards for usability testing. This was called the Industry Usability Reporting (IUsR) Project, and it was run by the National Institute of Standards and Technology (NIST). My interest in the project stemmed from a request from Boeing, a major enterprise-software customer of Oracle. Boeing had productivity goals in place for use of software; it wanted users to be able to come up to speed on commercial off-the-shelf software quickly, without excessive learning curves or help-desk support. Before it would buy the software, the company wanted to get usability test results from software vendors. But of course it wanted results that would be comparable in terms of methodology and reporting, making it easier to compare vendors.

Individual conversations with major software vendors were favorable to the idea of reporting such results to customer companies, under the right nondisclosure agreements. This idea led to the formation of a steering committee, including NIST members and industry representatives, and the organization of the first IUsR workshop in March 1998. The meeting was held at NIST in Gaithersburg, Maryland, and there were 25 key attendees representing a number of large software vendors, customer organizations, consultants, and academics in the usability engineering discipline. Several of us were invited to make presentations concerning what usability test data we actually collected on products and what we could propose as common denominators among our methods that would allow a common industry reporting format to be developed. As a result of this meeting, working groups were formed to deal with general management issues, methodology, results and product descriptions, and pilot-test planning.

As a member of the methodology working group, my main focus was to identify reliable and valid ways of conducting and reporting on usability testing on which we could all agree. Although this initially sounded like a tall order, it was amazing how much consensus we had in that initial meeting about how testing was done among the large vendors and the types of data we would be willing to provide customers. Some of the items people felt strongly about were:

• We should not be prescriptive about testing methods, but rather concentrate on the reporting format to emphasize the types of information customers want in order to make procurement decisions.
• There should be empirical data collected with users. Checklists and other analytical techniques conducted by vendors were of lesser value to customers than data collected from actual users.
• There should be some quantitative, human performance data and some qualitative, subjective data collected. Customers were interested not only in how well people performed with the software, but also in how well they liked it.
• There should be a minimum number of users tested (based on the literature, we recommended eight per user type).
• There should be a template for reporting purposes that was accessible to procurement and executive audiences as well as usability professionals in the customer organizations.
• We should recruit pairs of vendor and customer organizations to perform trials of the new reporting format.

Following our initial meeting, NIST promptly set up a website where we could all communicate about the progress of our working groups. It also helped organize conference calls and an email distribution list for updates and discussions. In the first year, an informational white paper and the basic guideline for the common industry reporting format were written.
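Several of these requirements, such as the eight-participants-per-user-type floor and the mix of quantitative and qualitative measures, are mechanical enough to check automatically. Below is a minimal sketch under assumed field names; none of it is taken from the actual reporting template.

```python
from collections import Counter

MIN_USERS_PER_TYPE = 8  # floor recommended by the working group

def check_report(participants, performance_measures, subjective_measures):
    """Flag gaps against the working group's reporting requirements.

    participants: list of (participant_id, user_type) pairs
    performance_measures / subjective_measures: names of metrics reported
    """
    problems = []
    counts = Counter(user_type for _, user_type in participants)
    for user_type, n in counts.items():
        if n < MIN_USERS_PER_TYPE:
            problems.append(f"user type '{user_type}': {n} participants, "
                            f"need at least {MIN_USERS_PER_TYPE}")
    if not performance_measures:
        problems.append("no quantitative human performance data reported")
    if not subjective_measures:
        problems.append("no qualitative/subjective data reported")
    return problems

report_issues = check_report(
    participants=[(i, "administrator") for i in range(6)],
    performance_measures=["task_time", "completion_rate"],
    subjective_measures=[],
)
print("\n".join(report_issues) or "report meets the minimum requirements")
```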

Collaboration


Dive into Anna M. Wichansky's collaboration.

Top Co-Authors

Fiona Fui-Hoon Nah

Missouri University of Science and Technology

Hong Sheng

Missouri University of Science and Technology
