Publication


Featured research published by Chauncey E. Wilson.


Interactions | 2006

Brainstorming pitfalls and best practices

Chauncey E. Wilson

“Let’s get together and brainstorm!” You have probably heard this call to action many times. Group brainstorming seems like a simple undertaking—you get a group of people together, present a topic or problem, and then ask the group to generate as many ideas as possible. When you are done generating ideas, you apply a selection technique for deciding which ideas will be investigated further. The most basic principles for successful group brainstorming are [4]:


Interactions | 2006

Triangulation: the explicit use of multiple methods, measures, and approaches for determining core issues in product development

Chauncey E. Wilson

Triangulation is an approach to data collection and analysis that uses multiple methods, measures, or approaches to look for convergence on product requirements or problem areas. While the term “triangulation” may not trip off the tongues of HCI practitioners, we often employ triangulation, implicitly or explicitly, to bolster our recommendations and be more persuasive to our colleagues. Consider how convincing you might be if the results you obtain independently from usability tests, field interviews, and customer support data all indicate similar problems. This convergence of results across different data collection methods can help you convince a team to focus on core problems—the things that tend to emerge across methods. Triangulation can be used to determine core problems with a system or reduce the “inappropriate certainty” that sometimes comes when a single evaluation method or approach indicates that not much is wrong with a product. For example, if you run a single usability test and find that participants don't encounter serious problems, your product team may feel so confident that they forgo further usability work. However, if you use multiple methods, say a large-scale customer survey and face-to-face interviews, in addition to a usability test, you might discover usability problems that were not evident in your usability test. Triangulating data from the test, survey, and interviews could help you convince the team that, despite what the usability test seemed to indicate, not all is right with the product (the inappropriate certainty to which I referred earlier).
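As a loose illustration of the convergence idea (the methods, problem labels, and two-source threshold below are invented for this sketch, not taken from the article), a few lines of Python can tally how many independent methods surfaced each problem and flag the ones that converge:

```python
# Hypothetical sketch: flag problems that converge across methods.
from collections import defaultdict

# Invented findings from three data collection methods.
findings = {
    "usability_test": {"confusing checkout flow", "unclear error messages"},
    "field_interviews": {"confusing checkout flow", "slow search"},
    "support_logs": {"confusing checkout flow", "slow search"},
}

# Record which methods surfaced each problem.
sources = defaultdict(set)
for method, problems in findings.items():
    for problem in problems:
        sources[problem].add(method)

# Problems reported by two or more independent methods are candidate
# "core problems" in the sense the column describes.
for problem, methods in sorted(sources.items(), key=lambda kv: -len(kv[1])):
    tag = "CORE" if len(methods) >= 2 else "single-source"
    print(f"{tag:13} {problem}  <- {', '.join(sorted(methods))}")
```

The cross-method convergences this flags are exactly the findings the column argues are most persuasive to a team.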


Human Factors in Computing Systems | 2002

Usability in Practice: user experience lifecycle - evolution and revolution

Stephanie Rosenbaum; Chauncey E. Wilson; Timo Jokela; Janice Anne Rohn; Trixi B. Smith; Karel Vredenburg

The practice of usability and user-centered design must integrate with many other activities in the product development lifecycle. This integration requires political savvy, knowledge of a wide variety of methods, flexibility in using methods, inspiration, and innovation. The speakers and their colleagues have met these requirements and describe their experience fitting various methods into design and development efforts. This forum highlights their successes and setbacks.


Human Factors in Computing Systems | 2001

Ethics in HCI

Rolf Molich; Brenda Laurel; Carolyn Snyder; Whitney Quesenbery; Chauncey E. Wilson

Users are human. As HCI professionals we must be sure that our fellow humans perceive their encounter with usability and design professionals as pleasant without sacrificing the accuracy of our results. There are guidelines produced by professional organizations like the APA and the ACM about how HCI professionals should behave. However, there are few examples from real life about how to translate this information into everyday behavior. This panel will discuss specific examples of HCI dilemmas that the panelists have faced in their daily work.


Interactions | 2007

Taking usability practitioners to task

Chauncey E. Wilson

As usability practitioners and interaction designers, we often have to choose tasks for user-centered design (UCD) activities including storyboarding, paper and medium-fidelity prototyping, usability testing, and walkthroughs. Choosing tasks for various user-centered design activities is a critical activity and sometimes a moral issue for usability practitioners and product designers. It is critical, at least for complex products, because we often have to make usability and quality judgments based on evaluations that tap only a small set of possible tasks. It is a moral issue because our choice of tasks is a source of bias that could affect perceptions of the product during development and, in rare cases, result in harm to users. Following are two examples in which the choice of tasks for a UCD activity had a profound effect on the final product.

Example 1. A consultant is asked to conduct a usability test of an e-commerce Web site. The entire product team and the senior management team will be coming to observe for a full day, since the updates to the site will likely determine whether the company makes a profit over the next six months. The consultant creates a set of tasks for the usability test that represent “easy cases”; as a result, the participants in the test have almost no problems. The site goes live shortly after the usability test, and there are hundreds of complaints each week. An investigation reveals that the tasks were designed to make the site look good for the observers rather than give a realistic sense of how usable it would be for actual customers. Here there is an ethical dilemma—the consultant’s livelihood was going to be affected by the results of the test, so he might have, consciously or unconsciously, used easy tasks in order to preserve a good relationship with the team. There is often a conflict of interest when a usability practitioner is asked to create tasks for a usability study. What is the lesson here? Have some colleagues who are not closely connected to your project review your tasks. That will guard against biases that may emerge from conflicts of interest.

Example 2. A usability group conducted field interviews with a large group of users at different customer sites. The group extracted important and frequent tasks from the field data and used them to design the tasks for usability evaluations of early working prototypes. They tested the prototype using tasks based on what they had observed with real customers. There were no major usability problems based on a large-scale (20 representative users) test of the alpha version that would go out to customers. Everyone was happy and felt quite good about the results of the test; they decided to ship the alpha version to seven of the most important customers. Shortly after the alpha version was installed, there were reports that customers found the system totally unusable because of performance—something that didn’t come up at all with some of the same tasks in the usability lab. What happened? The usability tasks were based on 50,000 rows of data, but the customers had databases with ten million rows of data. The tasks used in the usability test were based on what real users did, but not on the amount of data involved in the real-world task. The product had to be substantially altered before it went out for beta testing because the tasks in the usability test were not based on mega-databases.

These two examples show how the choice of tasks can affect perceptions of product usability.
There are obvious and not-so-obvious criteria that are important when choosing tasks for design and evaluation activities. Task frequency and criticality are perhaps the most common criteria for deciding what tasks to use; there are also more subtle criteria.
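As a minimal sketch of the frequency-and-criticality heuristic named above (the tasks and 1-5 ratings below are hypothetical, invented for illustration), one might rank candidate test tasks like this:

```python
# Hypothetical sketch: rank candidate usability-test tasks by the two
# most common selection criteria named in the column.

# (task, frequency 1-5, criticality 1-5) -- all ratings invented.
candidates = [
    ("search the product catalog", 5, 3),
    ("complete a checkout", 4, 5),
    ("update a billing address", 2, 4),
    ("export order history", 1, 2),
]

# A simple frequency x criticality score; the subtler criteria the
# column alludes to would adjust this ranking in practice.
for task, freq, crit in sorted(candidates, key=lambda t: t[1] * t[2], reverse=True):
    print(f"score={freq * crit:2d}  freq={freq}  crit={crit}  {task}")
```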


Interactions | 2007

Designing useful and usable questionnaires: you can't just "throw a questionnaire together"

Chauncey E. Wilson

Asking good questions and designing useful and usable questionnaires are core skills for usability practitioners. I often find myself disappointed by the poor quality of online, paper, and telephone questionnaires. Part of the problem might stem from a lack of training in questionnaire design—a complex topic—as well as the assumption that questionnaires are a quick and dirty method of data collection that can be thrown together. The reality is that questionnaire design is a complicated process that involves many, often conflicting, considerations [1, 2]. The design of solid questionnaires must consider various issues, including:


Interactions | 2007

Please listen to me!: or, how can usability practitioners be more persuasive?

Chauncey E. Wilson

When I began my career as a usability engineer in the early 1980s, a recurrent theme at gatherings of HCI and usability practitioners was “How do I get the product team (or ‘the developers’) to listen to my recommendations about how to make the product better?” That was more than 20 years ago, and yet the same question keeps coming up again and again. Frankly, I had a dream when I started my career that there would come a time (I was hoping it would occur by the year 2000) when usability, HCI, and user-experience practitioners would have an equal voice in the design of products. We have achieved somewhat more credibility and influence than we had two decades ago, but as a group, we still have an inferiority complex. While many job descriptions for usability positions refer to “strong communication and organizational skills,” few are explicit about knowledge of persuasive techniques. Perhaps that is too Machiavellian? It might be politically incorrect to ask what principles of marketing or social psychology can be applied to change the minds of recalcitrant colleagues and managers, but the topic is important. So I’m going to take the chance that what I say might be viewed as manipulative and describe some ideas, theories, and techniques for being more persuasive. Let me start with several principles from social psychology that can become part of your persuasion repertoire.


Interactions | 2007

The problem with usability problems: context is critical

Chauncey E. Wilson

A major goal for usability practitioners is to discover and eliminate usability problems from a product or service (without introducing new problems) within budget, time, and quality constraints. Making products more usable is laudable, but as a field, we have many rancorous debates about the definition of “usability problem.” I’ve been in meetings where tempers have flared over different views on what constitutes a usability problem. Why do we argue so much about something as fundamental as a definition? Many of the debates are the result of the contextual nature of usability—usability is not simply an absolute property of a product, it is the interaction of a product or service with a particular context of use [2]. Context of use can include: the types and frequencies of tasks; domain and product experience; the goals and characteristics of the users; the social, physical, and psychological environments; fatigue; safety; and many other factors. If you change the context of product use, what was a problem in one situation could become a delighter in another. My goal for this article is to prompt usability practitioners to explicitly consider the contextual factors that affect what we label a usability problem or non-problem. Let’s examine some of the contextual issues with usability problems.


Interactions | 2007

Inverse, reverse, and unfocused methods: variations on our standard tools of the trade

Chauncey E. Wilson

Most practitioners of user-centered design (UCD) have a repertoire of methods that they apply to the design and evaluation of products, for example, brainstorming, card sorting, storyboards, formative usability testing, and field interviews. While these general methods serve us well, there are lesser-known variations that complement the “standard” methods. For this end-of-year column, I will suggest some techniques that you can add to your UCD toolkit in 2008. Let’s begin by looking at variations on scenarios that can broaden your perspectives about how products can be used and misused.


Human Factors in Computing Systems | 2003

New tips and tricks for a better usability test

Rolf Molich; Chauncey E. Wilson

In this SIG, experienced usability testers will exchange tips and tricks for practical usability testing.

Collaboration


Dive into Chauncey E. Wilson's collaborations.

Top Co-Authors

Rolf Molich
Technical University of Denmark

Aaron Marcus
Aaron Marcus and Associates

Brenda Laurel
Art Center College of Design

Trixi B. Smith
Lansing Community College