Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Judith Reitman Olson is active.

Publications


Featured research published by Judith Reitman Olson.


International Conference on Human-Computer Interaction | 1988

Mental Models in Human-Computer Interaction

John M. Carroll; Judith Reitman Olson

Users of software systems acquire knowledge about the system and how to use it through experience, training, and imitation. Currently, there is a great deal of debate about exactly what users know about software. This knowledge may include one or more of the following:

• simple rules that prescribe a sequence of actions that apply in certain conditions,
• general methods that fit certain general situations and goals,
• “mental models”: knowledge of the components of a system, their interconnections, and the processes that change the components; knowledge that forms the basis for users being able to construct reasonable actions and explanations of why a set of actions is appropriate.

Discovering what users know and how these different forms of knowledge fit together in learning and performance is important. It bears on the problem of designing systems and training programs so that the systems are easy to use and the learning is efficient. Research on the effects of different representations on ultimate performance is mixed. Research on exactly what users know is scattered. Analytical methods and techniques for representing what the user knows are sparse but growing. This paper reviews current work and, through the review, identifies several important research needs:

• Detail what kinds of mental representations people have of systems that allow them to behave appropriately in using the software.
• Detail what a mental model would consist of and how a person would use it to decide what action to take next.
• Produce evidence that people have and use mental models.
• Determine the behaviors that would demonstrate a mental model's form and the operations used on the model.
• Explore alternative views of goal-directed representations (e.g., so-called “sequence/method representations”) and detail the behavior predicted from them.
• Expand the types of mental representations that may exist to include those that may not be mechanistic, such as algebraic systems and visual systems.
• Determine how people intermix different representations in producing behavior.
• Explore how knowledge about systems is acquired.
• Determine how individual differences affect learning of and performance on systems.
• Explore the design of training sequences for systems.
• Provide systems designers with tools to help them develop systems that evoke “good” representations in users.
• Expand the task domain of this research to include more complex software.


Human Factors in Computing Systems | 1989

Skilled financial planning: the cost of translating ideas into action

F. J. Lerch; Marilyn M. Mantei; Judith Reitman Olson

We use GOMS models to predict error rates and mental times for translating financial concepts into equations in two widely used interface representations. The first of these, common to spreadsheet packages, is characterized by non-mnemonic naming and absolute referencing of variables. The second, common to non-procedural command-driven software, is characterized by mnemonic naming conventions and relative referencing of variables. These predictions were tested in an experiment using experienced financial analysts. Although the interface that allows mnemonic and relative names (called keyword) takes longer overall, it produces seventy-five percent fewer simple errors and requires less mental effort. Given the overall serious cost of errors in financial models, we conclude that interfaces having the keyword representation are far superior.
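As a hedged illustration (these are not the paper's actual stimuli), the same financial concept might be written as follows in each representation, with the absolute style forcing the analyst to translate concepts into cell coordinates:

    Spreadsheet style (non-mnemonic names, absolute references):   B7 = B5 - B6
    Keyword style (mnemonic names, relative references):           PROFIT = REVENUE - EXPENSES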


Information & Management | 1995

Cognitive evaluation of system representation diagrams

Gerald L. Lohse; Daihwan Min; Judith Reitman Olson

We evaluate diagramming techniques for systems analysts and programmers from a cognitive perspective, focusing on how people process information from system diagrams and how diagrams accommodate the cognitive limitations of systems analysts and programmers. The paper increases awareness of the analyst's need for different information views during the systems development process and provides steps for improving the comprehension and communication of diagrammatic information. The examples provide ways to develop better diagrams given current tools for their development. We hope that future diagramming tools will reflect the cognitive limitations of the analyst by actively highlighting and dynamically governing the flow of graphic information across multiple information-processing views.


Human Factors in Computing Systems | 1988

Designing keybindings to be easy to learn and resistant to forgetting even when the set of commands is large

Neff Walker; Judith Reitman Olson

We formulated a set of rules for producing key-commands that are alternatives to activating commands with a mouse from a menu. Because software is becoming increasingly complex, it was important that the rules cover a wide variety of commands. The rules combined verb-modifier-object order with mnemonic abbreviations for the words in each slot. Our keybindings were shown not only to cover a wide command set, but also to be far easier to learn than EMACS bindings (a common keybinding set) and more robust against negative interference from learning another keybinding set before or afterward.
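The paper's rule set itself is not reproduced here, but a minimal sketch of the idea, assuming hypothetical commands and single-letter mnemonic abbreviations, might look like this in Python:

    # Sketch of rule-generated keybindings. The commands and the
    # first-letter abbreviation rule are illustrative assumptions,
    # not the paper's actual rule set.
    COMMANDS = [
        ("delete", "next", "word"),
        ("delete", "previous", "word"),
        ("move", "next", "line"),
        ("move", "previous", "line"),
    ]

    def keybinding(verb, modifier, obj):
        """Compose a binding from the first letter of each slot,
        in fixed verb-modifier-object order."""
        return "-".join(part[0].upper() for part in (verb, modifier, obj))

    for verb, modifier, obj in COMMANDS:
        print(f"{verb} {modifier} {obj}  ->  {keybinding(verb, modifier, obj)}")
    # e.g., "delete next word  ->  D-N-W"

Because every binding is derived by the same rule, a user who knows the command vocabulary can reconstruct a forgotten binding rather than recall it by rote, which is the property the learning and interference results above depend on.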


Human Factors in Computing Systems | 1985

Expanded design procedures for learnable, usable interfaces (panel session)

Judith Reitman Olson

Designers of interactive computer systems have begun to incorporate a number of good techniques into the design process to ensure that the system will be easy to learn and easy to use. Though not all design projects use all the steps recommended, the steps are well known:

1. Define the tasks the user has to perform.
2. Know the capabilities of the user.
3. Gather relevant hardware/software constraints.
4. From guidelines, design a first prototype.
5. Test the prototype with users.
6. Iterate changes in the design and repeat the tests until the deadline is reached.

In our experience designing a new interface, steps 1 and 4 were the most difficult, and step 5 was the one that took extra time to plan well. We had difficulty defining what would go into a new task, and from broad guidelines we had to develop one specific implementation for our tasks. Furthermore, so that each test would teach us something of value for future designs, we wanted to test pairs of prototypes that differed in only one feature. Choosing which single feature to alter in each pair required careful planning. In what follows, I describe each of these difficulties more fully and show how we approached each in our environment.

Normally, a task is defined as a computer-based analog of an existing task, such as word processing being the computer-based analog of typing. Since we had to build an interface for an entirely new task, we had to invent how the user would think about the task: the objects on which the user would operate and the actions that would be performed on those objects. We had to specify the mental representation in the absence of previous similar tasks. In our case, we were designing the interface for a communications manager to designate the path to be taken for routing 800-calls to their final destination as a function of time of day, day of week, holidays, percentage distribution, etc. From the large set of known formal representations of data (e.g., lists, pictures, tables, hierarchies, and networks), we found three that seemed to capture the information sufficient for our task: a hierarchy (tree structure); a restricted programming language in which there were only IF-THEN-ELSEs and definitions; and a long form to be filled out with all possible ordered combinations of the desired features (a hypothetical sketch of the restricted language appears after this abstract). We then asked potential users in casual interviews which format they found easiest to understand. It was immediately clear, even from a relatively small number of subjects, that the tree representation was preferred.

The second aspect of defining the task involved specifying what actions the user would take on this representation. Since in all interfaces users have to move about, select an item to work on, enter information, delete information, and change modes (typically from data entry to command), we looked for these kinds of actions in our task. The actions immediately fell into place, with commands being generated for moving about a tree, entering nodes and branches, etc.

After gathering information on who the end users were and what hardware constraints we had, we designed our first prototype. This was our next most involved chore. Our broad guidelines said that we should:

• Present information on the computer in a representation as close as possible to the user's mental representation.
• Minimize the long-term and short-term memory loads (e.g., make retrieval of commands and codes easy; give the user clues about where he or she is in a complicated procedure or data structure).
• Construct a dialog that holds to natural conversational conventions (e.g., make pauses predictable, acknowledge long delays, use English imperative structure in the command syntax).

Our initial design on paper was fairly easy to construct. We followed it, however, with an important analysis step before we built our first prototype. For each part of the design, we constructed an alternative design that seemed to fit within the same constraints and within the guidelines. That is, we identified the essential components of our interface: the representation of the data, the organization of the command sector, the reminders, and the specific command implementations, such as how to move around the data representation. For example, in the command sector there are alternative ways to arrange the commands for display: they could be grouped by similar function, so that all “move” commands were clustered and all “entry” commands were clustered, or they could be grouped into common sequences, such as those that people naturally follow in initially entering the nodes and branches of the tree structures. Once each component had an alternative, we debated the merits of each. Our first prototype, then, was the result of this first paper design plus the alterations generated by this analysis procedure.

The next step entailed testing our design with real users. Since we wanted each test to teach us something useful for our next assignment, we chose to test two prototypes at a time, with only one component differing between the two. The difficulty arose in deciding which component was to be tested in each pair. For this, we went back to our initial component-by-component debate about the prototype and scored each choice on three dimensions. First, for some alternatives the better choice was predictable: displaying command names, for example, was known to be more helpful than not displaying them, so testing this alternative would not teach us very much. Second, we needed to choose some alternatives early so that the developers could begin immediately with preliminary work; for example, our developers needed to know early whether the data would be displayed as a form or a tree so they could set up appropriate data structures. And third, some alternatives would appear again in future design projects: all projects require some way of moving about the data, but few deal directly with trees, so knowledge gained now about the movement function would pay off in the future, whereas knowledge about how to display trees might not. Once we prioritized our alternatives on these dimensions, we were able to choose the alternative for the first prototype test. After the test, we found other ideas to incorporate in the next iteration, but we went through the same analysis procedure: listing the components, debating alternatives, and prioritizing those to be tested in the next iteration.

In summary, the procedure we followed in designing and testing our prototypes was standard in overall form, flowing from defining the task, user, and constraints; building prototypes; and testing them with users. We differed, however, in three of our steps. We spent important initial time considering the best task representation to display to the user. We analyzed the individual components of our first prototype, generating a design for actual implementation that was more defensibly good than our first paper design. And, in our iterative testing procedure, we selected pairs of prototypes for test, the pairs differing on only one component of the design; the component for testing was selected according to whether the test would teach us something, whether it was important to decide early in the development process, and whether the component would appear again in future designs. These expanded steps not only added to our confidence that our early design was reasonably good, but also gave us the data and theory with which to convince others, notably developers and project managers, of the merit of our design. And the process taught us something of use for our next design project.
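To make the candidate representations concrete, here is a hedged sketch of what a routing rule might look like in the restricted IF-THEN-ELSE language described above; the syntax, times, and destination names are hypothetical:

    DEFINE BUSINESS-HOURS = WEEKDAY AND 08:00-17:00
    IF HOLIDAY THEN ROUTE TO RECORDED-MESSAGE
    ELSE IF BUSINESS-HOURS THEN ROUTE TO EASTERN-CENTER
    ELSE ROUTE TO OVERFLOW-CENTER

The preferred tree representation expresses the same decisions as nested branches (holiday, day of week, time of day) ending in destination leaves, and the form representation enumerates every ordered combination of those conditions as an entry to fill in.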


Human Factors in Computing Systems | 1985

Computer human factors in computer interface design (panel session)

Robert L. Mack; Thomas P. Moran; Judith Reitman Olson; Dennis Wixon; John Whiteside

Human factors psychologists contribute in many ways to improving human-computer interaction. One contribution involves evaluating existing or prototype systems in order to assess usability and identify problems. Another involves contributing more directly to the design of systems in the first place: not only evaluating systems, but bringing to bear empirical methods and theoretical considerations that help specify plausible designs. The goal of this panel is to discuss four case studies emphasizing this role of cognitive human factors and to identify relevant methods and theoretical considerations. The panelists will present examples of prototypes or products to whose design they contributed, with the aim of characterizing the problem (or problems) they tried to solve, the approach to identifying a design solution for that problem, and evidence that the approach was useful. Robert Mack will discuss an editor prototype designed to get novices started doing meaningful work quickly and to help them continue acquiring new skills, with virtually no explicit instruction. The prototype is being designed in large part by identifying key novice problems and expectations and trying to design the interface to better accommodate those expectations. The first goal, getting novices started relatively quickly, has been achieved, but problems remain as novices try to acquire further text-editing skill. These problems, and solutions to them, are being identified through a process of iterative design and evaluation. Dennis Wixon will discuss the implications for designing usable interfaces of the User-Derived-Interface project (Good, M., Whiteside, J., Wixon, D., and Jones, S., 1984). The project involved a simulation of a restricted natural language interface for an electronic mail system. The design process was driven by the behavioral goal of getting users started relatively quickly with little or no instruction or interface aids. Actual user interaction with the simulation, coupled with iterative design and evaluation, provided the interface specifications. This prototype illustrates a number of techniques for bringing usability into the software engineering process. The presentations will discuss the role of empirical methods such as verbal protocol techniques for identifying user problems with existing computer systems (e.g., Lewis, 1982; Mack, Lewis & Carroll, 1983; Douglas & Moran, 1983), including variations aimed at identifying user expectations that can guide design (e.g., Mack, 1984); interface simulations for studying user interactions, again with the aim of letting user behavior guide interface design (e.g., Kelley, 1984; Good, Whiteside, Wixon & Jones, 1984); and iterative design and evaluation of interfaces aimed at achieving behavioral goals (e.g., Carroll & Rosson, 1984; Gould & Lewis, 1983).


ACM SIGCHI Bulletin | 1989

Analysis of the Cognition Involved in Spreadsheet Software Interaction (Abstract Only)

Judith Reitman Olson; Erik Nilsen

This paper analyzes details of the cognition involved when people use spreadsheet software, a task that is both a major microcomputer application and a cognitively intense task. The task is analyzed in terms of the GOMS model (Card, Moran, and Newell, 1983), both to test the generality of the model and to extend its set of parameters. We found that people using two seemingly similar spreadsheet applications, Lotus 1-2-3 and Multiplan, require very different amounts of time to accomplish the same tasks: experienced users of Lotus 1-2-3 took far longer to complete the same four tasks than experienced Multiplan users did. Some of that additional time was found to be caused by the fact that Lotus 1-2-3 offers users a choice between two general methods for entering formulas. Lotus requires that the user decide which to use, and this decision takes time. And when users type in the values of a formula instead of using the cursor to point to the cell in which the values reside, they pause a long time before each such typing entry, presumably scanning the screen and calculating the coordinates to type in; again, these cognitive processes take time. In an analysis of a second task, adjusting the column width, there was substantial evidence that performance changes when a method is repeated in close succession; this repetition affects the parameters that reflect the time it takes to retrieve command parts from memory. When parameters for scanning, decision, and repetition were added to the keystroke analysis of our task, we found remarkable correspondence with the basic parameters from Card, Moran, and Newell's original work: the keystroke times and mental preparation times from their original experiments were very close to the estimates of those same parameters in our tasks. However, in our analysis of the spreadsheet task, we expanded the parameter set in the keystroke model to account for performance in tasks that require substantial planning, scanning, and repetition.
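The keystroke-level parameters referred to above come from Card, Moran, and Newell's keystroke-level model. As a minimal sketch of how such predictions are composed (the operator values are the commonly cited textbook estimates, not the parameters fit in this paper, and the operator sequences are hypothetical):

    # Keystroke-level-model sketch. Operator times are the commonly
    # cited Card, Moran & Newell estimates (seconds), not the values
    # estimated in this paper.
    OPERATOR_SECONDS = {
        "K": 0.28,  # press one key (average skilled typist)
        "P": 1.10,  # point at a target with a mouse
        "H": 0.40,  # move hand between keyboard and mouse
        "M": 1.35,  # mental preparation before a unit of action
    }

    def predict_time(ops):
        """Sum the operator times in a method's operator string."""
        return sum(OPERATOR_SECONDS[op] for op in ops)

    # Entering a reference to cell A1 by typing its coordinates
    # versus pointing at the cell (hypothetical operator sequences):
    typing = "M" + "KK"              # plan, then type the two coordinate keys
    pointing = "M" + "H" + "P" + "H" # plan, reach for mouse, point, return
    print(f"typing:   {predict_time(typing):.2f} s")
    print(f"pointing: {predict_time(pointing):.2f} s")

Note that this base model predicts typing is faster; the long pre-typing pauses observed in the study are what motivated the added scanning parameter, and the decision and repetition parameters extend the same kind of operator sum to the planning-heavy parts of spreadsheet work.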


Human-Computer Interaction | 1990

The growth of cognitive modeling in human-computer interaction since GOMS

Judith Reitman Olson; Gary M. Olson


Expert Systems | 1987

Extracting expertise from experts: Methods for knowledge acquisition

Judith Reitman Olson; Henry H. Rueter


Human-Computer Interaction | 1987

Analysis of the cognition involved in spreadsheet software interaction

Judith Reitman Olson; Erik Nilsen

Collaboration


Dive into Judith Reitman Olson's collaborations.

Top Co-Authors

John M. Carroll

Pennsylvania State University

Gary M. Olson

University of California

F. J. Lerch

Carnegie Mellon University

Gerald L. Lohse

University of Pennsylvania

Neff Walker

University of Michigan
