F. Javier Lerch
Carnegie Mellon University
Publications
Featured research published by F. Javier Lerch.
Conference on Computer Supported Cooperative Work | 1998
Susan R. Fussell; Robert E. Kraut; F. Javier Lerch; William L. Scherlis; Matthew M. McNally; Jonathan J. Cadiz
The goal of this paper is to identify the communication tactics that allow management teams to successfully coordinate without becoming overloaded, and to see whether successful coordination and freedom from overload independently influence team performance. We found that how much teams communicated, what they communicated about, and the technologies they used to communicate predicted coordination and overload. Team coordination, but not overload, predicted team success.
decision support systems | 1997
Tridas Mukhopadhyay; F. Javier Lerch; Vandana Mangal
Abstract Measuring and understanding the productivity impact of information technology (IT) is a significant and difficult problem facing researchers. We propose that the effect of IT applications can be best understood through an analysis at the information process level. We report on field research conducted to study the impact of IT on the toll collection system of 38 interchanges at the Pennsylvania Turnpike. We focus on an information process that is relatively uncoupled from other processes in the organization to constrain the measurement problem. In addition, this process consists of well-structured information processing tasks, allowing us to gain a clear understanding of the economic impact of IT on different types of transactions. Our results indicate that the new IT at the turnpike had a substantial impact on the efficiency of processing complex transactions but no impact on simple transactions. These results can only be understood by examining the nature of the toll collection process and the changes generated by the new IT on some of the information processing tasks.
ACM Transactions on Computer-Human Interaction | 1997
Brian R. Huguenard; F. Javier Lerch; Brian W. Junker; Richard J. Patz; Robert E. Kass
This article investigates working-memory (WM) failure in phone-based interaction (PBI). We used a computational model of phone-based interaction (PBI USER) to generate predictions about the impact of three factors on WM failure: PBI features (i.e., menu structure), individual differences (i.e., WM capacity), and task characteristics (i.e., number of tasks). Our computational model stipulates that both the storage and the processing of information contribute to WM failure. In practical terms, the model and the empirical results indicate that, contrary to guidelines for the design of phone-based interfaces, deep menu hierarchies (no more than three options per menu) do not reduce WM error rates in PBI. At a more theoretical level, the study shows that the use of a computational model in HCI research provides a systematic approach for explaining complex empirical results.
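The menu-depth trade-off the study examines can be illustrated with a small sketch. The following is a hypothetical illustration, not the PBI USER model itself: for a fixed set of reachable functions, a deeper hierarchy presents fewer options per menu but forces the caller through more levels, so more intermediate choices must be tracked while listening to prompts.

```python
def navigation_load(total_options: int, options_per_menu: int) -> dict:
    """Count how many menu levels are needed to reach `total_options`
    functions, and how many prompts the caller hears in the worst case
    (one full pass through every menu on the path)."""
    levels, reachable = 0, 1
    while reachable < total_options:
        reachable *= options_per_menu
        levels += 1
    return {"levels": levels, "worst_case_prompts": levels * options_per_menu}

# Compare a deep design (3 options per menu) against a broad one
# (9 options per menu) for the same 27 reachable functions (illustrative numbers).
for width in (3, 9):
    print(f"{width} options/menu -> {navigation_load(27, width)}")
```

The deep design shortens each individual menu but adds a level of intermediate choices to keep in mind, which is the trade-off behind the guideline the study calls into question.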
Information Systems Research | 1997
Jinwoo Kim; F. Javier Lerch
Our theoretical framework views programming as search in three problem spaces: rule, instance, and representation. The main objectives of this study are to find out how programmers change representation while working in multiple problem spaces, and how representation change increases the difficulty of programming tasks. Our theory of programming indicates that programming is similar to the way scientists discover and test theories. That is, programmers generate hypotheses in the rule space and test these hypotheses in the instance space. Moreover, programmers change their representations in the representation space when rule development becomes too difficult or alternative representations are available. We conducted three empirical studies with different programming tasks: writing a new program, understanding an existing program, and reusing an old program. Our results indicate that considerable cognitive difficulties stem from the need to change representations in these tasks. We conclude by discussing the implications of viewing programming as a scientific discovery for the design of programming environments and training methods.
Information Systems Research | 1996
Marie Christine Roy; F. Javier Lerch
Many biases have been observed in probabilistic reasoning, hindering the ability to follow normative rules in decision-making contexts involving uncertainty. One systematic error people make is to neglect base rates in situations where prior beliefs in a hypothesis should be taken into account when new evidence is obtained. Incomplete explanations for the phenomenon have impeded the development of effective debiasing procedures or tools to support decision making in this area. In this research, we show that the main reason behind these judgment errors is the causal representation induced by the problem context. In two experiments we demonstrate that people often possess the appropriate decision rules but are unable to apply them correctly because they have an ineffective causal mental representation. We also show how this mental representation may be modified when a graph is used instead of a problem narrative. This new understanding should contribute to the design of better decision aids to overcome this...
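A minimal worked example of the base-rate effect described above (the numbers are illustrative and not from the paper): a diagnostic test in a low-base-rate setting, where ignoring the prior makes a positive result look far more conclusive than Bayes' rule warrants.

```python
# Base-rate neglect illustration (hypothetical numbers, not from the paper):
# a test with 95% sensitivity and 90% specificity for a condition with a
# 1% base rate. Ignoring the base rate suggests a positive test is ~95%
# conclusive; applying Bayes' rule shows the true posterior is far lower.

prior = 0.01            # P(condition) -- the base rate
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.10   # P(positive | no condition) = 1 - specificity

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")   # about 0.088
```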
ACM Transactions on Computer-Human Interaction | 1995
Jinwoo Kim; F. Javier Lerch; Herbert A. Simon
This article proposes a cognitive framework describing the software development process in object-oriented design (OOD) as building internal representations and developing rules. Rule development (method construction) is performed in two problem spaces: a rule space and an instance space. Rules are generated, refined, and evaluated in the rule space by using three main cognitive operations: Infer, Derive, and Evoke. Cognitive activities in the instance space are called mental simulations and are used in conjunction with the Infer operation in the rule space. In an empirical study with college students, we induced different representations to the same problem by using problem isomorphs. Initially, subjects built a representation based on the problem description. As rule development proceeded, the initial internal representation and designed objects were refined, or changed if necessary, to correspond to knowledge gained during rule development. Differences in rule development processes among groups created final designs that are radically different in terms of their level of abstraction and potential reusability. The article concludes by discussing the implications of these results for object-oriented design.
Annals of Software Engineering | 1997
F. Javier Lerch; Deborah J. Ballou; Donald E. Harter
We describe the use of simulation‐based experiments to assess the computer support needs of automation supervisors in the United States Postal Service (USPS). Because of the high cost of the proposed system, the inability of supervisors to articulate their computer support needs, and the infeasibility of direct experimentation in the actual work environment, we used a simulation to study end‐user decision making, and to experiment with alternative computer support capabilities. In Phase One we investigated differences between expert and novice information search and decision strategies in the existing work environment. In Phase Two, we tested the impact of computer support features on performance. The empirical results of the two experiments showed how to differentially support experts and novices, and the effectiveness of proposed information systems before they were built. The paper concludes by examining the implications of the project for the software requirements engineering community.
Journal of the American Statistical Association | 1992
Bradley P. Carlin; Robert E. Kass; F. Javier Lerch; Brian R. Huguenard
Abstract We use Bayes factors to compare two alternative characterizations of human working memory load in their ability to predict errors in database query-writing tasks. The first measures working memory load by the number of different features each task contains, while the second attempts instead to measure the complexity of the task by giving more weight to features that require more mental time for their correct execution. We reanalyze data from a previously conducted experiment using two logistic regression models with random subject effects nested within an experimental condition factor. The two models have alternative covariates based on the alternative measures of working memory load. We construct prior distributions based on our subjective knowledge gleaned from related experiments, providing details of the elicitation process. We examine sensitivity of our results to the effects of prior misspecification and case deletion. Asymptotic approximations are used throughout to facilitate computations...
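A minimal sketch of the kind of model comparison described above, under stated simplifications: the data are simulated, the random subject effects are omitted, and the Bayes factor is approximated from the BIC difference (the Schwarz criterion) rather than computed from elicited priors. Variable names and numbers are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Two alternative covariates for working-memory load (hypothetical data):
# x1 counts the number of task features; x2 weights features by the mental
# time their correct execution requires.
x1 = rng.integers(1, 8, size=n).astype(float)
x2 = x1 * rng.uniform(0.5, 1.5, size=n)

# Simulate query-writing errors driven by the time-weighted measure.
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * x2)))
y = rng.binomial(1, p)

# Fit the two competing logistic regressions.
m1 = sm.Logit(y, sm.add_constant(x1)).fit(disp=False)
m2 = sm.Logit(y, sm.add_constant(x2)).fit(disp=False)

# Approximate the Bayes factor (model 2 vs. model 1) from the BIC difference,
# i.e., the Schwarz approximation discussed by Kass & Raftery (1995).
log_bf_21 = (m1.bic - m2.bic) / 2.0
print(f"Approximate log Bayes factor (time-weighted vs. feature-count): {log_bf_21:.2f}")
```

Differences on this scale can then be read against the usual Bayes factor interpretation guidelines.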
Annals of Software Engineering | 1999
Nick V. Flor; F. Javier Lerch; Se-Joon Hong
The emergence of software component standards and tools for creating software components is leading to an increasing number of software component developers. Traditional software engineering education, however, emphasizes methods for developing large software packages. It is not clear whether such methods are appropriate for developing components. New techniques may be needed to teach the skills necessary for component development. We identify two skills software developers need to successfully develop components, which are not emphasized in traditional software engineering education: (a) uncovering multiple-customer domain semantics; and (b) making explicit multiple-customer framework semantics. Both skills are multiple constraint satisfaction problems. We further argue that training students to produce and market components in a simulated software components marketplace – rather than the more conventional “classroom teaching” + “component homework assignments/projects” – is an effective way of teaching such skills. We then describe an environment we created called SofTrade that simulates a components market and allows students to acquire the necessary skills. We provide a detailed case study of how a student component-producer team used market feedback to determine domain and framework semantics. We end by discussing the importance of market-driven approaches for teaching software components engineering and how such approaches fit into existing software engineering curricula.
ACM Sigchi Bulletin | 1990
Brian R. Huguenard; Michael J. Prietula; F. Javier Lerch
Human expertise is a critical resource and will become increasingly so as society's tools and techniques for acquisition, creation, distribution, control, and management of information become more knowledge intensive. One claim, echoed by developers of expert systems, is that human expertise is fragile -- changes in the problem or problem context may result in dramatic performance degradations (Brown & Campione, 1984; Reed, Ernst & Banerji, 1974). Consequently, systems constructed from the knowledge of experts inherit this flaw and its ramifications. Although the concept of fragility has an intuitive appeal, few studies have been conducted to explicate the nature of this phenomenon; that is, few studies have attempted to discover where and how such fragility is manifested. With this study we begin to explicate the nature of expert fragility. The plan of the reported study is to compare how experts and novices perform on a task that has been modified to degrade the performance of the expert to that of the novice, but that still permits the behavior of the expert and novice to be investigated in detail.