Publications


Featured research published by Chris F. Kemerer.


IEEE Transactions on Software Engineering | 1994

A metrics suite for object oriented design

Shyam R. Chidamber; Chris F. Kemerer

Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement.
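
A minimal sketch of the flavor of such design metrics, not the authors' tool: two of the six metrics (Weighted Methods per Class with unit weights, and Depth of Inheritance Tree) computed for plain Python classes via introspection. The example class hierarchy and the convention that a root class has depth 1 are illustrative assumptions.

```python
# Illustrative sketch of two object-oriented design metrics; not the paper's collector.
def weighted_methods_per_class(cls):
    """WMC with unit method weights: number of methods defined on the class itself."""
    return sum(1 for name, member in vars(cls).items()
               if callable(member) and not name.startswith("__"))

def depth_of_inheritance_tree(cls):
    """DIT: length of the longest inheritance path from the class up to the root."""
    if cls is object:
        return 0
    return 1 + max(depth_of_inheritance_tree(base) for base in cls.__bases__)

class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

class SavingsAccount(Account):
    def accrue_interest(self): ...

print(weighted_methods_per_class(SavingsAccount))  # 1 method defined locally
print(depth_of_inheritance_tree(SavingsAccount))   # 2 (SavingsAccount -> Account -> object)
```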


Management Science | 1996

Network externalities in microcomputer software: an econometric analysis of the spreadsheet market

Erik Brynjolfsson; Chris F. Kemerer

Because of network externalities, the success of a software product may depend in part on its installed base and its conformance to industry standards. This research builds a hedonic model to determine the effects of network externalities, standards, intrinsic features, and a time trend on microcomputer spreadsheet software prices. When data for a sample of products during the 1987-1992 time period were analyzed using this model, four main results emerged: (1) Network externalities, as measured by the size of a product's installed base, significantly increased the price of spreadsheet products: a one percent increase in a product's installed base was associated with a 0.75% increase in its price. (2) Products which adhered to the dominant standard, the Lotus menu tree interface, commanded prices which were higher by an average of 46%. (3) Although nominal prices increased slightly during this time period, quality-adjusted prices declined by an average of 16% per year. (4) The hedonic model was found to be a good predictor of actual market prices, despite the fact that it was originally estimated using list prices. Several variations of the model were examined, and, while the qualitative findings were robust, the precise estimates of the coefficients varied somewhat depending on the sample of products examined, the weighting of the observations, and the functional form used in estimation, suggesting that the use of hedonic methods in this domain is subject to a number of limitations due, inter alia, to the potential for strategic pricing by vendors.
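
A minimal sketch of the kind of hedonic (log-price) regression described above, run on synthetic data. The variable names, the coefficients used to generate the data (chosen only to echo the magnitudes reported in the abstract), and the ordinary-least-squares setup are illustrative assumptions, not the paper's actual model or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_installed_base = rng.normal(10, 1, n)      # ln(installed base)
lotus_compatible   = rng.integers(0, 2, n)     # 1 if product follows the dominant standard
feature_count      = rng.poisson(20, n)        # bundle of intrinsic features
trend              = rng.integers(1987, 1993, n) - 1987

# Synthetic "true" relationship used to generate prices (coefficients are assumptions)
log_price = (2.0 + 0.75 * log_installed_base + 0.38 * lotus_compatible
             + 0.02 * feature_count - 0.16 * trend + rng.normal(0, 0.1, n))

# Hedonic regression: regress ln(price) on product characteristics via OLS
X = np.column_stack([np.ones(n), log_installed_base, lotus_compatible, feature_count, trend])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(dict(zip(["const", "ln_installed_base", "standard", "features", "trend"], coef.round(3))))
```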


Information Systems Research | 1999

The Illusory Diffusion of Innovation: An Examination of Assimilation Gaps

Robert G. Fichman; Chris F. Kemerer

Innovation researchers have known for some time that a new information technology may be widely acquired, but then only sparsely deployed among acquiring firms. When this happens, the observed pattern of cumulative adoptions will vary depending on which event in the assimilation process (i.e., acquisition or deployment) is treated as the adoption event. Instead of mirroring one another, a widening gap, termed here an assimilation gap, will exist between the cumulative adoption curves associated with the alternatively conceived adoption events. When a pronounced assimilation gap exists, the common practice of using cumulative purchases or acquisitions as the basis for diffusion modeling can present an illusory picture of the diffusion process, leading to potentially erroneous judgments about the robustness of the diffusion process already observed, and of the technology's future prospects. Researchers may draw inappropriate theoretical inferences about the forces driving diffusion. Practitioners may commit to a technology based on a belief that pervasive adoption is inevitable, when it is not. This study introduces the assimilation gap concept, and develops a general operational measure derived from the difference between the cumulative acquisition and deployment patterns. It describes how two characteristics, increasing returns to adoption and knowledge barriers impeding adoption, separately and in combination may serve to predispose a technology to exhibit a pronounced gap. It develops techniques for measuring assimilation gaps, for establishing whether two gaps are significantly different from each other, and for establishing whether a particular gap is absolutely large enough to be of substantive interest. Finally, it demonstrates these techniques in an analysis of adoption data for three prominent innovations in software process technology: relational database management systems (RDBs), general purpose fourth generation languages (4GLs), and computer aided software engineering tools (CASE). The analysis confirmed that assimilation gaps can be sensibly measured, and that their measured size is largely consistent with a priori expectations and recent research results. A very pronounced gap was found for CASE, while more moderate, though still significant, gaps were found for RDBs and 4GLs. These results have the immediate implication that, where the possibility of a substantial assimilation gap exists, the time of deployment should be captured instead of, or in addition to, time of acquisition as the basis for diffusion modeling. More generally, the results suggest that observers be guarded about concluding, based on sales data, that an innovation is destined to become widely used. In addition, by providing the ability to analyze and compare assimilation gaps, this study provides an analytic foundation for future research on why assimilation gaps occur, and what might be done to reduce them.
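
A minimal sketch, on hypothetical data, of the operational idea described above: the assimilation gap as the divergence between cumulative acquisition and cumulative deployment curves over time. The sample series, the per-year gap, and the area-between-curves summary are assumptions for illustration, not the paper's measures or data.

```python
import numpy as np

years = np.arange(1985, 1996)
cumulative_acquisitions = np.array([2, 5, 12, 25, 45, 70, 95, 120, 140, 155, 165])
cumulative_deployments  = np.array([0, 1,  3,  7, 14, 24, 36,  50,  63,  74,  83])

# Point-in-time gap at each year, plus a simple summary as the area between the curves
gap_by_year = cumulative_acquisitions - cumulative_deployments
area_between_curves = np.trapz(gap_by_year, years)

print(gap_by_year)
print(f"area between acquisition and deployment curves: {area_between_curves:.0f} firm-years")
```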


IEEE Transactions on Software Engineering | 1998

Managerial use of metrics for object-oriented software: an exploratory analysis

Shyam R. Chidamber; David P. Darcy; Chris F. Kemerer

With the increasing use of object-oriented methods in new software development, there is a growing need to both document and improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practising managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures, such as size in lines of code, and after controlling for the effects of individual developers.
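
A minimal sketch of the kind of exploratory comparison described above: checking whether object-oriented design metrics add explanatory power over and above a traditional size measure. The synthetic data, metric names, and numpy-based OLS fit are illustrative assumptions, not the paper's data or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
kloc = rng.uniform(1, 50, n)      # size in thousands of lines of code
wmc  = rng.uniform(5, 60, n)      # average weighted methods per class (assumed metric)
cbo  = rng.uniform(1, 20, n)      # average coupling between object classes (assumed metric)
effort = 2.0 * kloc + 1.5 * wmc + 3.0 * cbo + rng.normal(0, 10, n)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

size_only       = np.column_stack([np.ones(n), kloc])
with_oo_metrics = np.column_stack([np.ones(n), kloc, wmc, cbo])

print("R^2, size only:        ", round(r_squared(size_only, effort), 3))
print("R^2, size + OO metrics:", round(r_squared(with_oo_metrics, effort), 3))
```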


IEEE Transactions on Software Engineering | 1999

An empirical approach to studying software evolution

Chris F. Kemerer; Sandra A. Slaughter

With the approach of the new millennium, a primary focus in software engineering involves issues relating to upgrading, migrating, and evolving existing software systems. In this environment, the role of careful empirical studies as the basis for improving software maintenance processes, methods, and tools is highlighted. One of the most important processes that merits empirical evaluation is software evolution. Software evolution refers to the dynamic behaviour of software systems as they are maintained and enhanced over their lifetimes. Software evolution is particularly important as systems in organizations become longer-lived. However, evolution is challenging to study due to the longitudinal nature of the phenomenon in addition to the usual difficulties in collecting empirical data. We describe a set of methods and techniques that we have developed and adapted to empirically study software evolution. Our longitudinal empirical study involves collecting, coding, and analyzing more than 25000 change events to 23 commercial software systems over a 20-year period. Using data from two of the systems, we illustrate the efficacy of flexible phase mapping and gamma sequence analytic methods, originally developed in social psychology to examine group problem solving processes. We have adapted these techniques in the context of our study to identify and understand the phases through which a software system travels as it evolves over time. We contrast this approach with time series analysis. Our work demonstrates the advantages of applying methods and techniques from other domains to software engineering and illustrates how, despite difficulties, software evolution can be empirically studied.
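
A minimal sketch, and only an assumption about one ingredient of the gamma sequence methods mentioned above: a Goodman-Kruskal-style precedence score indicating how consistently one type of change event precedes another in a system's maintenance history. The event labels and toy history are hypothetical.

```python
def precedence_gamma(events, earlier_type, later_type):
    """+1 if every earlier_type event precedes every later_type event, -1 if the reverse."""
    earlier_positions = [i for i, e in enumerate(events) if e == earlier_type]
    later_positions   = [i for i, e in enumerate(events) if e == later_type]
    concordant = sum(1 for a in earlier_positions for b in later_positions if a < b)
    discordant = sum(1 for a in earlier_positions for b in later_positions if a > b)
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# A toy change-event history in which adaptive work largely precedes corrective work
history = ["adaptive", "adaptive", "corrective", "adaptive", "corrective", "corrective"]
print(precedence_gamma(history, "adaptive", "corrective"))  # 7/9, roughly 0.78
```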


Communications of The ACM | 1993

Software complexity and maintenance costs

Rajiv D. Banker; Srikant M. Datar; Chris F. Kemerer; Dani Zweig

While the link between the difficulty in understanding computer software and the cost of maintaining it is appealing, prior empirical evidence linking software complexity to software maintenance costs is relatively weak [21]. Many of the attempts to link software complexity to maintainability are based on experiments involving small pieces of code, or are based on analysis of software written by students. Such evidence is valuable, but several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures [13,17].


Communications of The ACM | 1993

Reliability of function points measurement: a field experiment

Chris F. Kemerer

Software engineering management encompasses two major functions, planning and control, both of which require the capability to accurately and reliably measure the software being delivered. Planning of software development projects emphasizes estimation of appropriate budgets and schedules. Control of software development requires a means to measure progress on the project and to perform after-the-fact evaluations of the project, for example, to evaluate the effectiveness of the tools and techniques employed on the project to improve productivity.
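
A minimal sketch of unadjusted function point counting, the measure whose reliability the study examines. The complexity weights below are the commonly published Albrecht/IFPUG-style values and, along with the example counts, are assumptions for illustration rather than data or procedures from the paper.

```python
# Commonly cited complexity weights for the five function point component types (assumed here)
WEIGHTS = {
    "external_input":          {"low": 3, "average": 4,  "high": 6},
    "external_output":         {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":        {"low": 3, "average": 4,  "high": 6},
    "internal_logical_file":   {"low": 7, "average": 10, "high": 15},
    "external_interface_file": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(counts):
    """counts maps (component_type, complexity) to the number of such components."""
    return sum(WEIGHTS[ctype][complexity] * n for (ctype, complexity), n in counts.items())

example = {
    ("external_input", "average"): 12,
    ("external_output", "high"): 5,
    ("external_inquiry", "low"): 8,
    ("internal_logical_file", "average"): 4,
    ("external_interface_file", "low"): 2,
}
print(unadjusted_function_points(example))  # 12*4 + 5*7 + 8*3 + 4*10 + 2*5 = 157
```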


IEEE Transactions on Software Engineering | 1989

Scale Economies in New Software Development

Rajiv D. Banker; Chris F. Kemerer

In this paper we reconcile two opposing views regarding the presence of economies or diseconomies of scale in new software development. Our general approach hypothesizes a production function model of software development that allows for both increasing and decreasing returns to scale, and argues that local scale economies or diseconomies depend upon the size of projects. Using eight different data sets, including several reported in previous research on the subject, we provide empirical evidence in support of our hypothesis. Through the use of the nonparametric DEA technique we also show how to identify the most productive scale size, which may vary across organizations. Index Terms: data envelopment analysis, function points, productivity measurement, scale economies, software development, source lines of code.
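
A minimal sketch, on assumed data, of the basic intuition above: fit a log-log relationship between project size and effort and read economies or diseconomies of scale off the exponent. The synthetic data and the single-equation OLS form are illustrative only; the paper's production function models and DEA analysis are considerably richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
size_fp = rng.uniform(50, 2000, 40)                              # project size in function points
effort_hours = 30 * size_fp ** 1.1 * np.exp(rng.normal(0, 0.2, 40))  # synthetic effort data

# ln(effort) = ln(a) + b * ln(size); b > 1 suggests local diseconomies of scale,
# b < 1 suggests economies of scale for projects in this size range.
X = np.column_stack([np.ones_like(size_fp), np.log(size_fp)])
(log_a, b), *_ = np.linalg.lstsq(X, np.log(effort_hours), rcond=None)
print(f"estimated scale exponent b = {b:.2f}")
```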


Decision Support Systems | 1992

Recent applications of economic theory in Information Technology research

J. Yannis Bakos; Chris F. Kemerer

Academicians and practitioners are becoming increasingly interested in the economics of Information Technology (IT). In part, this interest stems from the increased role that IT now plays in the strategic thinking of most large organizations, and from the significant dollar costs expended by these organizations on IT. Naturally enough, researchers are turning to economics as a reference discipline in their attempt to answer questions concerning both the value added by IT and the true cost of providing IT resources. This increased interest in the economics of IT is manifested in the application of a number of aspects of economic theory in recent information systems research, leading to results that have appeared in a wide variety of publication outlets. This article reviews this work and provides a systematic categorization as a first step in establishing a common research tradition, and to serve as an introduction for researchers beginning work in this area. Six areas of economic theory are represented: information economics, production economics, economic models of organizational performance, industrial organization, institutional economics (agency theory and transaction cost theory), and macroeconomic studies of IT impact. For each of these areas, recent work is reviewed and suggestions for future research are provided.


IEEE Software | 1992

How the learning curve affects CASE tool adoption

Chris F. Kemerer

Part of adopting an industrial process is to go through a learning curve that measures the rate at which the average unit cost of production decreases as the cumulative amount produced increases. It is argued that organizations buy integrated CASE tools only to leave them on the shelf because they misinterpret the learning curve and its effect on productivity. It is shown that learning-curve models can quantitatively document the productivity effect of integrated CASE tools by factoring out the learning costs so that managers can use model results to estimate future projects with greater accuracy. Without this depth of understanding, managers are likely to make less-than-optimal decisions about integrated CASE and may abandon the technology too soon. The influence of learning curves on CASE tools and the adaptation of learning-curve models to integrated CASE are discussed. The three biggest tasks in implementing learning curves in integrated CASE settings (locating a suitable data site, collecting the data, and validating the results) are also discussed.
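
A minimal sketch of the classic log-linear learning curve the article builds on: the cost of the n-th unit falls by a fixed percentage each time cumulative output doubles. The 90% learning rate, the 100-hour first unit, and the project-unit framing are assumptions for illustration.

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate):
    """Cost of the n-th unit under a log-linear learning curve (e.g. learning_rate=0.9 for 90%)."""
    b = math.log(learning_rate) / math.log(2)   # each doubling of output costs learning_rate as much
    return first_unit_cost * cumulative_units ** b

# Hours per project-unit with a 90% learning curve and a 100-hour first unit
for n in (1, 2, 4, 8):
    print(n, round(unit_cost(100, n, 0.9), 1))   # 100.0, 90.0, 81.0, 72.9
```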

Collaboration


Dive into Chris F. Kemerer's collaborations.

Top Co-Authors

Sandra A. Slaughter (Georgia Institute of Technology)
Shyam R. Chidamber (Massachusetts Institute of Technology)
Michael A. Cusumano (Massachusetts Institute of Technology)
Charles Zhechao Liu (University of Texas at San Antonio)
Erik Brynjolfsson (Massachusetts Institute of Technology)