Publication


Featured research published by Sandra A. Slaughter.


IEEE Transactions on Software Engineering | 1999

An empirical approach to studying software evolution

Chris F. Kemerer; Sandra A. Slaughter

With the approach of the new millennium, a primary focus in software engineering involves issues relating to upgrading, migrating, and evolving existing software systems. In this environment, the role of careful empirical studies as the basis for improving software maintenance processes, methods, and tools is highlighted. One of the most important processes that merits empirical evaluation is software evolution. Software evolution refers to the dynamic behavior of software systems as they are maintained and enhanced over their lifetimes. Software evolution is particularly important as systems in organizations become longer-lived. However, evolution is challenging to study due to the longitudinal nature of the phenomenon in addition to the usual difficulties in collecting empirical data. We describe a set of methods and techniques that we have developed and adapted to empirically study software evolution. Our longitudinal empirical study involves collecting, coding, and analyzing more than 25,000 change events to 23 commercial software systems over a 20-year period. Using data from two of the systems, we illustrate the efficacy of flexible phase mapping and gamma sequence analytic methods, originally developed in social psychology to examine group problem-solving processes. We have adapted these techniques in the context of our study to identify and understand the phases through which a software system travels as it evolves over time. We contrast this approach with time series analysis. Our work demonstrates the advantages of applying methods and techniques from other domains to software engineering and illustrates how, despite difficulties, software evolution can be empirically studied.
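As a minimal illustration of the kind of longitudinal change-event analysis the abstract describes (a sketch in our own notation, not the authors' actual coding scheme or pipeline), the hypothetical Python snippet below codes change events by maintenance type and summarizes how a system's change profile shifts across periods of its life:

```python
# Hypothetical sketch: summarizing coded change events over a system's life.
# Event types and data are illustrative, not the study's actual coding scheme.
from collections import Counter

# Each change event: (year, type), where type is e.g. corrective,
# adaptive, or perfective maintenance.
events = [
    (1988, "corrective"), (1988, "perfective"), (1991, "adaptive"),
    (1994, "corrective"), (1994, "corrective"), (1997, "perfective"),
]

def phase_profile(events, period=5):
    """Group change events into fixed-length periods and count each type."""
    profile = {}
    for year, kind in events:
        bucket = (year // period) * period  # start year of the period
        profile.setdefault(bucket, Counter())[kind] += 1
    return profile

for start, counts in sorted(phase_profile(events).items()):
    print(f"{start}-{start + 4}: {dict(counts)}")
```

Phase-mapping and gamma sequence methods go further than such simple tabulation, testing whether the observed ordering of coded events matches a hypothesized phase sequence.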


Communications of the ACM | 1996

Employment outsourcing in information systems

Sandra A. Slaughter; Soon Ang

In recent years, information systems (IS) outsourcing has become so pervasive it can no longer be ignored. An important question is why firms choose to outsource IS work. This question has been considered from a number of perspectives. Lacity and Hirschheim [14], for example, showed how the dynamics of internal politics led to IS outsourcing. Loh and Venkatraman [15] suggested that the outsourcing behavior of a prominent blue-chip company, such as Eastman Kodak, can lead to imitative behavior throughout the IS community. In contrast, our study examines the reasons for IS outsourcing from a somewhat different perspective: that of labor market economics. From this perspective, outsourcing is a result of how firms respond to the costs and benefits of employment arrangements with their IS workers. In the classic economic view of labor markets, workers move freely and frequently between jobs to take advantage of better employment opportunities. According to Doeringer and Piore [8], the traditional long-term employment arrangement replaced the open labor market because it afforded principals (employers) greater control and influence over agents (their employees). More recently, however, firms have been moving away from the traditional, long-term employment arrangement (insourcing) to relatively shorter-term, market-mediated arrangements (outsourcing). Outsourcing reflects the increasing trend toward "taking the workers back out" [19], in which organizations alter the work relationship with their employees by reducing the duration of employment and their degree of administrative control over workers. In IS outsourcing, taking the workers back out can occur in many ways. A firm can either contract directly with an IS professional for his or her services or contract indirectly with an employee leasing company, a consulting firm, or an IS service provider. Such practices can benefit both the firm and the IS worker. Although the Eastman Kodak outsourcing arrangement represents total IS outsourcing and has become a popular practice in industry, firms can also choose to selectively outsource for particular IS skills or jobs. But why do firms choose to selectively or completely outsource IS? From a labor market perspective, outsourcing is the response of firms to the costs and disadvantages of the traditional permanent work arrangement that arise from dynamic changes in technology and the environment. Due to the increasingly rapid evolution of information technology (IT), IS work is characterized by skills deterioration and specific skills shortages [16, 25]. Thus, a firm's ability to find and acquire the necessary IS skills is paramount. Under these circumstances, relying on retraining a permanent work force may be cost prohibitive. In addition, because IT evolves so rapidly, by the time a firm invests in and trains its IS staff on a certain technology, that technology may be obsolete. There are indications that firms face increasing turbulence in the environment.


Management Science | 2007

Learning from Experience in Software Development: A Multilevel Analysis

Wai Fong Boh; Sandra A. Slaughter; J. Alberto Espinosa

This study examines whether individuals, groups, and organizational units learn from experience in software development and whether this learning improves productivity. Although prior research has found the existence of learning curves in manufacturing and service industries, it is not clear whether learning curves also apply to knowledge work like software development. We evaluate the relative productivity impacts from accumulating specialized experience in a system, diversified experience in related and unrelated systems, and experience from working with others on modification requests (MRs) in a telecommunications firm, which uses an incremental software development methodology. Using multilevel modeling, we analyze extensive data archives covering more than 14 years of systems development work on a major telecommunications product dating from the beginning of its development process. Our findings reveal that the relative importance of the different types of experience differs across levels of analysis. Specialized experience has the greatest impact on productivity for MRs completed by individual developers, whereas diverse experience in related systems plays a larger role in improving productivity for MRs and system releases completed by groups and organizational units. Diverse experience in unrelated systems has the least influence on productivity at all three levels of analysis. Our findings support the existence of learning curves in software development and provide insights into when specialized or diverse experience may be more valuable.
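For intuition about the multilevel modeling approach the abstract mentions, the sketch below fits a random-intercept mixed-effects model on synthetic data. Variable names, data, and the single grouping level are our illustrative assumptions; the paper's actual model includes more predictors, controls, and levels of analysis.

```python
# Hypothetical sketch of a multilevel (mixed-effects) productivity model.
# Column names and synthetic data are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "specialized_exp": rng.exponential(2.0, n),  # experience in this system
    "related_exp": rng.exponential(1.0, n),      # experience in related systems
    "unrelated_exp": rng.exponential(1.0, n),    # experience in unrelated systems
    "org_unit": rng.integers(0, 8, n),           # grouping: organizational unit
})
df["productivity"] = (
    0.5 * df["specialized_exp"] + 0.2 * df["related_exp"] + rng.normal(0, 1, n)
)

# A random intercept per organizational unit captures unit-level variation,
# so experience effects are estimated net of between-unit differences.
model = smf.mixedlm(
    "productivity ~ specialized_exp + related_exp + unrelated_exp",
    data=df, groups=df["org_unit"],
)
print(model.fit().summary())
```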


Communications of the ACM | 1998

Evaluating the cost of software quality

Sandra A. Slaughter; Donald E. Harter; Mayuram S. Krishnan

There is some confusion about the business value of quality even outside the software development context. On the one hand, there are those who believe that it is economical to maximize quality. This is the "quality is free" perspective espoused by Crosby [7], Juran and Gryna [8], and others. Their key argument is that as the voluntary costs of defect prevention are increased, the involuntary costs of rework decrease by much more than the increase in prevention costs. The net result is lower total costs, and thus quality is free. On the other hand, there are those who believe it is uneconomical to have high levels of quality and assume they must sacrifice quality to achieve other objectives such as reduced development cycles. For example, a study of adoption of the Software Engineering Institute's Capability Maturity Model (CMM) reports the following quote from a software manager: "I'd rather have it wrong than have it late. We can always fix it later" [11].
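The "quality is free" argument can be stated as a simple cost identity. The notation below is ours, not the paper's: total quality cost is prevention plus rework, and investing in prevention pays wherever rework falls faster than prevention spending rises.

```latex
% Illustrative cost-of-quality identity (notation is ours, not the paper's).
C_{\text{total}}(q) = C_{\text{prevention}}(q) + C_{\text{rework}}(q)

% "Quality is free" holds where marginal rework savings exceed
% marginal prevention cost:
\frac{dC_{\text{total}}}{dq} < 0
\quad\Longleftrightarrow\quad
-\frac{dC_{\text{rework}}}{dq} > \frac{dC_{\text{prevention}}}{dq}
```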


Management Science | 2002

Human Capital and Institutional Determinants of Information Technology Compensation: Modeling Multilevel and Cross-Level Interactions

Soon Ang; Sandra A. Slaughter; Kok Yee Ng

Compensation is critical in attracting and retaining information technology (IT) professionals. However, there has been very little research on IT compensation. Juxtaposing theories of compensation that focus on human capital endowments and labor market segmentation, we hypothesize multilevel and cross-level determinants of compensation. We use hierarchical linear modeling to analyze archival salary data for 1,576 IT professionals across 39 institutions. Results indicate that compensation is directly determined by human capital endowments of education and experience. Institutional differentials do not directly drive compensation, but instead moderate the relationship of human capital endowments to compensation. Large institutions pay more than small institutions to IT professionals with more education, while small institutions pay more than large institutions to IT professionals with less education. Not-for-profit institutions pay more than for-profits to IT professionals with more education or IT-specific education. Further, information-intensive institutions pay more than non-information-intensive institutions to IT professionals with more education or IT-specific education. We interpret these results in the context of institutional rigidity, core competencies, and labor shortages in the IT labor market.
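The cross-level structure can be sketched as a two-level hierarchical linear model. The equations below are an illustrative simplification with hypothetical predictors, not the paper's full specification:

```latex
% Level 1 (IT professional i in institution j):
\text{salary}_{ij} = \beta_{0j} + \beta_{1j}\,\text{education}_{ij} + r_{ij}

% Level 2 (institution j moderates the education slope, e.g. via size):
\beta_{0j} = \gamma_{00} + u_{0j}
\beta_{1j} = \gamma_{10} + \gamma_{11}\,\text{size}_j + u_{1j}
```

A significant cross-level coefficient such as gamma_11 is what "institutional differentials moderate the relationship of human capital to compensation" means in model terms.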


Information Systems Research | 2000

The Moderating Effects of Structure on Volatility and Complexity in Software Enhancement

Rajiv D. Banker; Sandra A. Slaughter

The cost of enhancing software applications to accommodate new and evolving user requirements is significant. Many enhancement cost-reduction initiatives have focused on increasing software structure in applications. However, while software structure can decrease enhancement effort by localizing data processing, increased effort is also required to comprehend structure. Thus, it is not clear whether high levels of software structure are economically efficient in all situations. In this study, we develop a model of the relationship between software structure and software enhancement costs and errors. We introduce the notion of software structure as a moderator of the relationship between software volatility, total data complexity, and software enhancement outcomes. We posit that it is efficient to more highly structure the more volatile applications, because increased familiarity with the application structure through frequent enhancement enables localization of maintenance effort. For more complex applications, software structure is more beneficial than for less complex applications because it facilitates the comprehension process where it is most needed. Given the downstream enhancement benefits of structure for more volatile and complex applications, we expect that the optimal level of structure is higher for these applications. We empirically evaluate our model using data collected on the business applications of a major mass merchandiser and a large commercial bank. We find that structure moderates the relationship between complexity, volatility, and enhancement outcomes, such that higher levels of structure are more advantageous for the more complex and more volatile applications in terms of reduced enhancement costs and errors. We also find that more structure is designed in for volatile applications and for applications with higher levels of complexity. Finally, we identify application type as a significant factor in predicting which applications are more volatile and more complex at our research sites. That is, applications with induction-based algorithms such as those that support planning, forecasting, and management decision-making activities are more complex and more volatile than applications with rule-based algorithms that support operational and transaction-processing activities. Our results indicate that high investment in software quality practices such as structured design is not economically efficient in all situations. Our findings also suggest the importance of organizational mechanisms in promoting efficient design choices that lead to reduced enhancement costs and errors.
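Structure as a moderator can be written as an interaction model. This is an illustrative form in our own notation, not the authors' estimated specification:

```latex
% Enhancement cost with structure moderating volatility and complexity:
\text{cost} = \beta_0 + \beta_1\,\text{volatility} + \beta_2\,\text{complexity}
            + \beta_3\,\text{structure}
            + \beta_4\,(\text{structure} \times \text{volatility})
            + \beta_5\,(\text{structure} \times \text{complexity})
            + \varepsilon
```

Negative interaction coefficients (beta_4, beta_5) would indicate that structure reduces enhancement cost more for volatile and complex applications, the pattern the study reports.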


IEEE Transactions on Software Engineering | 2005

The structural complexity of software: an experimental test

David P. Darcy; Chris F. Kemerer; Sandra A. Slaughter; James E. Tomayko

This research examines the structural complexity of software and, specifically, the potential interaction of the two dominant dimensions of structural complexity, coupling and cohesion. Analysis based on an information processing view of developer cognition results in a theoretically driven model with cohesion as a moderator for a main effect of coupling on effort. An empirical test of the model was devised in a software maintenance context utilizing both procedural and object-oriented tasks, with professional software engineers as participants. The results support the model in that there was a significant interaction effect between coupling and cohesion on effort, even though there was no main effect for either coupling or cohesion. The implication of this result is that, when designing, implementing, and maintaining software to control complexity, both coupling and cohesion should be considered jointly, instead of independently. By providing guidance on structuring software for software professionals and researchers, these results enable software to continue as the solution of choice for a wider range of richer, more complex problems.
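For intuition, coupling and cohesion can be approximated with simple static metrics. The Python sketch below is our illustration of the two constructs, not the instruments used in the study, which measured effort on controlled maintenance tasks:

```python
# Hypothetical sketch: crude coupling and cohesion proxies for one module.
# Metrics and data are illustrative, not the study's measurement approach.
from itertools import combinations

# Module description: each method -> set of attributes it touches,
# plus the set of external modules the whole module calls.
methods = {
    "open":  {"path", "handle"},
    "read":  {"handle", "buffer"},
    "close": {"handle"},
}
external_calls = {"os", "logging"}  # modules referenced from here

# Coupling proxy: fan-out, the number of distinct external modules used.
coupling = len(external_calls)

# Cohesion proxy: fraction of method pairs sharing at least one attribute
# (higher is more cohesive; related to the LCOM family of metrics).
pairs = list(combinations(methods.values(), 2))
shared = sum(1 for a, b in pairs if a & b)
cohesion = shared / len(pairs) if pairs else 1.0

print(f"fan-out coupling = {coupling}, cohesion = {cohesion:.2f}")
```

The paper's finding that coupling and cohesion interact suggests evaluating such measures jointly rather than optimizing either one in isolation.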


IEEE Software | 2003

Is "Internet-speed" software development different?

Richard Baskerville; Balasubramaniam Ramesh; Linda Levine; Jan Pries-Heje; Sandra A. Slaughter

Developing software at Internet speed requires a flexible development environment that can cope with fast-changing requirements and an increasingly demanding market. Agile principles are better suited than traditional software development principles to provide such an environment.


Information Systems Research | 2009

Introduction to the Special Issue---Flexible and Distributed Information Systems Development: State of the Art and Research Challenges

Pär J. Ågerfalk; Brian Fitzgerald; Sandra A. Slaughter

Process flexibility and globally distributed development are two major current trends in software and information systems development (ISD). The quest for flexibility is very much evident in the recent development and increasing acceptance of various agile methods, such as eXtreme Programming (Beck and Andres 2005) and Scrum (Schwaber and Beedle 2002). Agile development methods are examples of apparently major success stories that seem to have run counter to the prevailing wisdom in information systems (IS) and software engineering. However, rather than being antimethod, agile approaches operate on the principle of "just enough method." The quest for flexibility is also apparent in the currently increasing interest in striking a balance between the rigor of traditional approaches and the need for adaptation of those approaches to suit particular development situations. Although suitable methods may exist, developers struggle in practice when selecting methods and tailoring them to suit their needs. Certainly, agile methods are not exempt from this problem as they too need to be flexibly tailored to the development context at hand (Fitzgerald et al. 2006a). Distributed development recognizes that, more and more, ISD takes place in globally distributed settings. This is perhaps most evident in the many cases of offshoring and outsourcing of software development to low-cost countries (King and Torkzadeh 2008). Distributed development places new demands on the development process through the increased complexity related to communication, coordination, cooperation, control, and culture, as well as to technology and tools. Interestingly, many of the difficulties faced in globally distributed ISD are the same issues surfaced by agile methods and development flexibility in general. It is something of an irony that the special issue before us appears on the bicentenary of Darwin's birth. Evolutionary theory suggests that success and survival are not the preserve of the strongest nor the most intelligent. Rather, the ability to adapt to changing circumstances is the key trait. Flexibility, one of the twin primary points of focus for this special issue, addresses this trait directly. A further parallel is that Darwin's theory of evolution was best exemplified by differences across different spatial locations. This is also inherent in the second focal point of the special issue's dual focus: distributed development.


IEEE Computer | 2001

How Internet software companies negotiate quality

Richard Baskerville; Linda Levine; Jan Pries-Heje; Sandra A. Slaughter

In Internet-speed development, innovation and time-to-market work against software quality. Browser giants like Microsoft Internet Explorer and Netscape Navigator are openly dealing with quality issues. The practices of application firms and smaller niche firms are less clear, but there are important trends.

Collaboration


Dive into Sandra A. Slaughter's collaboration network.

Top Co-Authors

Soon Ang, Nanyang Technological University
Donald E. Harter, Carnegie Mellon University
Damien Joseph, Nanyang Technological University
Chris Forman, Georgia Institute of Technology