
Publications


Featured research published by David J. Carney.


Information & Software Technology | 1998

A basis for evaluation of commercial software

David J. Carney; Kurt C. Wallnau

The genesis of this paper lies in our observation that there are almost as many perspectives on the topic of software evaluation as there are evaluation techniques. Is this diversity an inherent characteristic of software evaluation itself, or is it reflective of the confusion found in an immature discipline? We believe that both are substantially true. In this paper we first state why we believe the conceptual space of software evaluation is so broad. We then develop some basic principles that provide structure and boundaries to this conceptual space. Although these principles apply to the evaluation of commercial off-the-shelf (COTS) software in particular, we believe they have relevance to the topic of software evaluation in general.
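As an illustrative aside, one common way to give concrete structure to an evaluation is a weighted-criteria comparison of candidate COTS products. The sketch below is a minimal example of that general idea; the criteria, weights, and scores are invented for illustration and are not the principles the paper develops.

```python
# Hypothetical weighted-criteria comparison of COTS candidates.
# The criteria, weights, and scores are illustrative only; the paper argues
# for principles that structure evaluation, not this specific scheme.

criteria_weights = {
    "functional_fit": 0.4,
    "vendor_stability": 0.2,
    "integration_effort": 0.3,
    "licensing_cost": 0.1,
}

candidates = {
    "ProductA": {"functional_fit": 8, "vendor_stability": 6, "integration_effort": 5, "licensing_cost": 7},
    "ProductB": {"functional_fit": 6, "vendor_stability": 9, "integration_effort": 8, "licensing_cost": 5},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (0-10) into a single weighted value."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```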


Journal of Software Maintenance and Evolution: Research and Practice | 2000

Complex COTS-based software systems: practical steps for their maintenance

David J. Carney; Scott A. Hissam; Daniel Plakosh

This paper makes pragmatic recommendations for the maintenance of complex COTS-based systems. We first enumerate the issues that can arise in systems that rely on COTS products, whether in operational systems themselves or in the support systems used to create, modify, or test operational systems. We then suggest principles by which maintenance practice for such systems can be facilitated, particularly for those safety-critical systems for which significant risk is present if they fail. These principles aim at making explicit, during system creation, the COTS-related development practices upon which successful system maintenance will subsequently depend. They also depend on a reasonable means of determining, during system maintenance, how much risk is acceptable in using new releases of COTS products.
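The question of how much risk is acceptable in adopting a new COTS release can be pictured with a small screening rule. The sketch below is a hypothetical illustration only; the factors, weights, and thresholds are assumptions, not the authors' criteria.

```python
# Hypothetical risk screen for adopting a new COTS release.
# Factors and thresholds are illustrative assumptions, not the paper's method.

from dataclasses import dataclass

@dataclass
class ReleaseChange:
    breaking_api_changes: int      # count of interfaces the system uses that changed
    touches_safety_function: bool  # does the change affect a safety-critical path?
    vendor_regression_suite: bool  # did the vendor publish regression test results?

def upgrade_risk(change: ReleaseChange) -> str:
    """Classify a COTS release as low/medium/high risk for this system."""
    score = change.breaking_api_changes
    if change.touches_safety_function:
        score += 5
    if not change.vendor_regression_suite:
        score += 2
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

print(upgrade_risk(ReleaseChange(1, False, True)))   # low
print(upgrade_risk(ReleaseChange(2, True, False)))   # high
```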


Journal of Software Maintenance and Evolution: Research and Practice | 1999

Isolating faults in complex COTS-based systems

Scott A. Hissam; David J. Carney

This monograph provides an overview of a method for isolating and overcoming faults in COTS-based systems. It describes a method and mechanisms that are useful for engineers and integrators who are tasked with assembling complex systems from heterogeneous sources. While other readers may find value in this report, it is specifically written for a technical audience. The method described in this monograph has been used on various systems. One such use is described in the SEI monograph Case Study: Correcting System Failure in a COTS Information System.
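One generic mechanism this kind of fault isolation can rest on is tracing at the integration boundary: wrapping calls into a black-box component so that inputs, outputs, and failures are recorded, which helps localize whether a fault lies in the component or in the integration code. The sketch below illustrates that general idea; the wrapped component and its interface are hypothetical, and this is not a reproduction of the SEI method.

```python
# Generic call-tracing wrapper around a black-box (e.g., COTS) component.
# The wrapped interface is hypothetical; the point is simply to record inputs,
# outputs, and exceptions at the integration boundary so faults can be localized.

import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cots-trace")

def traced(component_name: str):
    """Decorator that logs calls into a component and any failures it raises."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            log.info("%s.%s called with %r %r", component_name, func.__name__, args, kwargs)
            try:
                result = func(*args, **kwargs)
            except Exception:
                log.exception("%s.%s raised", component_name, func.__name__)
                raise
            log.info("%s.%s returned %r", component_name, func.__name__, result)
            return result
        return wrapper
    return decorate

@traced("HypotheticalCotsParser")
def parse_record(raw: str) -> dict:
    # Stand-in for a call into the black-box component.
    key, _, value = raw.partition("=")
    return {key: value}

parse_record("id=42")
```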


Engineering of Computer-Based Systems | 1995

A case study in assessing the maintainability of large, software-intensive systems

Alan W. Brown; David J. Carney; Paul C. Clements

Maintenance of a computer-based system accounts for a large proportion of the total system cost. However, no well-established techniques exist for assessing the maintainability of such a system. This paper presents a case study in assessing the maintainability of a large, software-intensive system. The techniques we used are described, and their strengths and weaknesses are discussed.


European Software Engineering Conference | 1995

Assessing the Quality of Large, Software-Intensive Systems: A Case Study

Alan W. Brown; David J. Carney; Paul C. Clements; B. Craig Meyers; Dennis B. Smith; Nelson H. Weiderman; William G. Wood

This paper presents a case study in carrying out an audit of a large, software-intensive system. We discuss our experience in structuring the team for obtaining maximum effectiveness under a short deadline. We also discuss the goals of an audit, the methods of gathering and assimilating information, and specific lines of inquiry to be followed. We present observations on our approach in light of our experience and feedback from the customer. In the past decade, as engineers have attempted to build software-intensive systems of a scale not dreamed of heretofore, there have been extraordinary successes and failures. Those projects that have failed have often been spectacular and highly visible [3], particularly those commissioned with public money. Such failures do not happen all at once; like Brooks' admonition that schedules slip one day at a time [2], failures happen incrementally. The symptoms of a failing project range from the subtle (a customer's vague feelings of uneasiness) to the ridiculous (the vendor slips the schedule for the eighth time and promises that another 30 million will fix everything). A project that has passed the "failure in progress" stage and gone on to full-fledged meltdown can be spotted by one sure symptom: the funding authority curtails payment and severely slows development. When that happens, the obvious question is asked by every involved party: "What now?" The answer is often an audit. This paper summarizes the experience of an audit undertaken by the Software Engineering Institute (SEI) in the summer of 1994 to examine a large, highly visible development effort exhibiting the meltdown symptom suggested above. The customer was a government agency in the process of procuring a large software-intensive system from a major contractor. The audit team included the authors of this paper, as well as members from other organizations. Members of the team had extensive backgrounds and expertise in software engineering, in large systems development, …


ACM Sigsoft Software Engineering Notes | 2005

Interoperability issues affecting autonomic computing

Dennis B. Smith; Edwin J. Morris; David J. Carney

Most autonomic systems consist of a number of components and systems. These systems require a high degree of interoperability between the constituent components and systems. We describe current research on the topic of interoperability that has relevance for autonomic systems and list a set of critical properties of interoperability that need to be considered in designing autonomic systems.


Software Engineering Environments | 1993

Towards a disciplined approach to the construction and analysis of software engineering environments

Alan W. Brown; David J. Carney

In assembling a software engineering environment (SEE) from commercial off-the-shelf (COTS) tools and locally built components, a systematic approach is required that directs how the different pieces should be connected (i.e., integrated). This paper proposes such a method based on the notion that tool and component integration must take place in the context of a well-defined software process. The integration sufficiency profiles (ISPs) that result from this method can be used for constructing a new SEE or for analyzing the appropriateness of an existing SEE. A feature of this method is the emphasis on the use of ISPs in qualitative and quantitative analysis of a SEE.
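The integration sufficiency profile is not spelled out in this listing, so the sketch below only suggests, as an assumption, the kind of record such a profile might hold: process-driven integration needs between tools and whether the current SEE satisfies them. All field names are hypothetical.

```python
# Hypothetical shape of an "integration sufficiency profile" (ISP).
# The real ISP structure is defined by the paper; these fields are assumptions
# meant only to show a profile being used to analyze an existing SEE.

from dataclasses import dataclass, field

@dataclass
class IntegrationNeed:
    process_step: str      # the software-process step that drives the need
    producing_tool: str
    consuming_tool: str
    required: str          # e.g. "data exchange", "event notification"
    satisfied: bool = False

@dataclass
class IntegrationSufficiencyProfile:
    environment: str
    needs: list[IntegrationNeed] = field(default_factory=list)

    def coverage(self) -> float:
        """Fraction of process-driven integration needs the SEE currently meets."""
        if not self.needs:
            return 1.0
        return sum(n.satisfied for n in self.needs) / len(self.needs)

isp = IntegrationSufficiencyProfile("example SEE", [
    IntegrationNeed("design review", "ModelingTool", "IssueTracker", "event notification", True),
    IntegrationNeed("build", "VersionControl", "BuildServer", "data exchange", False),
])
print(f"integration coverage: {isp.coverage():.0%}")
```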


Proceedings of the 2009 ICSE Workshop on Software Development Governance | 2009

Distributed Project Governance Assessment (DPGA): Contextual, hands-on analysis for project governance across sovereign boundaries

William Anderson; David J. Carney

Managers of technology-intensive projects know that external dependencies beyond their direct control can pose significant risks to their performance. Unfortunately for them, there is a trend toward collaborative systems of systems with combined or threaded capabilities that generate many external dependencies. Today's world of networked, interoperating applications calls for a management perspective that seeks governance across sovereign boundaries. The Carnegie Mellon® Software Engineering Institute (SEI) proposes a Distributed Project Governance Assessment (DPGA) process, a work in progress, which helps organizations understand the number and characteristics of external dependencies. We seek to describe the proposed process and solicit community feedback on feasibility, shortcomings, or improvements.
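A DPGA-style assessment needs an inventory of external dependencies and their characteristics. The sketch below shows one hypothetical way to record such an inventory; the attributes and example entries are assumptions for illustration, not part of the SEI process.

```python
# Hypothetical inventory of external dependencies of the kind a governance
# assessment would need to understand; attribute names are assumptions.

from dataclasses import dataclass

@dataclass
class ExternalDependency:
    name: str
    owning_organization: str   # who controls the dependency, outside the project
    agreement: str             # e.g. "MOU", "contract", "informal"
    capability_threaded: bool  # is a combined capability threaded through it?

dependencies = [
    ExternalDependency("weather data feed", "Partner Agency A", "MOU", True),
    ExternalDependency("identity service", "Vendor B", "contract", False),
    ExternalDependency("message schema", "Coalition WG", "informal", True),
]

# A first, crude governance question: how many threaded capabilities rest on
# informal agreements outside the project's direct control?
at_risk = [d for d in dependencies if d.capability_threaded and d.agreement == "informal"]
print(f"{len(at_risk)} of {len(dependencies)} dependencies are threaded and only informally governed")
```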


TRI-Ada | 1993

A project support environment reference model

Alan W. Brown; David J. Carney; Peter H. Feiler; Patricia A. Oberndorf; Marvin V. Zelkowitz

The Navy's Next Generation Computer Resources (NGCR) program set up a Project Support Environment Standards Working Group (PSESWG) to help in the task of establishing interface standards that will allow the U.S. Navy to more easily and effectively assemble software-intensive Project Support Environments (PSEs) from commercial sources. A major focus of PSESWG is the development of a service-based reference model that will provide the context for categorizing and relating existing standards and the identification of interface areas that may benefit from future standardization. This paper presents a report on this reference model.
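A service-based reference model can be used to categorize candidate interface standards by the service area they address and to spot areas with no candidates at all. The sketch below illustrates that use in a generic way; the service areas and standard names are placeholders, not the PSESWG's actual model or mapping.

```python
# Generic illustration of categorizing candidate interface standards by the
# reference-model service area they address. Names are placeholders only.

from collections import defaultdict

candidate_standards = [
    ("data repository services", "HypotheticalDataInterchangeStd"),
    ("data repository services", "HypotheticalObjectMgmtStd"),
    ("communication services", "HypotheticalMessageBusStd"),
    ("user interface services", "HypotheticalUiToolkitStd"),
]

by_service = defaultdict(list)
for service_area, standard in candidate_standards:
    by_service[service_area].append(standard)

# Interface areas with no candidate standard may benefit from future standardization.
all_service_areas = {"data repository services", "communication services",
                     "user interface services", "process management services"}
gaps = all_service_areas - by_service.keys()
print("coverage:", dict(by_service))
print("possible standardization gaps:", sorted(gaps))
```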


Advances in Computers | 1995

On the Necessary Conditions for the Composition of Integrated Software Engineering Environments

David J. Carney; Alan W. Brown


Collaboration


Dive into David J. Carney's collaborations.

Top Co-Authors

Patrick R. Place (Carnegie Mellon University)
Lisa Brownsword (Carnegie Mellon University)
Patricia A. Oberndorf (Software Engineering Institute)
Cecilia Albert (Carnegie Mellon University)
Edwin J. Morris (Software Engineering Institute)
Daniel Plakosh (Software Engineering Institute)
Dennis B. Smith (Software Engineering Institute)
Peter H. Feiler (Carnegie Mellon University)
Scott A. Hissam (Software Engineering Institute)