Carl G. Davis
University of Alabama in Huntsville
Publications
Featured research published by Carl G. Davis.
IEEE Transactions on Software Engineering | 2002
Jagdish Bansiya; Carl G. Davis
The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. The model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationships, or links, from design properties to quality attributes are weighted according to their influence and importance. The model is validated by comparing its results against empirical data and expert opinion on several large commercial object-oriented systems. A key attribute of the model is that it can easily be modified to include different relationships and weights, providing a practical quality assessment tool adaptable to a variety of demands.
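The weighted-link idea can be pictured with a small sketch: normalized design-property scores feed into a weighted sum for a quality attribute. The property names and weights below are illustrative placeholders, not the calibrated values reported by Bansiya and Davis.

```python
# Sketch of a weighted-link quality model: design-property scores are combined
# into one quality-attribute index. Names and weights are placeholders, not the
# paper's calibrated values.
from typing import Dict

def quality_index(properties: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine normalized design-property scores into one quality attribute."""
    return sum(weights[name] * properties.get(name, 0.0) for name in weights)

# Hypothetical per-design scores for a handful of structural properties.
design = {"encapsulation": 0.8, "coupling": 0.35, "cohesion": 0.7, "modularity": 0.6}

# Hypothetical link weights for a "reusability" attribute: coupling hurts,
# the other properties help.
reusability_weights = {"encapsulation": 0.3, "coupling": -0.4, "cohesion": 0.4, "modularity": 0.3}

print(f"reusability ~ {quality_index(design, reusability_weights):.2f}")
```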
IEEE Computer | 1997
Letha H. Etzkorn; Carl G. Davis
Much object-oriented code has been written without reuse in mind, making identification of useful components difficult. The Patricia (Program Analysis Tool for Reuse) system automatically identifies these components by understanding comments and identifiers. To understand a program, Patricia uses a unique heuristic approach, deriving information from the linguistic aspects of comments and identifiers and from other nonlinguistic aspects of OO code, such as the class hierarchy. In developing the Patricia system, we had to overcome the problems of syntactically parsing natural-language comments and syntactically analyzing identifiers, all prior to a semantic understanding of the comments and identifiers. Another challenge was the semantic understanding phase, in which the organization of the knowledge base and an inferencing scheme were developed.
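One concrete step implied by this kind of analysis is splitting program identifiers into candidate English words before any semantic processing. The regex-based splitter below is a generic sketch of that step, not the Patricia system's own heuristics.

```python
# Minimal sketch of one linguistic preprocessing step: splitting identifiers
# into candidate English words. A simple stand-in, not Patricia's heuristics.
import re

def split_identifier(name: str) -> list[str]:
    """Split snake_case and camelCase identifiers into lowercase word tokens."""
    parts = re.split(r"[_\W]+", name)  # split on underscores and punctuation
    words = []
    for part in parts:
        # break camelCase / PascalCase boundaries, keeping acronym runs together
        words += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", part)
    return [w.lower() for w in words if w]

print(split_identifier("getHTTPResponseCode"))  # ['get', 'http', 'response', 'code']
print(split_identifier("queue_insert_front"))   # ['queue', 'insert', 'front']
```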
European Software Engineering Conference | 1999
Jagdish Bansiya; Carl G. Davis; Letha H. Etzkorn
The use of entropy as a measure of information content has led to its use in measuring the code complexity of functionally developed software products; however, no similar capability exists for evaluating the complexity of object-oriented systems using entropy. In this paper, a new entropy-based complexity metric for object-oriented classes is defined and validated using several large commercial object-oriented projects. The metric is computed using information available in class definitions. The new class complexity measure is correlated with traditional complexity measures such as McCabe's cyclomatic metric and the number-of-defects metric, both of which were evaluated from the implementation of the methods of the classes. The correlation study used the final versions of the class definitions. The high degree of positive correlation between the entropy-based class definition measure and the traditional measures of class implementation complexity verifies that the new entropy measure computed from class definitions can be used as a predictive measure of class implementation complexity, provided the class definitions do not change significantly during implementation.
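As a rough illustration of an entropy-style measure over a class definition, the sketch below computes the Shannon entropy of the identifier-token distribution in a class's source. The tokenization and the choice of tokens are assumptions, not the paper's exact metric definition.

```python
# Illustrative Shannon-entropy calculation over identifier-like tokens in a
# class definition; an assumption-laden sketch, not the paper's exact metric.
import math
import re
from collections import Counter

def class_definition_entropy(class_source: str) -> float:
    """Entropy (bits) of the distribution of identifier-like tokens."""
    tokens = re.findall(r"[A-Za-z_]\w*", class_source)
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

example = """
class Stack {
  public:
    void push(int value);
    int pop();
  private:
    int items[100];
    int top;
};
"""
print(f"entropy ~ {class_definition_entropy(example):.2f} bits")
```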
Information & Software Technology | 2001
Letha H. Etzkorn; William E. Hughes; Carl G. Davis
Software reuse increases productivity, reduces costs, and improves quality. Object-oriented (OO) software has been shown to be inherently more reusable than functionally decomposed software; however, most OO software was not specifically designed for reuse [Software Reuse Guidelines and Methods, Plenum Press, New York, 1991]. This paper describes the analysis, in terms of quality factors related to reusability, that underlies an approach for assessing existing OO software for reusability. An automated tool implementing the approach is validated by comparing the tool's quality determinations with those of human experts. This comparison provides insight into how OO software metrics should be interpreted in relation to the quality factors they purport to measure.
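The validation step can be pictured as a rank correlation between tool scores and expert ratings over the same components. The sketch below uses invented scores and a hand-rolled Spearman correlation purely for illustration; it is not the paper's statistical procedure.

```python
# Sketch of comparing a tool's quality scores against expert ratings with a
# rank correlation. All scores below are made-up illustration data.
def ranks(values):
    """Assign 1-based ranks; ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

tool_scores    = [0.82, 0.40, 0.65, 0.91, 0.30]  # hypothetical tool output
expert_ratings = [4,    2,    3,    5,    1]     # hypothetical expert scores
print(f"rank correlation ~ {spearman(tool_scores, expert_ratings):.2f}")
```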
Natural Language Engineering | 1999
Letha H. Etzkorn; Lisa L. Bowen; Carl G. Davis
An automated tool to assist in the understanding of legacy code components can be useful both in software reuse and in software maintenance. Most previous work in this area has concentrated on functionally oriented code. Although object-oriented code has been shown to be inherently more reusable than functionally oriented code, in many cases the eventual reuse of the object-oriented code was not considered during development. A knowledge-based, natural language processing approach to the automated understanding of object-oriented code, as an aid to its reuse, is described. A system called the PATRicia system (Program Analysis Tool for Reuse) that implements the approach is examined. The natural language processing/information extraction system that comprises a large part of the PATRicia system is discussed, and the knowledge base of the PATRicia system, in the form of conceptual graphs, is described. Reports produced by natural language generation in the PATRicia system are also described.
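A conceptual graph, in its simplest form, is a set of concept nodes joined by labeled relations. The minimal data structure below sketches that general shape with invented concepts and relations; it is not the PATRicia knowledge base itself.

```python
# Tiny sketch of a conceptual-graph style representation: concept nodes linked
# by labeled relations. Concepts and relations here are invented examples.
from dataclasses import dataclass, field

@dataclass
class ConceptualGraph:
    """Concept nodes joined by labeled conceptual relations."""
    concepts: set[str] = field(default_factory=set)
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (concept, relation, concept)

    def add(self, source: str, relation: str, target: str) -> None:
        self.concepts.update((source, target))
        self.relations.append((source, relation, target))

# e.g. a graph for the comment "queue stores messages in FIFO order"
g = ConceptualGraph()
g.add("queue", "agent-of", "store")
g.add("store", "object", "message")
g.add("store", "manner", "FIFO-order")
print(g.relations)
```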
Journal of Pragmatics | 2001
Letha H. Etzkorn; Carl G. Davis; Lisa L. Bowen
A sublanguage is a subset of a natural language such as English. Sublanguages tend to emerge gradually through the use of a language in various fields by specialists in those fields; examples include the ‘language of biophysics’ and the ‘language of naval telegraphic transmissions’. This paper explores whether English-language comments in object-oriented software can be considered a sublanguage of English, using standard criteria for sublanguage determination. To make this determination, the article examines the grammatical content of comments, including sentence-style versus non-sentence-style comments and the use of tense, mood, and voice in sentence-style comments. The telegraphic nature of comments is also examined. Additionally, the subject matter of comments is analyzed in terms of the purpose of comments in describing the operation of computer software.
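One of the grammatical distinctions examined, sentence-style versus non-sentence-style (telegraphic) comments, can be approximated mechanically with a crude heuristic like the one below. The paper's analysis is manual and far more careful; this is only a stand-in to make the distinction concrete.

```python
# Rough heuristic for sentence-style vs. telegraphic comments: a crude rule
# (verb-like word plus terminal punctuation), not the paper's grammatical analysis.
import re

def looks_like_sentence(comment: str) -> bool:
    text = comment.strip().lstrip("/*# ").strip()
    has_terminal_punct = text.endswith((".", "!", "?"))
    # crude cue for a finite verb: a common auxiliary or a verb-like ending
    has_verb_cue = bool(re.search(
        r"\b(is|are|was|were|has|have|returns?|sets?|computes?|\w+ed)\b", text, re.I))
    return has_terminal_punct and has_verb_cue

print(looks_like_sentence("// This routine returns the head of the queue."))  # True
print(looks_like_sentence("// queue head ptr"))                               # False
```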
Information & Software Technology | 2000
Wei Li; Letha H. Etzkorn; Carl G. Davis; John R. Talburt
Software metrics have been used to measure software artifacts statically: measurements are taken after the artifacts are created. In this study, three metrics, System Design Instability (SDI), Class Implementation Instability (CII), and System Implementation Instability (SII), are used to measure object-oriented (OO) software evolution. The metrics are used to track the evolution of an OO system in an empirical study. We found that once an OO project starts, the metrics give good indications of project progress, e.g., how mature the design and implementation are. This information can be used to adjust the project plan in real time. We also performed a study of design instability that examines how the implementation of a class can affect its design. This study determines that some aspects of OO design are independent of implementation, while other aspects depend on implementation.
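An instability measure of this general kind can be sketched as the fraction of classes added, removed, or changed between two successive snapshots. The ratio below is a placeholder illustration and not necessarily identical to the SDI/CII/SII definitions in the paper.

```python
# Illustrative evolution-instability measure: fraction of classes added,
# removed, or changed between two snapshots. A placeholder, not the paper's formula.
def design_instability(previous: dict[str, str], current: dict[str, str]) -> float:
    """previous/current map class name -> a signature (e.g. hash of its interface)."""
    all_classes = set(previous) | set(current)
    changed = sum(
        1 for name in all_classes
        if previous.get(name) != current.get(name)  # added, removed, or modified
    )
    return changed / len(all_classes) if all_classes else 0.0

release_3 = {"Queue": "a1", "Stack": "b2", "Parser": "c3"}
release_4 = {"Queue": "a1", "Stack": "b9", "Lexer": "d4"}  # Stack changed, Parser dropped, Lexer added
print(f"instability ~ {design_instability(release_3, release_4):.2f}")  # 0.75
```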
Knowledge-Based Systems | 1996
Letha H. Etzkorn; Carl G. Davis
An automated tool to assist in the understanding of legacy code can be useful both in software reuse and in software maintenance. Most previous work in this area has concentrated on functionally oriented code. Although object-oriented code has been shown to be inherently more reusable than functionally oriented code, in many cases the eventual reuse of the object-oriented code was not considered during development. This paper describes an approach that makes preparing existing object-oriented code for reuse easier and more quantifiable. The problem includes two primary sub-problems: understanding the function of components, and applying an appropriate set of metrics to the components to quantify reusability. The research described in this paper addresses an approach to the automated understanding of object-oriented code. A knowledge-based system that implements the approach is described. The paper also briefly discusses the formulation of reusability metrics for object-oriented code.
Workshop on Program Comprehension | 1994
Letha H. Etzkorn; Carl G. Davis
Object-oriented code is considered to be inherently more reusable than functional-decomposition code; however, object-oriented code can suffer from a program-understanding standpoint, since good object-oriented style seems to require a large number of small methods. Hence the code for a particular task may be scattered widely, and good semantics-based tools are necessary. This paper describes an approach to object-oriented code understanding that focuses largely on informal linguistic aspects of code, such as comments and identifiers.
International Conference on Engineering of Complex Computer Systems | 1996
Letha H. Etzkorn; Carl G. Davis; Lisa L. Bowen; David B. Etzkorn; L. W. Lewis; Bradley L. Vinz; Janet C. Wolf
Software reuse has been demonstrated to increase productivity, reduce costs and improve software quality. Most research in the area of extraction of reusable code from legacy code has concentrated on code created in the functional decomposition paradigm. However, in recent years much object-oriented code has been written. This paper describes a knowledge-based approach to the identification of reusable components in object-oriented legacy code.