Allen S. Parrish
University of Alabama
Publications
Featured research published by Allen S. Parrish.
Journal of Systems and Software | 1993
Allen S. Parrish; Richard B. Borie; David Cordes
Classes represent the fundamental building blocks in object-oriented software development. Several techniques have been proposed for testing classes. However, most of these techniques are heavily specification based, in the sense that they demand the existence of formal specifications for the module. In addition, most existing techniques generate test cases at random rather than systematically. We present some test case generation techniques that are based entirely on class implementation, involve systematic generation of test cases, and are fully automated. Our techniques are based on an adaptation of existing white-box, flow graph-based techniques for unit testing conventional procedures and functions. We also provide a general conceptual framework to support the modeling of classes using flow graphs. Our framework clarifies the fundamental definitions and concepts associated with this method for modeling classes.
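The paper's flow-graph construction is not reproduced here. As a minimal sketch of the general idea of implementation-based, systematic (non-random) test sequence generation for a class, the following Python fragment enumerates bounded method-call sequences over a small example class and checks a simple invariant; the IntStack class, the operation list, and the invariant are illustrative assumptions, not taken from the paper.

# Illustrative sketch only: systematic, implementation-driven generation of
# method-call sequences for a class under test. IntStack, the operation list,
# and the invariant are hypothetical examples, not from the paper.
from itertools import product

class IntStack:
    def __init__(self):
        self._items = []
    def push(self, x=1):
        self._items.append(x)
    def pop(self):
        return self._items.pop() if self._items else None
    def size(self):
        return len(self._items)

OPS = ["push", "pop"]  # public mutators identified from the implementation

def run_sequence(seq):
    """Execute one method sequence and check a simple invariant (size >= 0)."""
    s = IntStack()
    for op in seq:
        getattr(s, op)()
        assert s.size() >= 0, f"invariant violated after {seq}"
    return s.size()

# Systematically enumerate all sequences up to length 3 (no randomness).
for length in range(1, 4):
    for seq in product(OPS, repeat=length):
        print(seq, "-> final size", run_sequence(seq))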
IEEE Transactions on Software Engineering | 1991
Allen S. Parrish; Stuart H. Zweben
Test data adequacy criteria are standards that can be applied to decide if enough testing has been performed. Previous research in software testing has suggested 11 fundamental properties which reasonable criteria should satisfy if the criteria make use of the structure of the program being tested. It is shown that there are several dependencies among the 11 properties, making them questionable as a set of fundamental properties, and that the statements of the properties can be generalized so that they can be appropriately analyzed with respect to criteria that do not necessarily make use of the program's structure. An analysis that shows the relationships among the properties with respect to different classes of criteria, which utilize the program structure and the specification in different ways, is discussed. It is shown how the properties must be stated differently under the two models in order to maintain consistency, that the dependencies are largely a result of five very weak existential properties, and that by modifying three of the properties, these weaknesses can be eliminated. The result is a reduced set of seven properties, each of which is strong from a mathematical perspective.
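The properties analyzed in the paper are stated abstractly; as a concrete point of reference only, the sketch below encodes one structure-based adequacy criterion (statement coverage) for a toy program and checks a monotonicity-style property (a superset of an adequate test set remains adequate). The toy program and test sets are assumptions made for illustration.

# Illustration: a statement-coverage adequacy criterion and a check of the
# monotonicity-style property "if T is adequate, any superset of T is too".
# The toy program and test sets are assumptions, not from the paper.
import sys

def toy_program(x):          # program under test
    if x > 0:
        y = x * 2
    else:
        y = -x
    return y

def covered_lines(test_inputs):
    """Trace which source lines of toy_program execute for the given inputs."""
    hit = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is toy_program.__code__:
            hit.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for x in test_inputs:
            toy_program(x)
    finally:
        sys.settrace(None)
    return hit

ALL_LINES = covered_lines([1, -1])   # exercising both branches covers every line

def adequate(test_inputs):
    """Statement-coverage adequacy: every executable line is covered."""
    return covered_lines(test_inputs) >= ALL_LINES

T = [3, -4]
T_superset = T + [0, 7]
print("T adequate:", adequate(T), "| superset adequate:", adequate(T_superset))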
Journal of Systems and Software | 2001
Allen S. Parrish; Brandon Dixon; David Cordes
We use the term component-based software deployment (CBSD) to refer to the process of deploying a software application in a component-based format. In this paper, we propose a formal conceptual framework for CBSD. This framework allows us to articulate various strategies for deploying component-based software. In addition, the framework permits us to express conditions under which various forms of CBSD are both successful (the deployed application works) and safe (no existing applications are damaged).
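The formal framework itself is not reproduced here. As a loose illustration of the "successful" and "safe" conditions named in the abstract, the sketch below checks a hypothetical component registry: a deployment is successful if all of its requirements are satisfied after installation, and safe if it never downgrades a component that installed applications already depend on. The registry format, version rule, and component names are all assumptions.

# Hedged illustration of "successful" vs. "safe" component-based deployment.
# The registry format, version rule, and component names are hypothetical.
from dataclasses import dataclass

@dataclass
class Deployment:
    app: str
    installs: dict          # component -> version this deployment ships
    requires: dict          # component -> minimum version it needs

# Components already on the machine, and which installed apps depend on them.
installed = {"parser": 2, "netlib": 1}
dependents = {"parser": ["editor"], "netlib": ["editor", "mailer"]}

def successful(d: Deployment) -> bool:
    """The deployed application works: every requirement is satisfied afterward."""
    after = {**installed, **d.installs}
    return all(after.get(c, 0) >= v for c, v in d.requires.items())

def safe(d: Deployment) -> bool:
    """No existing application is damaged: shared components are never downgraded."""
    return all(v >= installed.get(c, 0)
               for c, v in d.installs.items()
               if dependents.get(c))

new_app = Deployment(app="viewer",
                     installs={"parser": 3},        # upgrades a shared component
                     requires={"parser": 3, "netlib": 1})
print("successful:", successful(new_app), "| safe:", safe(new_app))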
IEEE Software | 2000
Joanne E. Hale; Allen S. Parrish; Brandon Dixon; Randy K. Smith
In software engineering, team task assignments appear to have a significant potential impact on a project's overall success. The authors propose task assignment effort adjustment factors that can help tune existing estimation models. They show significant improvements in the predictive abilities of both Cocomo I and II by enhancing them with these factors.
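The adjustment factors themselves are defined in the article; the sketch below only shows where such a factor would plug into a Cocomo-style effort equation, effort = a * KLOC^b * product of effort multipliers. The coefficients follow the classic organic-mode Cocomo 81 form, and the example adjustment value is a made-up placeholder.

# Sketch of tuning a Cocomo-style estimate with an extra effort multiplier.
# The task-assignment adjustment value (1.12) is a hypothetical placeholder.
from math import prod

def cocomo_effort(kloc: float, multipliers=()) -> float:
    """Cocomo-style effort in person-months: a * KLOC**b * product of multipliers."""
    a, b = 2.4, 1.05                      # classic organic-mode nominal coefficients
    return a * (kloc ** b) * prod(multipliers)

baseline = cocomo_effort(32.0)
task_assignment_factor = 1.12             # hypothetical team/task-fit adjustment
adjusted = cocomo_effort(32.0, multipliers=(task_assignment_factor,))
print(f"baseline: {baseline:.1f} PM, adjusted: {adjusted:.1f} PM")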
Information Systems | 1999
Nenad Jukic; Susan V. Vrbsky; Allen S. Parrish; Brandon Dixon; Boris Jukic
Multilevel relations, based on the current multilevel secure (MLS) relational data models, can present a user with information that is difficult to interpret and may display an inconsistent outlook about the views of other users. Such ambiguity is due to the lack of a comprehensive method for asserting and interpreting beliefs about information at lower security levels. In this paper we present a belief-consistent MLS relational database model which provides an unambiguous interpretation of all visible information and gives the user access to the beliefs of users at lower security levels, neither of which was possible in any of the existing models. We identify different beliefs that can be held by users at higher security levels about information at lower security levels, and introduce a mechanism for asserting beliefs about all accessible tuples. This mechanism provides every user with an unambiguous interpretation of all viewable information and presents a consistent account of the views at all levels visible to the user. In order to implement this assertion mechanism, new database operations, such as verify true and verify false, are presented. We specify the constraints for the write operations, such as update and delete, that maintain belief consistency and redefine the relational algebra operations, such as select, project, union, difference and join.
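The belief assertions and the verify true / verify false operations are defined formally in the paper; the sketch below is only a rough illustration of the underlying idea, using hypothetical tuples labeled with security levels and a per-level belief table. The levels, relation, and helper functions are assumptions, not the paper's model.

# Rough illustration of belief assertions over multilevel tuples.
# Levels, the relation, and the helpers are hypothetical; the paper's
# verify true / verify false operations are defined formally there.
LEVELS = {"U": 0, "S": 1, "TS": 2}   # Unclassified < Secret < Top Secret

# Multilevel relation: each tuple carries the level at which it was inserted.
employees = [
    {"name": "smith", "mission": "training", "level": "U"},
    {"name": "smith", "mission": "recon",    "level": "S"},
]

# beliefs[user_level] maps a tuple index to True ("believed") or False.
beliefs = {"S": {1: True}, "TS": {}}

def visible(user_level):
    """A user sees tuples at or below their own security level."""
    return [t for t in employees if LEVELS[t["level"]] <= LEVELS[user_level]]

def assert_belief(user_level, tuple_index, holds: bool):
    """A higher-level user records a belief about an accessible lower-level tuple."""
    t = employees[tuple_index]
    if LEVELS[t["level"]] <= LEVELS[user_level]:
        beliefs[user_level][tuple_index] = holds

assert_belief("TS", 0, False)   # the TS user disbelieves the U-level cover story
print("S-level view:", visible("S"))
print("TS-level beliefs:", beliefs["TS"])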
IEEE Transactions on Software Engineering | 1993
Allen S. Parrish; Stuart H. Zweben
A software test data adequacy criterion is a means for determining whether a test set is sufficient, or adequate, for testing a given program. A set of properties that useful adequacy criteria should satisfy has been previously proposed (E. Weyuker, 1986; 1988). The authors identify some additional properties of useful adequacy criteria that are appropriate under certain realistic models of testing. They discuss modifications to the formal definitions of certain popular adequacy criteria to make the criteria consistent with these additional properties.
IEEE Transactions on Software Engineering | 1995
Allen S. Parrish; Stuart H. Zweben
The all-du-paths data flow testing criterion was designed to be more demanding than the all-uses criterion, which itself was designed to be more demanding than the all-edges criterion. However, formal comparison metrics developed within the testing community have failed to validate these relationships without requiring restrictive or undecidable assumptions regarding the universe of programs to which the criteria apply. We show that the formal relationships among these criteria can be made consistent with their intended relative strengths, without making restrictive or undecidable assumptions.
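As a point of reference for the three criteria being compared, the toy function below lists (in comments) its edges, def-use pairs, and du-paths; the example is an illustrative assumption, not drawn from the paper.

# Toy example of the structures the three criteria are defined over.
# The function and its annotations are illustrative assumptions.
def scale(x, limit):
    y = x * 2            # def of y            (node 1)
    if y > limit:        # c-use of y, limit   (node 2; edges 2->3, 2->4)
        y = limit        # redefinition of y   (node 3)
    return y             # c-use of y          (node 4)

# all-edges   : exercise edges 1->2, 2->3, 3->4, 2->4
# all-uses    : for each def-use pair, cover some def-clear path, e.g.
#               (def y @1, use y @2), (def y @1, use y @4 via 2->4),
#               (def y @3, use y @4), plus the uses of x and limit
# all-du-paths: cover *every* def-clear simple path for each def-use pair
tests = [(1, 10), (9, 10)]   # takes the false and true branches respectively
print([scale(*t) for t in tests])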
Knowledge Based Systems | 2011
Li Ding; Dana Steil; Brandon Dixon; Allen S. Parrish; David B. Brown
Strong ties play a crucial role in transmitting sensitive information in social networks, especially in the criminal justice domain. However, large social networks containing many entities and relations may also contain a large amount of noisy data. Thus, identifying strong ties accurately and efficiently within such a network poses a major challenge. This paper presents a novel approach to address the noise problem. We transform the original social network graph into a relation context-oriented edge-dual graph by adding new nodes to the original graph based on abstracting the relation contexts from the original edges (relations). Then we compute the local k-connectivity between two given nodes. This produces a measure of the robustness of the relations. To evaluate the correctness and the efficiency of this measure, we implemented a system integrating a total of 450 GB of data from several different data sources. The discovered social network contains 4,906,460 nodes (individuals) and 211,403,212 edges. Our experiments are based on 700 co-offenders involved in robbery crimes. The experimental results show that most strong ties are formed with k ≥ 2.
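The relation context-oriented edge-dual transformation and the k ≥ 2 threshold are the paper's contribution; the sketch below only shows the generic building block such a measure rests on, local node connectivity between two nodes, using networkx. The toy graph is an assumption.

# Sketch of the generic building block: local k-connectivity between two nodes.
# The toy graph is hypothetical and the paper's edge-dual transformation is not
# reproduced here.
import networkx as nx
from networkx.algorithms.connectivity import local_node_connectivity

G = nx.Graph()
# Two offenders linked through several independent co-offense contexts.
G.add_edges_from([
    ("A", "case1"), ("case1", "B"),
    ("A", "case2"), ("case2", "B"),
    ("A", "case3"), ("case3", "B"),
    ("A", "noise"),                 # a dangling, noisy relation
])

k = local_node_connectivity(G, "A", "B")
print("local k-connectivity between A and B:", k)   # 3 node-independent paths
print("strong tie (k >= 2):", k >= 2)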
ACM Symposium on Applied Computing | 2005
Huanjing Wang; Allen S. Parrish; Randy K. Smith; Susan V. Vrbsky
Variable ranking and feature selection are important concepts in data mining and machine learning. This paper introduces a new variable ranking technique named Sum Max Gain Ratio (SMGR). The new technique is evaluated within the domain of traffic accident data and against a more generalized dataset. In certain cases, SMGR is empirically shown to provide similar results to established approaches with significantly better runtime performance.
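SMGR itself is not specified in the abstract and is not reproduced here. For context only, the sketch below computes the standard gain ratio (information gain divided by split information) that such ranking techniques build on; the tiny dataset and attribute names are made up.

# Background sketch: standard gain ratio for ranking a discrete attribute.
# The dataset and attribute names are hypothetical; SMGR is the paper's technique.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, attr, target):
    labels = [r[target] for r in rows]
    base = entropy(labels)
    info, split_info = 0.0, 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        p = len(subset) / len(rows)
        info += p * entropy(subset)
        split_info -= p * log2(p)
    gain = base - info
    return gain / split_info if split_info > 0 else 0.0

accidents = [
    {"road": "wet", "speeding": "yes", "severe": "yes"},
    {"road": "wet", "speeding": "no",  "severe": "no"},
    {"road": "dry", "speeding": "yes", "severe": "yes"},
    {"road": "dry", "speeding": "no",  "severe": "no"},
]
for attr in ("road", "speeding"):
    print(attr, "->", round(gain_ratio(accidents, attr, "severe"), 3))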
Frontiers in Education Conference | 1997
David Cordes; Allen S. Parrish; Brandon Dixon; Richard B. Borie; Jeff Jackson; Patrick T. Gaughan
The University of Alabama has developed an integrated first-year curriculum for engineering students consisting primarily of an integrated block of mathematics, physics, chemistry, and engineering design. While this curriculum is highly appropriate (and successful) for most engineering majors, it does not meet the needs of a computer engineering (or computer science) major nearly as well. Recognizing this, the Departments of Computer Science and Electrical and Computer Engineering received funding under NSF's Course and Curriculum Development Program to generate an integrated introduction to the discipline of computing. The revised curriculum provides a five-hour block of instruction (each semester) in computer hardware, software development, and discrete mathematics. At the end of this three-semester sequence, students will have completed the equivalent of CS I and CS II, a digital logic course, an introductory sequence in computer organization and assembly language, and a discrete mathematics course. The revised curriculum presents these same materials in an integrated block of instruction. As one simple example, the instruction of basic data types in the software course (encountered early in the freshman year) is accompanied by machine representation of numbers (signed binary, one's and two's complement) in the hardware course, and by arithmetic in different bases in the discrete mathematics course. It also integrates cleanly with the Foundation Coalition's freshman year, and provides a block of instruction that focuses directly upon the discipline of computing.
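The two's complement representation mentioned in the cross-course example above is easy to show concretely; the snippet below is only an illustration (not material from the curriculum) printing an 8-bit signed encoding alongside the same values in other bases.

# Illustration of the cross-course example: two's complement and number bases.
# The bit width and sample values are arbitrary choices, not from the curriculum.
def twos_complement(value: int, bits: int = 8) -> str:
    """Return the bit pattern of a signed integer in two's complement form."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

for v in (5, -5):
    print(f"{v:4d}  two's complement: {twos_complement(v)}  "
          f"binary magnitude: {bin(abs(v))}  hex: {hex(v & 0xFF)}")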