Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where James D. Kiper is active.

Publication


Featured research published by James D. Kiper.


ACM Transactions on Software Engineering and Methodology | 1992

Structural testing of rule-based expert systems

James D. Kiper

Testing of rule-based expert systems has become a high priority for many organizations as the use of such systems proliferates. Traditional software testing techniques apply to some components of rule-based systems, e.g., the inference engine. However, to structurally test the rule base component requires new techniques or adaptations of existing ones. This paper describes one such adaptation: an extension of data flow path selection in which a graphical representation of a rule base is defined and evaluated. This graphical form, called a logical path graph, captures logical paths through a rule base. These logical paths create precisely the abstractions needed in the testing process. An algorithm for the construction of logical path graphs is presented and analyzed.
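To illustrate the idea behind a logical path graph, the following sketch (not the paper's algorithm) links rules whose conclusions can satisfy other rules' conditions and enumerates the resulting chains of firings as candidate test requirements; the `Rule` representation and the sample rules are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative stand-in for a rule: fires when all conditions hold, asserts its conclusions.
@dataclass(frozen=True)
class Rule:
    name: str
    conditions: frozenset
    conclusions: frozenset

def build_dependency_graph(rules):
    """Edge r1 -> r2 when a conclusion of r1 can satisfy a condition of r2."""
    return {
        r1.name: [r2.name for r2 in rules
                  if r1 is not r2 and r1.conclusions & r2.conditions]
        for r1 in rules
    }

def logical_paths(graph, start):
    """Enumerate acyclic chains of rule firings starting at `start`."""
    paths = []
    def walk(node, seen):
        nexts = [n for n in graph.get(node, []) if n not in seen]
        if not nexts:
            paths.append(seen + [node])
            return
        for n in nexts:
            walk(n, seen + [node])
    walk(start, [])
    return paths

rules = [
    Rule("r1", frozenset({"fever", "cough"}), frozenset({"flu_suspected"})),
    Rule("r2", frozenset({"flu_suspected"}), frozenset({"order_test"})),
    Rule("r3", frozenset({"order_test"}), frozenset({"notify_patient"})),
]
graph = build_dependency_graph(rules)
for path in logical_paths(graph, "r1"):
    print(" -> ".join(path))   # each chain is one logical path to cover in testing
```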


2006 First International Workshop on Requirements Engineering Visualization (REV'06 - RE'06 Workshop) | 2006

Experiences using Visualization Techniques to Present Requirements, Risks to Them, and Options for Risk Mitigation

Martin S. Feather; Steven L. Cornford; James D. Kiper; Tim Menzies

For several years we have been employing a risk-based decision process to guide development and application of advanced technologies, and for research and technology portfolio planning. The process is supported by custom software, in which visualization plays an important role. During requirements gathering, visualization is used to help scrutinize the status (completeness, extent) of the information. During decision making based on the gathered information, visualization is used to help decision makers understand the space of options and their consequences. In this paper we summarize the visualization capabilities that we have employed, indicating when and how they have proven useful.
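As a rough illustration of the kind of option-space view the abstract describes, the following matplotlib sketch plots hypothetical mitigation portfolios by cost and residual risk; it is not the custom tool used in the paper, and all labels and values are made up.

```python
import matplotlib.pyplot as plt

# Hypothetical portfolios of risk mitigations: (label, cost, residual risk).
options = [
    ("no mitigations", 0,  9.0),
    ("reviews only",   20, 6.5),
    ("reviews + test", 55, 3.2),
    ("full program",   90, 1.1),
]

labels, costs, risks = zip(*options)
plt.scatter(costs, risks)
for label, x, y in options:
    plt.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
plt.xlabel("Cost of selected mitigations")
plt.ylabel("Residual risk to requirements")
plt.title("Option space: each point is one candidate mitigation portfolio")
plt.show()
```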


Journal of Visual Languages and Computing | 1997

Criteria for Evaluation of Visual Programming Languages

James D. Kiper; Elizabeth V. Howard; Chuck Ames

Interest in visual programming languages has increased as graphic support in hardware and software has made display and manipulation of visual images, icons, diagrams, and forms reasonable to consider. In this paper, we present a set of evaluation criteria and associated metrics to judge visual programming languages. The five criteria, visual nature, functionality, ease of comprehension, paradigm support, and scalability, are intended to capture the essence of a general purpose visual programming language. These criteria are supplemented with a set of subjective metrics, resulting in an evaluation method that can be used to assess the quality of an individual visual programming language, or to compare among elements of a set of such languages.
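A minimal sketch of how such criteria-based scores could be combined to compare languages; the five criteria are taken from the abstract, but the 0-10 scoring scale, the equal default weights, and the sample values are assumptions for illustration only.

```python
CRITERIA = ["visual nature", "functionality", "ease of comprehension",
            "paradigm support", "scalability"]

def evaluate(scores, weights=None):
    """Combine per-criterion scores (e.g. 0-10 from subjective metrics)
    into one weighted figure for comparing visual programming languages."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(weights[c] * scores[c] for c in CRITERIA)
    return total / sum(weights.values())

lang_a = {"visual nature": 8, "functionality": 6, "ease of comprehension": 7,
          "paradigm support": 5, "scalability": 4}
lang_b = {"visual nature": 5, "functionality": 8, "ease of comprehension": 6,
          "paradigm support": 7, "scalability": 7}
print(evaluate(lang_a), evaluate(lang_b))  # higher score = better overall fit
```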


IEEE Software | 2008

A Broad, Quantitative Model for Making Early Requirements Decisions

Martin S. Feather; Steven L. Cornford; Kenneth A. Hicks; James D. Kiper; Tim Menzies

Although detailed information is typically scarce during a project's early phases, developers frequently need to make key decisions about trade-offs among quality requirements. Developers in many fields, including systems, hardware, and software engineering, routinely make such decisions on the basis of a shallow analysis of the situation or on past experience, which might be irrelevant to the current situation. As a consequence, developers can get locked into what is ultimately an inferior design or pay a significant price to reverse such earlier decisions later in the process. By coarsely quantifying relevant factors, a risk-assessment model helps hardware and software engineers make trade-offs among quality requirements early in development.
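A toy sketch of a coarse quantitative model in the spirit the abstract describes: requirements carry weights, risks erode them with some likelihood, and selected mitigations reduce that likelihood at a cost. The structure, names, and numbers are illustrative assumptions, not the authors' model.

```python
# Toy calculation: requirements carry weights, risks erode them,
# and each selected mitigation removes a fraction of a risk's likelihood.
requirements = {"throughput": 0.5, "availability": 0.3, "maintainability": 0.2}

# risk -> (likelihood, {requirement: fraction of that requirement lost if the risk occurs})
risks = {
    "memory_leak":    (0.4, {"availability": 0.8, "maintainability": 0.2}),
    "slow_algorithm": (0.6, {"throughput": 0.7}),
}

# mitigation -> (cost, {risk: fraction of likelihood removed})
mitigations = {
    "code_review": (10, {"memory_leak": 0.5}),
    "profiling":   (15, {"slow_algorithm": 0.7}),
}

def attained_benefit(selected):
    """Expected fraction of weighted requirements preserved for one choice of mitigations."""
    benefit = sum(requirements.values())
    for risk, (likelihood, impacts) in risks.items():
        for m in selected:
            likelihood *= 1.0 - mitigations[m][1].get(risk, 0.0)
        benefit -= likelihood * sum(requirements[r] * frac for r, frac in impacts.items())
    return benefit

for choice in ([], ["code_review"], ["code_review", "profiling"]):
    cost = sum(mitigations[m][0] for m in choice)
    print(choice, "cost:", cost, "benefit:", round(attained_benefit(choice), 3))
```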


Annals of Software Engineering | 2003

Condensing Uncertainty via Incremental Treatment Learning

Tim Menzies; Eliza Chiang; Martin S. Feather; Ying Hu; James D. Kiper

Models constrain the range of possible behaviors defined for a domain. When parts of a model are uncertain, the possible behaviors may be a data cloud: i.e. an overwhelming range of possibilities that bewilder an analyst. Faced with large data clouds, it is hard to demonstrate that any particular decision leads to a particular outcome. Even if we can’t make definite decisions from such models, it is possible to find decisions that reduce the variance of values within a data cloud. Also, it is possible to change the range of these future behaviors such that the cloud condenses to some improved mode. Our approach uses two tools. Firstly, a model simulator is constructed that knows the range of possible values for uncertain parameters. Secondly, the TAR2 treatment learner uses the output from the simulator to incrementally learn better constraints. In our incremental treatment learning cycle, users review newly discovered treatments before they are added to a growing pool of constraints used by the model simulator.
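A minimal sketch of the simulate-then-constrain loop the abstract describes, assuming a toy model with two uncertain inputs. Pinning a parameter plays the role of a treatment, and the ranking step is a simple stand-in for TAR2, not the actual learner; in the incremental cycle, a treatment accepted by the user would be added to the fixed set and the loop repeated.

```python
import random
import statistics

def simulate(fixed=None, runs=2000):
    """Toy model with uncertain inputs; `fixed` pins some of them (a 'treatment')."""
    fixed = fixed or {}
    outcomes = []
    for _ in range(runs):
        x = fixed.get("x", random.uniform(0, 10))   # uncertain parameter
        y = fixed.get("y", random.choice([0, 1]))   # uncertain design decision
        noise = random.gauss(0, 1)
        outcomes.append(3 * y + 0.5 * x + noise)    # the behavior we care about
    return outcomes

baseline = simulate()
candidates = [{"y": 1}, {"y": 0}, {"x": 9.0}]

# Rank candidate treatments by how far they shift the mean and condense the spread.
for t in candidates:
    out = simulate(fixed=t)
    print(t,
          "mean:", round(statistics.mean(out), 2),
          "stdev:", round(statistics.stdev(out), 2))
print("baseline mean:", round(statistics.mean(baseline), 2),
      "stdev:", round(statistics.stdev(baseline), 2))
```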


international workshop on software specification and design | 2000

Design and Development Assessment

Steven L. Cornford; Martin S. Feather; John C. Kelly; Timothy W. Larson; Burton Sigal; James D. Kiper

An assessment methodology is described and illustrated. This methodology separates assessment into the following phases: (1) elicitation of requirements; (2) elicitation of failure modes and their impact (risk of loss of requirements); (3) elicitation of failure mode mitigations and their effectiveness (degree of reduction of failure modes); and (4) calculation of outstanding risk taking the mitigations into account. This methodology, with accompanying tool support, has been applied to assist in planning the engineering development of advanced technologies. Design assessment features prominently in these applications. The overall approach is also applicable to development assessment (of the development process to be followed to implement the design). Both design and development assessments are demonstrated on hypothetical scenarios based on the workshop's TRMCS case study. TRMCS information has been entered into the assessment support tool, and serves as illustration throughout.
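A small sketch of the phase-4 calculation, under the assumption that each applied mitigation multiplicatively reduces a failure mode's likelihood; the failure modes, effectiveness values, and numbers are hypothetical.

```python
# Phase-4 sketch: outstanding risk once mitigations are taken into account.
# Residual likelihood = likelihood * product of (1 - effectiveness) over applied mitigations.
failure_modes = {
    "sensor_dropout": {"impact": 0.7, "likelihood": 0.3, "mitigations": [0.6, 0.4]},
    "late_delivery":  {"impact": 0.4, "likelihood": 0.5, "mitigations": [0.5]},
}

def outstanding_risk(modes):
    total = 0.0
    for fm in modes.values():
        residual = fm["likelihood"]
        for effectiveness in fm["mitigations"]:
            residual *= 1.0 - effectiveness
        total += fm["impact"] * residual
    return total

print("outstanding risk:", round(outstanding_risk(failure_modes), 3))
```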


Empirical Software Engineering | 1997

Visual Depiction of Decision Statements: What is Best for Programmers and Non-Programmers?

James D. Kiper; Brent Auernheimer; Charles K. Ames

This paper reports the results of two experiments investigating differences in comprehensibility of textual and graphical notations for representing decision statements. The first experiment was a replication of a prior experiment that found textual notations to be better than particular graphical notations. After replicating this study, two other hypotheses were investigated in a second experiment. Our first claim is that graphics may be better for technical, non-programmers than they are for programmers because of the great amount of experience that programmers have with textual notations in programming languages. The second is that modifications to graphical forms may improve their usefulness. The results support both of these hypotheses.


automated software engineering | 2001

Better reasoning about software engineering activities

Tim Menzies; James D. Kiper

Software management oracles often contain numerous subjective features. At each subjective point, a range of behaviors is possible. Stochastic simulation samples a subset of the possible behaviors. After many such stochastic simulations, the TAR2 treatment learner can find control actions that have (usually) the same impact despite the subjectivity of the oracle.
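A minimal sketch of the sampling idea, assuming a toy management oracle with two subjective calibration factors; it checks whether a single control action's impact holds up across many samples of those subjective points.

```python
import random

def project_effort(action_applied, subjective):
    """Toy management oracle: effort depends on subjective calibration factors."""
    base = 100 * subjective["complexity"]
    review_savings = 30 * subjective["review_payoff"] if action_applied else 0
    return base - review_savings

# Sample the subjective points many times and compare the action's impact in each sample.
impacts = []
for _ in range(1000):
    subjective = {"complexity": random.uniform(0.8, 1.5),
                  "review_payoff": random.uniform(0.5, 1.0)}
    impacts.append(project_effort(False, subjective) - project_effort(True, subjective))

print("action helped in", sum(i > 0 for i in impacts), "of 1000 samples")
```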


conference on software engineering education and training | 2000

Technology transfer issues for formal methods of software specification

Ken Abernethy; John C. Kelly; Ann E. Kelley Sobel; James D. Kiper; John D. Powell

Accurate and complete requirements specifications are crucial for the design and implementation of high-quality software. Unfortunately, the articulation and verification of software system requirements remains one of the most difficult and error-prone tasks in the software development lifecycle. The use of formal methods, based on mathematical logic and discrete mathematics, holds promise for improving the reliability of requirements articulation and modeling. However, formal modeling and reasoning about requirements has not typically been a part of the software analyst's education and training, and because the learning curve for the use of these methods is nontrivial, adoption of formal methods has proceeded slowly. As a consequence, technology transfer is a significant issue in the use of formal methods. In this paper, several efforts undertaken at NASA aimed at increasing the accessibility of formal methods are described. These include the production of the following: two NASA guidebooks on the concepts and applications of formal methods, a body of case studies in the application of formal methods to the specification of requirements for actual NASA projects, and course materials for a professional development course introducing formal methods and their application to the analysis and design of software-intensive systems. In addition, efforts undertaken at two universities to integrate instruction on formal methods based on these NASA materials into the computer science and software engineering curricula are described.


Numeracy | 2011

Development of an Assessment of Quantitative Literacy for Miami University

Rose Marie Ward; Monica C. Schneider; James D. Kiper

Quantitative Literacy is a competence as important as general literacy; yet, while writing requirements are seemingly ubiquitous across the college curriculum, quantitative literacy requirements are not. The current project provides preliminary evidence of the reliability and validity of a quantitative literacy measure suitable for delivery online. A sample of 188 undergraduate students from Miami University, a midsize university in the midwestern U.S., participated in the current study. Scores on the measure were inversely related to statistical/mathematical anxiety measures, directly related to subjective assessment of numeracy, and did not differ across gender or year in school. The resulting measure provides a reasonable tool and method of assessing quantitative literacy at a midsize university.

Collaboration


Dive into James D. Kiper's collaboration.

Top Co-Authors

Martin S. Feather (California Institute of Technology)

Tim Menzies (North Carolina State University)

Steven L. Cornford (California Institute of Technology)

Brent Auernheimer (California State University)

Gursimran S. Walia (North Dakota State University)

John C. Kelly (California Institute of Technology)

Charles K. Ames (California Institute of Technology)