Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Keith W. Miller is active.

Publications


Featured research published by Keith W. Miller.


IEEE Software | 1995

Software testability: the new verification

Jeffrey M. Voas; Keith W. Miller

Most verification is concerned with finding incorrect code. This view looks instead at the probability that the code will fail if it is faulty. The authors present the benefits of their approach, describe how to design for testability, and show how to measure it through sensitivity analysis.
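
The key claim lends itself to a small illustration. The sketch below is not from the paper; the function, the injected fault, and the input range are invented. It estimates testability in the sense described above: the fraction of random tests on which a deliberately planted fault actually produces a visible failure.

```python
import random

def original(x):
    # Reference implementation: clamp a value into [0, 100].
    return min(max(x, 0), 100)

def mutated(x):
    # Hypothetical faulty variant: wrong upper bound (an injected fault).
    return min(max(x, 0), 10)

def estimate_testability(good, bad, trials=10_000):
    """Fraction of random inputs on which the planted fault causes a visible failure."""
    failures = sum(good(x) != bad(x)
                   for x in (random.randint(-200, 200) for _ in range(trials)))
    return failures / trials

if __name__ == "__main__":
    # A value near 1.0 means random testing is likely to expose the fault;
    # a value near 0.0 means the code can hide faults from testing.
    print(f"estimated testability: {estimate_testability(original, mutated):.3f}")
```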


IT Professional | 2012

BYOD: Security and Privacy Considerations

Keith W. Miller; Jeffrey M. Voas; George F. Hurlburt

Clearly, there are several important advantages for employees and employers when employees bring their own devices to work. But there are also significant concerns about security and privacy. Companies and individuals involved with, or thinking about getting involved with, BYOD should think carefully about the risks as well as the rewards.


IEEE Software | 1991

Predicting where faults can hide from testing

Jeffrey Voas; Larry J. Morell; Keith W. Miller

Sensitivity analysis, which estimates the probability that a program location can hide a failure-causing fault, is addressed. The concept of sensitivity is discussed, and a fault/failure model that accounts for fault location is presented. Sensitivity analysis requires that every location be analyzed for three properties: the probability of execution occurring, the probability of infection occurring, and the probability of propagation occurring. One type of analysis is required to handle each part of the fault/failure model. Each of these analyses is examined, and the interpretation of the resulting three sets of probability estimates for each location is discussed. The relationship of the approach to testability is considered.
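
As a rough illustration of the three-part analysis, the sketch below combines execution, infection, and propagation estimates into a single per-location sensitivity score. The per-location numbers are invented, and combining the three estimates by simple multiplication is an assumption made here for illustration, not the paper's procedure.

```python
from dataclasses import dataclass

@dataclass
class LocationEstimate:
    """Illustrative per-location estimates for the three parts of the fault/failure model."""
    execution: float    # probability the location is reached by a random input
    infection: float    # probability a fault there corrupts the data state when reached
    propagation: float  # probability a corrupted state propagates to visible output

    def sensitivity(self) -> float:
        # A fault can only cause a failure if the location is executed, the state is
        # infected, and the infection propagates, so multiply the estimated probabilities.
        return self.execution * self.infection * self.propagation

# Hypothetical numbers for two program locations.
hot_path = LocationEstimate(execution=0.95, infection=0.60, propagation=0.80)
guarded_branch = LocationEstimate(execution=0.05, infection=0.40, propagation=0.10)

for name, loc in [("hot_path", hot_path), ("guarded_branch", guarded_branch)]:
    print(f"{name}: sensitivity = {loc.sensitivity():.4f}")
# Low sensitivity (guarded_branch) flags a place where a fault could hide from testing.
```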


Communications of the ACM | 1997

Software engineering code of ethics

Don Gotterbarn; Keith W. Miller; Simon Rogerson

The Board of Governors of the IEEE Computer Society established a steering committee in May 1993 for evaluating, planning, and coordinating actions related to establishing software engineering as a profession. In that same year the ACM Council endorsed the establishment of a Commission on Software Engineering. By January 1994, both societies formed a joint steering committee “to establish the appropriate set(s) of standards for professional practice of software engineering upon which industrial decisions, professional certification, and educational curricula can be based.” To accomplish these tasks they made the following recommendation: that ACM and the IEEE Computer Society join forces to create a code of professional practices within our industry. Now, we ask for your comments.


Communications of the ACM | 1996

Implementing a tenth strand in the CS curriculum

C. Dianne Martin; Chuck Huff; Don Gotterbarn; Keith W. Miller

Computer science education should not drive a wedge between the social and the technical, but rather link both through the formal and informal curriculum [5]. Societal and technical aspects of computing are interdependent. Technical issues are best understood (and most effectively taught) in their social context, and the societal aspects of computing are best understood in the context of the underlying technical detail. Far from detracting from students' learning of technical information, including societal aspects in the computer science curriculum can enhance that learning, increase their motivation, and deepen their understanding [10]. Ethics and social responsibility are becoming increasingly important aspects of the computing profession. This is demonstrated by the publication of a number of recent articles in Communications related to these topics [2, 4, 5, 8, 13–15]. It is also demonstrated in the most recent revision to the computer science curriculum, Computing Curricula 1991 [1], which stated: “Undergraduate programs should provide an environment in which students are exposed to the ethical and societal issues associated with the computing field. This includes maintaining currency with recent technological and theoretical developments, upholding general professional standards, and ...” The second report from Project ImpactCS is given here, and a new required area of study, ethics and social impact, is proposed.


International Symposium on Software Reliability Engineering | 1992

Improving the software development process using testability research

Jeffrey M. Voas; Keith W. Miller

Software testability is the tendency of code to reveal existing faults during random testing. This paper proposes to take software testability predictions into account throughout the development process. These predictions can be made from formal specifications, design documents, and the code itself. The insight provided by software testability is valuable during design, coding, testing, and quality assurance. The authors believe that software testability analysis can play a crucial role in quantifying the likelihood that faults are not hiding after testing produces no failures for the current version.
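
One way to make the final claim concrete is a small Bayesian calculation: if any plausible fault would cause a failure on each random test with probability at least equal to the code's testability, then a long run of failure-free tests raises confidence that no fault is present. The framing, formula, and numbers below are illustrative assumptions, not the authors' model.

```python
def prob_no_hidden_fault(prior_fault, testability, failure_free_tests):
    """Posterior probability that no fault is present, given N failure-free random tests.

    Assumes any fault would cause each random test to fail with probability at least
    `testability`. This Bayesian framing is an illustration, not the paper's model.
    """
    # Likelihood of observing N failure-free tests if a fault IS present.
    survive_if_faulty = (1.0 - testability) ** failure_free_tests
    p_fault = prior_fault * survive_if_faulty
    p_clean = 1.0 - prior_fault
    return p_clean / (p_clean + p_fault)

# Hypothetical numbers: 50% prior chance of a fault, testability 0.01, 1000 passing tests.
print(f"{prob_no_hidden_fault(0.5, 0.01, 1000):.6f}")
```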


IT Professional | 2010

Free and Open Source Software

Keith W. Miller; Jeffrey M. Voas; Tom Costello

In this paper, free and open source software is discussed. Open source has been called an intellectual-property destroyer, with the claim that nothing could be worse for the software business and the intellectual-property business. Yet Microsoft has an official open source presence on the Web (www.microsoft.com/opensource), and in July 2010, Jean Paoli, the General Manager for Interoperability Strategy at Microsoft, delivered a keynote address at the O'Reilly Open Source Convention.


International Symposium on Software Reliability Engineering | 1994

Putting assertions in their place

Jeffrey M. Voas; Keith W. Miller

Assertions that are placed at each statement in a program can automatically monitor the internal computations of a program execution. However, the advantages of universal assertions come at a cost. A program with such extensive internal instrumentation will be slower than the same program without the instrumentation. Some of the assertions may be redundant. The task of instrumenting the code with correct assertions at each location is burdensome, and there is no guarantee that the assertions themselves will be correct. We advocate a middle ground between no assertions at all (the most common practice) and the theoretical ideal of assertions at every location. Our compromise is to place assertions only at locations where traditional testing is unlikely to uncover software faults. One type of testability measurement, sensitivity analysis, identifies locations where testing is unlikely to be effective.
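
A minimal sketch of that compromise, with invented details: assume a sensitivity analysis has flagged the incremental update inside the loop below as a location where testing is unlikely to expose a fault, so an assertion is placed only there and the rest of the function is left uninstrumented to limit overhead.

```python
def moving_average(values, window):
    # Hypothetical example: only the hard-to-test location gets an assertion.
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    total = sum(values[:window])
    averages = [total / window]
    for i in range(window, len(values)):
        total += values[i] - values[i - window]
        # Assertion at the flagged location: the incremental total must match a
        # from-scratch recomputation of the current window.
        assert abs(total - sum(values[i - window + 1 : i + 1])) < 1e-9, \
            "incremental window sum diverged"
        averages.append(total / window)
    return averages

print(moving_average([1, 2, 3, 4, 5, 6], 3))  # [2.0, 3.0, 4.0, 5.0]
```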


Ethics and Information Technology | 2008

The ethics of designing artificial agents

Frances S. Grodzinsky; Keith W. Miller; Marty J. Wolf

In their important paper “On the Morality of Artificial Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
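
The distinction between unmodifiable and fully modifiable control tables can be sketched in code. The toy agent below is not from the paper; the state-action table, the learn method, and the use of a read-only mapping are invented to illustrate the LoA2 (designer view) question of whether the agent can rewrite its own table.

```python
from types import MappingProxyType

class TableAgent:
    """Toy agent driven by a table mapping perceived states to actions."""

    def __init__(self, table, modifiable):
        # An unmodifiable table is frozen: the designer fixed every response.
        self.table = dict(table) if modifiable else MappingProxyType(dict(table))
        self.modifiable = modifiable

    def act(self, state):
        return self.table.get(state, "do_nothing")

    def learn(self, state, new_action):
        # Only a (fully) modifiable agent can change its own state-action mapping.
        if not self.modifiable:
            raise PermissionError("table fixed at design time; behavior traces to the designer")
        self.table[state] = new_action

fixed_agent = TableAgent({"obstacle": "turn_left"}, modifiable=False)
adaptive_agent = TableAgent({"obstacle": "turn_left"}, modifiable=True)
adaptive_agent.learn("obstacle", "reverse")
print(fixed_agent.act("obstacle"), adaptive_agent.act("obstacle"))  # turn_left reverse
```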


Ethics and Information Technology | 2008

Un-making artificial moral agents

Deborah G. Johnson; Keith W. Miller

Floridi and Sanders' seminal work, “On the Morality of Artificial Agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.

Collaboration


Dive into Keith W. Miller's collaborations.

Top Co-Authors

Jeffrey M. Voas (National Institute of Standards and Technology)
Marty J. Wolf (Bemidji State University)
Don Gotterbarn (East Tennessee State University)
Deborah G. Johnson (Rensselaer Polytechnic Institute)
Phillip A. Laplante (Pennsylvania State University)
C. Dianne Martin (George Washington University)
David K. Larson (University of Illinois at Springfield)