Publication


Featured research published by Barton P. Miller.


IEEE Computer | 1995

The Paradyn parallel performance measurement tool

Barton P. Miller; M.D. Callaghan; Jonathan M. Cargille; Jeffrey K. Hollingsworth; R.B. Irvin; Karen L. Karavanic; Krishna Kunchithapadam; Tia Newhall

Paradyn is a tool for measuring the performance of large-scale parallel programs. Our goal in designing a new performance tool was to provide detailed, flexible performance information without incurring the space (and time) overhead typically associated with trace-based tools. Paradyn achieves this goal by dynamically instrumenting the application and automatically controlling this instrumentation in search of performance problems. Dynamic instrumentation lets us defer insertion until the moment it is needed (and remove it when it is no longer needed); Paradyn's Performance Consultant decides when and where to insert instrumentation.
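
The deferred-insertion idea is easy to picture in miniature. Below is a minimal sketch that assumes nothing about Paradyn's actual implementation: a function pointer stands in for a patched instrumentation point, so a probe can be installed only while a measurement is wanted and removed afterwards. Real dynamic instrumentation rewrites machine code in the running process; all names here (probe_t, counting_probe, and so on) are illustrative.

    #include <stdio.h>

    typedef void (*probe_t)(void);

    static void no_probe(void) { /* uninstrumented: near-zero overhead */ }

    static unsigned long call_count = 0;
    static void counting_probe(void) { call_count++; }

    /* The "instrumentation point" at the entry of the measured function. */
    static probe_t entry_probe = no_probe;

    static void hot_function(void) {
        entry_probe();        /* patched site: dispatches to the current probe */
        /* ... real work ... */
    }

    int main(void) {
        for (int i = 0; i < 1000; i++) hot_function();  /* uninstrumented */

        entry_probe = counting_probe;                   /* "insert" the probe */
        for (int i = 0; i < 250; i++) hot_function();
        entry_probe = no_probe;                         /* "remove" it again */

        printf("calls observed while instrumented: %lu\n", call_count); /* 250 */
        return 0;
    }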


Communications of the ACM | 1990

An empirical study of the reliability of UNIX utilities

Barton P. Miller; Louis Fredriksen; Bryan So

The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.
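
The core of the fuzz tool is simply a stream of random characters piped into a utility under test. Here is a minimal sketch in that spirit; the option handling and byte distribution are illustrative assumptions, not a reconstruction of the actual fuzz program (which also supports printable-only output, NUL suppression, and other modes).

    #include <stdio.h>
    #include <stdlib.h>

    /* Usage: ./fuzz <nbytes> <seed> | some_utility */
    int main(int argc, char **argv) {
        long n = (argc > 1) ? atol(argv[1]) : 1000;              /* bytes to emit */
        unsigned seed = (argc > 2) ? (unsigned)atol(argv[2]) : 1; /* reproducible */
        srand(seed);
        for (long i = 0; i < n; i++)
            putchar(rand() % 256);   /* any byte value, including NUL */
        return 0;
    }

A fixed seed makes a crashing input reproducible, which matters once a bug has to be isolated and classified.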


ACM Letters on Programming Languages and Systems | 1992

What are race conditions?: Some issues and formalizations

Robert H. B. Netzer; Barton P. Miller

In shared-memory parallel programs that use explicit synchronization, race conditions result when accesses to shared memory are not properly synchronized. Race conditions are often considered to be manifestations of bugs, since their presence can cause the program to behave unexpectedly. Unfortunately, there has been little agreement in the literature as to precisely what constitutes a race condition. Two different notions have been implicitly considered: one pertaining to programs intended to be deterministic (which we call general races) and the other to nondeterministic programs containing critical sections (which we call data races). However, the differences between general races and data races have not yet been recognized. This paper examines these differences by characterizing races using a formal model and exploring their properties. We show that two variations of each type of race exist: feasible general races and data races capture the intuitive notions desired for debugging, while apparent races capture less accurate notions implicitly assumed by most dynamic race detection methods. We also show that locating feasible races is an NP-hard problem, implying that only the apparent races, which are approximations to feasible races, can be detected in practice. The complexity of dynamically locating apparent races depends on the type of synchronization used by the program. Apparent races can be exhaustively located efficiently only for weak types of synchronization that are incapable of implementing mutual exclusion. This result has important implications since we argue that debugging general races requires exhaustive race detection and is inherently harder than debugging data races (which requires only partial race detection). Programs containing data races can therefore be efficiently debugged by locating certain easily identifiable races. In contrast, programs containing general races require more complex debugging techniques.
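
A data race in this sense is easy to exhibit. The following self-contained pthreads program (an illustrative example, not from the paper) contains a feasible data race: nothing orders the two threads' accesses to the shared counter, so the final value depends on the interleaving. Compile with -pthread; a dynamic detector such as ThreadSanitizer (-fsanitize=thread) reports the race.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;          /* shared, unprotected */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                /* racy read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Often prints less than 200000: the lost updates are the race. */
        printf("counter = %ld\n", counter);
        return 0;
    }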


IEEE Transactions on Parallel and Distributed Systems | 1990

IPS-2: the second generation of a parallel program measurement system

Barton P. Miller; Morgan Clark; Jeffrey K. Hollingsworth; Steven Kierstead; Sek-See Lim; Timothy Torzewski

IPS, a performance measurement system for parallel and distributed programs, is currently running on its second implementation. IPS's model of parallel programs uses knowledge about the semantics of a program's structure to provide two important features. First, IPS provides a large amount of performance data about the execution of a parallel program, and this information is organized so that access to it is easy and intuitive. Second, IPS provides performance analysis techniques that help to guide the programmer automatically to the location of program bottlenecks. The first implementation of IPS was a testbed for the basic design concepts, providing experience with a hierarchical program and measurement model, interactive program analysis, and automatic guidance techniques. It was built on the Charlotte distributed operating system. The second implementation, IPS-2, extends the basic system with new instrumentation techniques, an interactive and graphical user interface, and new automatic guidance analysis techniques. This implementation runs on 4.3BSD UNIX systems on the VAX, DECstation, Sun 4, and Sequent Symmetry multiprocessor.
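
The automatic-guidance feature can be sketched in a few lines: organize metrics hierarchically and repeatedly descend into the most expensive child. The node layout and metric below are illustrative assumptions, not IPS-2's actual data structures.

    #include <stdio.h>

    struct node {
        const char  *name;
        double       cpu_seconds;     /* inclusive time for this subtree */
        struct node *children;
        int          nchildren;
    };

    /* Follow the most expensive child at each level: a crude "guidance" pass. */
    static void guide(const struct node *n, int depth) {
        printf("%*s%s: %.1fs\n", depth * 2, "", n->name, n->cpu_seconds);
        const struct node *worst = NULL;
        for (int i = 0; i < n->nchildren; i++)
            if (!worst || n->children[i].cpu_seconds > worst->cpu_seconds)
                worst = &n->children[i];
        if (worst) guide(worst, depth + 1);
    }

    int main(void) {
        struct node procs[] = {
            { "solve()", 41.0, NULL, 0 },
            { "io()",     7.5, NULL, 0 },
        };
        struct node process = { "process 0", 48.5, procs,    2 };
        struct node program = { "program",   48.5, &process, 1 };
        guide(&program, 0);   /* points at solve() as the bottleneck */
        return 0;
    }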


Conference on High Performance Computing (Supercomputing) | 2003

MRNet: A Software-Based Multicast/Reduction Network for Scalable Tools

Philip C. Roth; Dorian C. Arnold; Barton P. Miller

We present MRNet, a software-based multicast/reduction network for building scalable performance and system administration tools. MRNet supports multiple simultaneous, asynchronous collective communication operations. MRNet is flexible, allowing tool builders to tailor its process network topology to suit their tool's requirements and the underlying system's capabilities. MRNet is extensible, allowing tool builders to incorporate custom data reductions to augment its collection of built-in reductions. We evaluated MRNet in a simple test tool and also integrated it into an existing, real-world performance tool with up to 512 tool back-ends. In the real-world tool, we used MRNet not only for multicast and simple data reductions but also with custom histogram and clock skew detection reductions. In our experiments, the MRNet-based tools showed significantly better performance than the tools without MRNet for average message latency and throughput, overall tool start-up latency, and performance data processing throughput.
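
The payoff of the reduction network is that the front end sees aggregated data rather than one message per back-end. Below is a minimal sketch of that idea, with an in-memory tree standing in for MRNet's process network and a sum standing in for its built-in and custom reductions; the names and layout are illustrative, not MRNet's API.

    #include <stdio.h>

    struct tnode {
        double        value;          /* leaf (back-end) measurement */
        struct tnode *children;
        int           nchildren;
    };

    /* Values flow up the tree; each internal node reduces before forwarding,
     * so the root touches O(fanout) data instead of O(back-ends). */
    static double reduce_sum(const struct tnode *n) {
        if (n->nchildren == 0) return n->value;   /* back-end leaf */
        double acc = 0.0;
        for (int i = 0; i < n->nchildren; i++)
            acc += reduce_sum(&n->children[i]);
        return acc;
    }

    int main(void) {
        struct tnode leaves[4]   = { {1.0, NULL, 0}, {2.0, NULL, 0},
                                     {3.0, NULL, 0}, {4.0, NULL, 0} };
        struct tnode internal[2] = { {0, &leaves[0], 2}, {0, &leaves[2], 2} };
        struct tnode root        = { 0, internal, 2 };
        printf("reduced at front end: %.1f\n", reduce_sum(&root));  /* 10.0 */
        return 0;
    }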


ACM Transactions on Programming Languages and Systems | 1991

Techniques for debugging parallel programs with flowback analysis

Jong-Deok Choi; Barton P. Miller; Robert H. B. Netzer

Flowback analysis is a powerful technique for debugging programs. It allows the programmer to examine dynamic dependences in a program's execution history without having to re-execute the program. The goal is to present to the programmer a graphical view of the dynamic program dependences. We are building a system, called PPD, that performs flowback analysis while keeping the execution time overhead low. We also extend the semantics of flowback analysis to parallel programs. This paper describes details of the graphs and algorithms needed to implement efficient flowback analysis for parallel programs. Execution time overhead is kept low by recording only a small amount of trace during a program's execution. We use semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, PPD uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. Parallel programs have been accommodated in two ways. First, the flowback dependences can span process boundaries; i.e., the most recent modification to a variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program. Second, our algorithms will detect potential data race conditions in the access to shared variables. The programmer can be directed to the cause of the race condition. PPD is currently being implemented for the C programming language on a Sequent Symmetry shared-memory multiprocessor. Index terms: debugging, parallel programs, flowback analysis, incremental tracing, semantic analysis, program dependence graphs.
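
The record that flowback analysis consults can be sketched very simply: log each write together with the event that produced it, and a later read can then be traced to its most recent writer without re-running the program. The single-variable log below is an illustrative toy, not PPD's dependence graphs or incremental tracing machinery.

    #include <stdio.h>

    #define MAX_EVENTS 16

    struct write_event { int event_id; const char *site; int value; };

    static struct write_event write_log[MAX_EVENTS];
    static int nwrites = 0;
    static int next_event_id = 1;

    /* Every write to the traced variable is logged with its source location. */
    static void logged_write(const char *site, int value) {
        write_log[nwrites++] = (struct write_event){ next_event_id++, site, value };
    }

    /* "Flow back" from a read: report the write the current value came from. */
    static void flowback(void) {
        struct write_event *w = &write_log[nwrites - 1];
        printf("value %d flows from event %d at %s\n",
               w->value, w->event_id, w->site);
    }

    int main(void) {
        logged_write("init():   x = 0",  0);
        logged_write("update(): x = 42", 42);
        flowback();   /* traced to event 2 in update(), with no re-execution */
        return 0;
    }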


Programming Language Design and Implementation | 1988

A mechanism for efficient debugging of parallel programs

Barton P. Miller; Jong-Deok Choi

This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). We describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. We introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes flowback analysis practical with only a small amount of trace generated during execution. We extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the cooperating processes.


ACM/IEEE International Conference on Mobile Computing and Networking | 2002

Reliable network connections

Victor C. Zandy; Barton P. Miller

We present two systems, reliable sockets (rocks) and reliable packets (racks), that provide transparent network connection mobility using only user-level mechanisms. Each system can detect a connection failure within seconds of its occurrence, preserve the endpoint of a failed connection in a suspended state for an arbitrary period of time, and automatically reconnect, even when one end of the connection changes IP address, with correct recovery of in-flight data. To allow rocks and racks to interoperate with ordinary clients and servers, we introduce a general user-level Enhancement Detection Protocol that enables the remote detection of rocks and racks, or any other socket enhancement system, but does not affect applications that use ordinary sockets. Rocks and racks provide the same functionality but have different implementation models: rocks intercept and modify the behavior of the sockets API by using an interposed library, while racks use a packet filter to intercept and modify the packets exchanged over a connection. Rocks and racks introduce small throughput and latency overheads that we deem acceptable for the level of mobility and reliability they provide.
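
The interposition model used by rocks can be sketched with a standard LD_PRELOAD shim: a shared library defines send(), forwards to the real libc implementation via dlsym(RTLD_NEXT, ...), and gains a hook where in-flight data could be buffered for replay after reconnection. The sketch below only counts bytes; the real rocks library also intercepts connect(), recv(), close(), and the rest of the sockets API.

    /* gcc -shared -fPIC -o shim.so shim.c -ldl
     * LD_PRELOAD=./shim.so ./some_client */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    typedef ssize_t (*send_fn)(int, const void *, size_t, int);
    static send_fn real_send;
    static size_t bytes_in_flight;   /* stand-in for a replay buffer */

    ssize_t send(int fd, const void *buf, size_t len, int flags) {
        if (!real_send)
            real_send = (send_fn)dlsym(RTLD_NEXT, "send"); /* libc's send */
        ssize_t n = real_send(fd, buf, len, flags);
        if (n > 0) {
            bytes_in_flight += (size_t)n;  /* a real shim would copy the data */
            fprintf(stderr, "shim: %zd bytes sent (%zu total)\n",
                    n, bytes_in_flight);
        }
        return n;
    }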


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1991

Improving the accuracy of data race detection

Robert H. B. Netzer; Barton P. Miller

For shared-memory parallel programs that use explicit synchronization, data race detection is an important part of debugging. A data race exists when concurrently executing sections of code access common shared variables. In programs intended to be data race free, such races are sources of nondeterminism and are usually considered bugs. Previous methods for detecting data races in executions of parallel programs can determine when races occurred, but can report many data races that are artifacts of others and not direct manifestations of program bugs. Artifacts exist because some races can cause others and can also make false races appear real. Such artifacts can overwhelm the programmer with information irrelevant for debugging. This paper presents results showing how to identify nonartifact data races by validation and ordering. Data race validation attempts to determine which races involve events that either did execute concurrently or could have (called feasible data races). We show how each detected race can either be guaranteed feasible, or, when insufficient information is available, sets of races can be identified within which at least one is guaranteed feasible. Data race ordering attempts to identify races that did not occur only as a result of others. Data races can be partitioned so that it is known whether a race in one partition may have affected a race in another. The first partitions are guaranteed to contain at least one feasible data race that is not an artifact of any kind. By combining validation and ordering, the programmer can be directed to those data races that should be investigated first for debugging.
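
The ordering question underneath all of this, whether one access happened before another or the two were concurrent, is commonly answered with vector clocks (one counter per thread, compared pointwise). A minimal sketch of that check follows; the two-event setup is illustrative, and the paper's validation and ordering analyses build on such a test rather than reducing to it.

    #include <stdbool.h>
    #include <stdio.h>

    #define NTHREADS 2

    /* a happens before b iff a's clock is <= b's in every component
     * and strictly less in at least one */
    static bool happens_before(const int a[NTHREADS], const int b[NTHREADS]) {
        bool strictly_less = false;
        for (int i = 0; i < NTHREADS; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictly_less = true;
        }
        return strictly_less;
    }

    int main(void) {
        int write_by_t0[NTHREADS] = { 3, 0 };  /* clock at t0's write */
        int read_by_t1[NTHREADS]  = { 0, 2 };  /* clock at t1's read  */
        bool ordered = happens_before(write_by_t0, read_by_t1) ||
                       happens_before(read_by_t1, write_by_t0);
        printf(ordered ? "ordered: no race\n"
                       : "unordered accesses: apparent data race\n");
        return 0;
    }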


IEEE Symposium on Security and Privacy | 2004

Formalizing sensitivity in static analysis for intrusion detection

Henry Hanping Feng; Jonathon T. Giffin; Yong Huang; Somesh Jha; Wenke Lee; Barton P. Miller

A key function of a host-based intrusion detection system is to monitor program execution. Models constructed using static analysis have the highly desirable feature that they do not produce false alarms; however, they may still miss attacks. Prior work has shown a trade-off between efficiency and precision. In particular, the more accurate models based upon pushdown automata (PDA) are very inefficient to operate due to non-determinism in stack activity. In this paper, we present techniques for determinizing PDA models. We first provide a formal analysis framework of PDA models and introduce the concepts of determinism and stack-determinism. We then present the VP-Static model, which achieves determinism by extracting information about stack activity of the program, and the Dyck model, which achieves stack-determinism by transforming the program and inserting code to expose program state. Our results show that in run-time monitoring, our models slow execution of our test programs by 1% to 135%. This shows that reasonable efficiency need not be sacrificed for model precision. We also compare the two models and discover that deterministic PDA are more efficient, although stack-deterministic PDA require less memory.
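
What monitoring execution against a statically built model means can be shown with a toy automaton over system calls: the monitor advances a state machine on each call and raises an alarm on anything the model does not allow. A plain DFA is used below for brevity; the paper's PDA and Dyck models additionally track call/return (stack) behavior. The alphabet and transition table are made up for the example.

    #include <stdio.h>

    enum sc { SC_OPEN, SC_READ, SC_CLOSE, SC_EXEC, SC_COUNT };
    static const char *names[SC_COUNT] = { "open", "read", "close", "exec" };

    /* next_state[state][syscall]: -1 means "not allowed by the model" */
    static const int next_state[2][SC_COUNT] = {
        /* state 0: no file open */ {  1, -1, -1, -1 },
        /* state 1: file is open */ { -1,  1,  0, -1 },  /* exec never allowed */
    };

    int main(void) {
        enum sc trace[] = { SC_OPEN, SC_READ, SC_READ, SC_EXEC };
        int state = 0;
        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
            int s = next_state[state][trace[i]];
            if (s < 0) {
                printf("alarm: unexpected %s\n", names[trace[i]]);
                return 1;
            }
            state = s;
        }
        printf("trace accepted\n");
        return 0;
    }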

Collaboration


Dive into Barton P. Miller's collaborations.

Top Co-Authors

Somesh Jha
University of Wisconsin-Madison

Martin Schulz
Lawrence Livermore National Laboratory

Dong H. Ahn
Lawrence Livermore National Laboratory

Gregory L. Lee
Lawrence Livermore National Laboratory

Philip C. Roth
Oak Ridge National Laboratory

Andrew R. Bernat
University of Wisconsin-Madison